Random neuronal ensembles can inherently do context-dependent coarse conjunctive encoding of input stimuli without any specific training

Conjunctive encoding of inputs has been hypothesized to be a key feature of the computational capabilities of the brain. This has been inferred from behavioral studies and electrophysiological recordings from animals. In this report, we show that random neuronal ensembles grown on a multi-electrode array perform coarse-conjunctive encoding of a sequence of inputs, with the first input setting the context. Such an encoding scheme creates similar yet unique population codes at the output of the ensemble for related input sequences, which can then be decoded via a simple perceptron and hence by a single STDP neuron layer. The random neuronal ensembles allow for pattern generalization and novel sequence classification without needing any specific learning or training of the ensemble. Such a representation of the inputs as population codes of neuronal ensemble outputs has inherent redundancy and is suitable for further decoding even via probabilistic/random connections to subsequent neuronal layers. We reproduce this behavior in a mathematical model to show that a random neuronal network with a mix of excitatory and inhibitory neurons and sufficient connectivity creates similar coarse-conjunctive encoding of input sequences.

Pattern or sequence recognition and classification is a well-studied problem in engineering that uses biologically inspired architectures like artificial neural networks and, more recently, deep learning networks, which have shown promising results in solving such tasks. However, the learning algorithms adopted by these architectures require multiple iterations and modifications of the connectivity weights across all layers of the network. The existence of similar multi-layered learning in biological neuronal networks for efficient processing and classification of input stimuli has not yet been observed experimentally. An alternative learning architecture is to have a random neuronal ensemble with a mix of inhibitory and excitatory neurons that is then connected, in a probabilistic manner, to another layer of perceptron-type neurons, with learning restricted to the final perceptron layer. We describe this further in the schematic in Fig. 1, where a layered neuronal system with probabilistic connectivity at the input and output of the first layer is connected to a second layer of neurons equipped with STDP, to solve the problem of input classification without any need for network modification/learning at the input layer. We experimentally validate this architecture by using neuronal ensembles cultured on a multi-electrode array to form the first layer of Fig. 1. The multi-electrode array allows us to create complex spatio-temporal input stimulation patterns that are encoded by the neuronal tissue and observed as responses at the electrodes for further analysis. We show, through modeling and by fitting experimental data, that probabilistic connections and a layered architecture as in Fig. 1 can provide a very robust platform to implement context-dependent classification. Our data and results show the presence and usefulness of coarse-conjunctive tuning of neurons in such cultured networks.

Figure 1 (caption, partial). These are connected to the next layer L2 probabilistically. (c) Such probabilistic connections give rise to coarse conjunctive neurons. As an example, Neuron 1 in L2 receives inputs from neurons coding for square, circle, red and blue and activates for the cases red square, blue square and red circle. With several such neurons in L2, a population code is formed. This is highlighted in (f). When a red square is presented, neurons 1 and 6 are activated (say population code [1,6]), while for a blue circle the population code is [5,6]. These codes are linearly separable (when considered as binary vectors in 6 dimensions). With such linearly separable codes, a single neuron in layer 3 (we use a perceptron as a proxy) can learn to decode any one of the unique population codes using an STDP mechanism. Even if the connection between layer 2 and layer 3 is probabilistic, as in (d), the code as seen by the perceptron is unique and linearly separable. For example, suppose a neuron in L3 does not receive a connection from Neuron 1 in L2; the population code as seen by it (as shown in (g)) is still unique for each pattern and it can decode the pattern. With further reduction in the probability of connection (50%), as in (h), the population code is no longer unique.

Cells in EC conjunctively encode position and head position information 11. Different face features decoded from single-neuron recordings in IT show coarse tuning of neurons 9. Firing of hippocampal cells which encode the spatial map also correlates to task events 10. Such coarse coding is also suitable for function approximation and generalization by an artificial neural network 12. Modeling studies suggest that combinations of features in the stimulus input can be distinguished by a distribution of activation across many neurons. It is also conceivable that the output from many coarse conjunctive neurons converges to one or a few 'output' neurons that in turn control behavior. In the mushroom body of the common fruitfly Drosophila melanogaster, structural layers of the kind illustrated in Fig. 1 exist. Output from ~2000 third-order Kenyon cells that encode odour stimuli converges on ~21 structurally distinct olfactory bulb output neurons (OBONs) 13, and suppression of a single pair of OBONs regulates aversive memory associations 14. However, the way information is encoded and decoded across different layers before it converges on the output neuron is not known. Neuronal cultures on multi-electrode arrays have been previously used to study neuronal networks. The ability to train neuronal cultures has been studied 15. Different groups have used such cultures to demonstrate processing of spatio-temporal stimuli [16][17][18][19][20][21][22][23]. They have been used as a model to study the network basis of neurological disorders and, recently, to study the role of neurotransmitters in neuronal network dynamics [24][25][26]. Their activity has been modeled using connectivity maps and hidden Markov models 27. They have been used to construct simple computational systems. However, such systems have not been used to test different hypotheses about network architectures for computing using neuronal circuits. In this study, we have attempted to understand how coarse encoding arises and how features related to the input are encoded by a distributed system of randomly connected neurons, using neurons cultured on multi-electrode arrays.
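To make the population-code argument of Fig. 1 concrete, the following minimal sketch (Python, illustrative only; the stimulus set and the third code [1,5] are assumptions, not taken from the figure) builds the 6-dimensional binary codes and checks that they remain distinct when a downstream neuron loses its connection to one L2 neuron, as in panel (g).

```python
# Minimal sketch of the Fig. 1 population-code idea: each stimulus activates a
# subset of L2 neurons, giving a binary code, and the code can stay unique even
# when a downstream neuron misses some connections.
import numpy as np

N_L2 = 6  # number of L2 neurons in the 6-dimensional example

# Hypothetical population codes: indices of active L2 neurons per stimulus.
codes = {
    "red square":  [1, 6],
    "blue circle": [5, 6],
    "red circle":  [1, 5],   # assumed for illustration only
}

def to_binary(active, n=N_L2):
    """Binary vector of length n with ones at the active (1-based) indices."""
    v = np.zeros(n, dtype=int)
    v[np.array(active) - 1] = 1
    return v

full = {name: to_binary(idx) for name, idx in codes.items()}

# Partial connectivity: a downstream neuron that never sees L2 neuron 1
# (as in panel g) still receives distinct codes for the three stimuli.
visible = [2, 3, 4, 5, 6]                      # 1-based indices it is wired to
partial = {name: v[np.array(visible) - 1] for name, v in full.items()}

unique_full = len({tuple(v) for v in full.values()})
unique_partial = len({tuple(v) for v in partial.values()})
print("distinct codes, all 6 units :", unique_full)      # 3
print("distinct codes, unit 1 lost :", unique_partial)   # still 3 here
```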
First, we show that responses from a neuronal ensemble grown on a multi-electrode array exhibit coarse-conjunctive encoding of multiple spatio-temporal inputs, and then demonstrate their ability to do context-dependent encoding, which can be decoded/classified robustly using 'perceptrons' as a proxy for the output neuron shown in layer 3 (L3) of Fig. 1. The inputs are paired electrical stimuli at different spatial locations, in different combinations separated by a time interval (spatio-temporal patterns), whose physical parameters were fixed, much like the sensory stimuli in cognition experiments where the perception of sensory stimuli with fixed physical features is studied for context dependency. The results show that neuronal ensembles with probabilistic 'random' connectivity can inherently do coarse-conjunctive encoding without any specific learning or training. We discuss the relevance of such an architecture, where an interplay of random connectivity and a layered architecture simplifies pattern classification tasks.

Methods

Neuronal culture growth and maintenance followed standard procedures 22,28. Briefly, dissociated neuronal cell cultures were prepared from the hippocampus of 0-2 day old Wistar rat pups on 120-electrode MEAs from MultiChannel Systems. Micro-dissected hippocampus was digested in papain solution and plated on the electrode region of the MEA coated with laminin. The dishes were flooded with 1 ml of medium after the cells had adhered to the substrate, and stored with ethylene-propylene membrane lids in a 65% RH incubator at 37 °C, 5% CO2. We used antibiotic/antimycotic drugs to control contamination. Feedings consisted of 50% medium replacement twice per week. The medium included glial conditioning (ara-C) after 7 days. The culture dish was placed in a separate incubator which maintained an ambient of 5% CO2 at 37 °C during recordings and stimulations.

Recording and Stimulation. We used the MEA-2100 System from MultiChannel Systems, Germany, for recording from and stimulating the cultures grown on the MEA. The hardware was used to record signals from 120 channels simultaneously at 50 kHz and to generate stimulus pulses at all electrodes under software control.

Analysis. The data was acquired from the device using MATLAB. Spike detection was done on the acquired data for further processing. This required filtering, artifact suppression and appropriate threshold-crossing detection, which was done on-line using MATLAB. The threshold for each electrode was estimated as 5x the standard deviation (estimated using median values) and was applied to the absolute value of the signal. For electrical stimulation we chose parameters which have been shown to be effective in previous studies 29. For each stimulus we used a bi-phasic voltage pulse of amplitude 500 mV and a pulse width of 500 µs in each phase.

Experimental Protocols. Input Patterns. A spatio-temporal input coding strategy was adopted 22.

Output decoding. We defined the output vector from the culture for each pattern as a 120-element binary vector indicating the occurrence of a spike in a 100 ms post-stimulus window, $X_{jk} = [s^{1}_{jk}, s^{2}_{jk}, \ldots, s^{120}_{jk}]$, where $X_{jk}$ is the output pattern of the culture for the k-th presentation of input pattern j. Here $s^{M}_{jk}$ is the spike occurrence indicator for electrode M and is defined as $s^{M}_{jk} = 1$ if at least one spike occurs in the time window 5 ms to 100 ms after the j-th input pattern is presented to the culture for the k-th time, and 0 otherwise.
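The following is a minimal sketch of the spike-detection and output-decoding steps described above. The paper's pipeline was implemented in MATLAB; this Python version is illustrative, and the median-based noise estimate sigma = median(|x|)/0.6745 is a common choice assumed here rather than the paper's exact estimator.

```python
# Minimal sketch of 5x-SD threshold crossing detection and the 120-element
# binary output vector X_jk described above (illustrative, not the authors' code).
import numpy as np

FS = 50_000  # sampling rate, Hz (50 kHz as in the recording setup)

def detect_spikes(trace, fs=FS):
    """Return times (s) where |signal| crosses 5x the estimated noise SD."""
    sigma = np.median(np.abs(trace)) / 0.6745     # robust, median-based SD estimate (assumed)
    above = np.abs(trace) > 5.0 * sigma
    # keep only threshold *crossings*, not every sample above threshold
    crossings = np.flatnonzero(above & ~np.roll(above, 1))
    return crossings / fs

def output_vector(spike_times_per_electrode, stim_time, n_electrodes=120,
                  window=(0.005, 0.100)):
    """Binary vector X_jk: 1 if electrode M spikes 5-100 ms after the stimulus."""
    x = np.zeros(n_electrodes, dtype=int)
    for m, times in enumerate(spike_times_per_electrode):
        t = np.asarray(times) - stim_time
        x[m] = int(np.any((t >= window[0]) & (t <= window[1])))
    return x

# toy usage with synthetic noise plus one injected "spike"
rng = np.random.default_rng(0)
trace = rng.normal(0, 10e-6, FS)      # 1 s of noise, ~10 uV SD
trace[25_000] = 200e-6                # artificial spike at t = 0.5 s
print(detect_spikes(trace))           # expected: the injected spike near 0.5 s
```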
A perceptron is a simple processing element which computes a weighted sum of its inputs and generates a binary (1/0) output depending on whether the sum is greater than a threshold value. It can be described by the expression $O_{jk} = 1$ if $W_j \cdot X_{jk} > \theta$, and $O_{jk} = 0$ otherwise, where $O_{jk}$ is the output of perceptron j with weight vector $W_j$ for the k-th presentation of input pattern j. The weight vector describes a hyperplane which separates the set of outputs which the perceptron is trained to identify from the rest. This set of weights is learned using the perceptron training algorithm, the delta rule 30. The decoder is an array of such perceptrons which can be used to assign a class to an output vector.

Results

We stimulated the culture with 56 spatio-temporal input patterns and recorded the responses. These were generated using 8 electrodes (labeled A, B, C, D, ..., H), paired two at a time with a time delay of 0.5 ms 22. We defined the first electrode to be stimulated as setting the 'context' in which subsequent stimuli are processed. We then looked at responses at each electrode for these patterns and found them to be coarsely tuned, with multiple electrodes responding probabilistically to the 56 input patterns (Fig. 2c). With an array of perceptrons, we were able to classify the output codes, which showed them to be linearly separable 22 (Fig. 4a). This method has been shown to be equivalent to other classification methods like logistic regression 31. Figure 3b shows coarse tuning at two electrodes with responses to multiple input patterns. The output response from a single electrode (responses represented by blue dots in Fig. 3b) cannot distinguish the different input patterns (DF, DG, DH, HA, HB, HC). The probability of the responses at a single electrode shows conjunctive and disjunctive behavior based on the timing and order of inputs, as a result of excitatory and inhibitory connections from the inputs (Fig. 3a). Looking at all the electrodes, we found that a significant number of electrodes show this kind of response, leading to distinct population codes (inferred based on them being linearly classifiable by perceptrons). The input patterns become distinguishable as small clusters upon increasing the number of output electrodes (as in the example demonstrated in Fig. 3c). A minimum number of output electrodes is thus necessary to separate the input patterns. With 120 electrodes the input patterns were separable and classifiable. Thus coarse-conjunctive coding results in unique population codes. The paired input patterns could be grouped by the first electrode stimulated. In order to study context-dependent grouping of the inputs, we left out one of the patterns within each group where the first stimulus sets the context (e.g. AH in the group [AB, AC, AD, AE, AF, AG, AH]) and trained the output perceptrons. To be able to identify the group of an input, we had 8 perceptrons. Each perceptron was trained to respond to the presentation of a pattern belonging to a particular input group. During training, one set of patterns was randomly left out (say [AH, BH, CH, DH, EH, FH, GH]). After training, to check the ability of the perceptrons to identify a novel input pattern from the network response, we presented the pattern (say AH) to all the perceptrons and evaluated their responses. Then, using a winner-take-all strategy, the pattern was assigned to the group corresponding to the perceptron that showed the highest activation (in the example, this should be A*).
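A minimal sketch of the perceptron decoder described above, assuming a plain delta-rule update and a winner-take-all readout over one perceptron per group; the data are random stand-ins, not recorded responses.

```python
# Minimal sketch (assumed implementation, not the authors' code) of the
# perceptron decoder: O_jk = 1 if W_j . X_jk > theta, trained with the delta
# rule, with one perceptron per group and a winner-take-all readout.
import numpy as np

class Perceptron:
    def __init__(self, n_inputs, lr=0.1, theta=0.0):
        self.w = np.zeros(n_inputs)
        self.lr, self.theta = lr, theta

    def activation(self, x):
        return float(self.w @ x)                 # used for winner-take-all

    def predict(self, x):
        return int(self.activation(x) > self.theta)

    def train(self, X, y, epochs=50):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                self.w += self.lr * (yi - self.predict(xi)) * xi   # delta rule

def winner_take_all(perceptrons, x):
    """Assign x to the group whose perceptron has the highest activation."""
    return int(np.argmax([p.activation(x) for p in perceptrons]))

# toy usage: 8 group-perceptrons over 120-element binary output vectors
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(56, 120)).astype(float)   # stand-in response vectors
groups = np.repeat(np.arange(8), 7)                    # 8 groups x 7 patterns
perceptrons = [Perceptron(120) for _ in range(8)]
for g, p in enumerate(perceptrons):
    p.train(X, (groups == g).astype(float))
print(winner_take_all(perceptrons, X[0]))              # group index assigned to pattern 0
```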
Figure 3 (caption, partial). (b) Responses at two electrodes showing a coarse tuning response (relative size of the circle indicates the probability of response, with the response to AH having a probability of 1 at both electrodes) to various stimulus patterns. The response is not specific to a particular electrode or a particular pattern. (c) Coarse coding generates distinct codes for different patterns. This shows how six electrodes (selected using the Fisher Discriminant Ratio) create unique codes for different groups of input patterns. Each dot corresponds to the probability of firing observed at these electrodes for different stimuli. Consider the input patterns corresponding to the dots RED (R) and GREEN (G). When only electrodes [E1, E2, E3] are used for decoding (LHS), the coordinate generated for R is [0, 1, 1]. This is true for the green dot as well, and these two patterns cannot be distinguished. However, when [E4, E5, E6] is also used, the combined coordinate ([E1, E2, E3, E4, E5, E6]) generated for R will be [0, 1, 1, 0, 1, 0], whereas for G it is [0, 1, 1, 1, 1, 0], and the two are now linearly separable. Thus, with a sufficient number of electrodes, unique descriptions/coordinates are created for every pattern. This is illustrated conceptually in Fig. 1(f-h).

This was repeated for the other left-out patterns (BH, CH, etc., 45 samples each) and the classification accuracy for each of these was noted. If 80% of the samples of a pattern were correctly grouped, we say that the perceptron layer was able to identify the novel group correctly. The number of such groups was noted. This was repeated by leaving out other sets of patterns (say [AG, BG, CG, ...], [AF, BF, CF, ...]) and a similar analysis was performed. Figure 4b presents the average number of correct groups thus identified by each culture. The fact that the output generated by a pattern AH was grouped into the [A*] group instead of the [H*] group indicates that the network response is strongly influenced by the first stimulus in the sequence rather than by a co-occurrence of A & H. Together with the results that each pattern is distinct (56 patterns were linearly separable and patterns within each group were linearly separable) but can also be grouped (A*, B*, ...) while a novel pattern can still be classified, this shows the ability of the coarse-coded conjunctive scheme in neuronal cultures to create unique descriptions suitable for pattern classification and pattern generalization. The ability of the perceptron to do this shows that the network dynamics and resulting response are such that a neuron in the next layer is able to group inputs correctly. The results on the ability to correctly classify untrained patterns emphasize that the classification ability is not just due to a mapping to a high dimension; they demonstrate a 'context'-dependent response to the second input and show the inherent network property to generate such responses. To check if the coding is suitable for probabilistic connections between layers as in the brain, we made the connections between the output electrodes and the decoder perceptrons probabilistic and evaluated the classification performance. We mimicked the possible connectivity in a neuronal architecture by randomly connecting a perceptron in the output layer to a fraction of the output electrodes (Fig. 1). The performance was robust and degraded gracefully as the number of connections was reduced (Fig. 4c). The result indicates that the code generated by coarse-conjunctive neurons is distributed enough to allow a neuron randomly connected to a set of neurons in this layer to learn an arbitrary linearly separable function.
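The robustness test of Fig. 4c can be sketched as below, with scikit-learn's Perceptron standing in for the delta-rule decoder and synthetic firing probabilities standing in for the recorded responses; the exact evaluation protocol (train/test split, number of randomizations) follows the spirit, not the letter, of the paper.

```python
# Minimal sketch (assumptions noted in comments) of the Fig. 4c robustness test:
# each decoder perceptron is wired to only a random fraction of the 120 output
# electrodes, and classification is re-evaluated as that fraction shrinks.
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(2)
n_electrodes, n_patterns, n_reps = 120, 56, 45
# stand-in data: per-pattern firing probabilities -> binary response samples
probs = rng.uniform(0, 1, size=(n_patterns, n_electrodes))
X = rng.uniform(0, 1, size=(n_patterns * n_reps, n_electrodes)) < np.repeat(probs, n_reps, axis=0)
y = np.repeat(np.arange(n_patterns), n_reps)

def accuracy_with_fraction(fraction):
    """Train/evaluate with the decoder seeing only `fraction` of the electrodes
    (training accuracy, for illustration only)."""
    keep = rng.uniform(size=n_electrodes) < fraction        # random connection mask
    clf = Perceptron(max_iter=1000, tol=1e-3)
    clf.fit(X[:, keep], y)
    return clf.score(X[:, keep], y)

for f in (1.0, 0.5, 0.25, 0.1):
    print(f"fraction of connections {f:.2f}: accuracy {accuracy_with_fraction(f):.2f}")
```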
We created a random network model for a mechanistic description of the stimulus responses to spatio-temporal input patterns from neuronal cultures. We viewed the network as a two-layer network with an input layer consisting of the stimulated neurons and an output layer consisting of neurons directly connected to the input neurons. This allows us to view the cultured network as a layered architecture. Such network structures are used in studying the computational capabilities of neuronal systems, brain-inspired computational frameworks and artificial neural networks. We studied how our experimental setup could mimic these computational models. The membrane and synaptic time constants were constrained biologically. Crucially, this points to the possibility of studying the computational properties and learning capabilities of biological layered networks using cultures grown on multi-electrode arrays, and of investigating whether computations done using artificial neural networks and brain-inspired frameworks can be done using biological systems. The model, as shown in Fig. 5, had 120 neurons, with 80% being excitatory and the rest inhibitory. Each neuron was supposed to mimic an electrode, and we expected this two-layered architecture to explain the observed responses.

Figure 4 (caption, partial). (b) Patterns shown in Fig. 2 were systematically left out and the perceptrons were trained to classify the rest of the inputs into different groups. The height of the bar indicates the average number of such hidden patterns (out of 8) that were correctly classified (with greater than 80% accuracy; chance = 1/8). This indicates that patterns are grouped into linearly separable groups in higher dimensions based on the first electrode stimulated. (c) The coding of the outputs is such that a perceptron connected probabilistically to a fraction of the output electrodes is able to classify the inputs without significant degradation. The curve is averaged over the 11 trials in Fig. 4a. It indicates the reduction in the number of input classes correctly classified with greater than 80% accuracy as the number of connections each perceptron receives is reduced. The blue trace indicates the loss of accuracy when perceptrons are connected randomly to a fraction of the output electrodes (randomized 3 times and the mean number of classes calculated). This loss of accuracy can be seen as illustrated in Fig. 1(f-h).

The connectivity between the neurons in the model was set using two methods. In the first method, we had a global parameter p which defines the probability of connection between any two neurons in the network. We tuned this parameter so that, when the input patterns are applied, the output generated by the network has overall properties similar to the experimental data, such as linear separability, sequence dependence and grouping when paired stimuli are applied. We then analyzed the network generated this way for the number of connections received by each neuron that allows it to mimic the observed behavior of the biological network. This provided further validation of the schema for computation using layered architectures with random connectivity. In the second method, we estimated the functional connectivity and the connection weights between the input and output electrodes in the neuronal culture by fitting the model outputs for different paired stimuli to match the probability of firing of the output electrodes in experiments, using a combination of genetic algorithms and gradient descent. The genetic algorithm tuned whether or not a connection exists between neurons, while the gradient descent tuned the connection weights.
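A minimal sketch of the first connectivity method only (the second, data-fitting method is described just above), under assumed parameter values: a global probability p decides which of the 8 input electrodes project to each of the 120 output neurons, 80% of the inputs are excitatory, and the paired-stimulus firing probability is a sigmoid of the summed drive, in the spirit of the model in Fig. 5c. The sigmoid gain and the delay attenuation constant are illustrative, not fitted values.

```python
# Minimal sketch (an assumed reading of the Fig. 5 model, not the authors' code):
# random input-to-output connectivity with global probability p, 80% excitatory
# inputs, and a sigmoid of the summed synaptic drive as the firing probability.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_inputs, p = 120, 8, 0.1
excitatory = rng.uniform(size=n_inputs) < 0.8          # 80% excitatory inputs
sign = np.where(excitatory, 1.0, -1.0)

# connection exists with probability p, weight drawn uniformly and signed
connected = rng.uniform(size=(n_neurons, n_inputs)) < p
weights = connected * rng.uniform(0, 1, size=(n_neurons, n_inputs)) * sign

def firing_probability(first, second, delay_ms, gain=4.0, tau_ms=5.0):
    """P(spike) at each output neuron for the ordered input pair (first, second)."""
    drive = np.zeros(n_inputs)
    drive[first] = 1.0
    drive[second] = np.exp(-delay_ms / tau_ms)          # later input only partially overlaps
    net = weights @ drive                                # summed EPSP/IPSP-like drive
    return 1.0 / (1.0 + np.exp(-gain * (net - 0.5)))     # sigmoid nonlinearity

# example: stimulate electrode A (index 0) then H (index 7) with a 0.5 ms delay
print(np.round(firing_probability(0, 7, delay_ms=0.5)[:10], 2))
```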
Using this approach, we had a network which had firing probabilities at different electrodes close to the experimental data. The validity of the model-fit was established by using the model to generate output vectors and analyzing them in the same way as experimental data. We then compared the connectivity in this network with that of the network generated by the first method to see whether the number of connections are similar. In the first method, the connection probability is used to manipulate the connectivity, while in the second method the experimental data is used to do so. Since both methods can now recreate the overall experimental results in simulation, we were more confident of the model network explaining the observed behavior and use connection probability as a parameter to further study how connection probability might affect network performance. Using the functional connectivity so obtained, we got further insights about the structure of the network. Figure 6 shows a histogram of number connections between input and output electrodes for a randomly generated connectivity between electrodes and those estimated using fitting the model to the data. They are in agreement to an extent that on an average, an output neuron has a functional interaction with 3 input electrodes for the network in the culture. Also, the higher number of functional connections estimated when the delay between pairing is 0.5 ms indicates that for these networks the dominant cause for generating conjunctive neurons would be through overlap of EPSP's from multiple inputs. We then varied the number of connections between the neurons to see how it affects the classification performance. This is shown in Fig. 7 where the connectivity parameter (p) is varied and the classification and grouping abilities of the model network is studied in the same way as with the biological network. As expected, we found that a minimum degree of random connectivity is required for generating sufficient number of coarse-conjunctive neurons. Interestingly, with the parameter value at 0.1 where the model shows a 100% classification ability for 56 classes, the number of distinct groups possible was around 6 which was similar to the observation across multiple neuronal cultures as presented in Fig. 4. Discussion We discuss the relevance of the above findings in the context of computing mechanisms in the brain. Currently, it is not clear how the functional connectivity in the brain changes and to what degree, in order to learn to perform some action. Also learning of a precise weight at different layers in a neuronal network would be a difficult challenge without accurate feedback signals and would require many repetitions of the training as experienced by researchers working with deep learning networks. It is also not clear how the equivalent error correcting mechanisms would work in a biological neuronal network. Figure 5. Modeling a random neuronal culture to analyze first spike response to a spatio-temporal stimulus pattern. (a) A neuronal network generated with random locations of neurons and distance dependent connection probabilities. Green represent excitatory neurons and red show inhibitory neurons (b) The network viewed as a two layer network after selecting 8 neurons as inputs to analyze first spike response behavior. Connections from input electrodes to a single output neuron is highlighted. Such a partial connectivity is hypothesized to give rise to a coarse-conjunctive population code at the output layer. 
(c) Model assumed for calculating output firing probability for a paired stimulus at an output electrode. Inputs are stimulated with a delay of t d , a weighted sum is calculated to determine excitation at an output electrode and a sigmoid function is used to calculate output firing probability. In the following discussion, we show that with an interplay of random connectivity and a layered structure, neuronal circuits can solve such problems without needing to learn a large number of synaptic weights. We show how our experiments and modeling studies support this hypothesis for neurons cultured on multi-electrode arrays. Linear separability as a key intermediate step for problem solving. Identification of the correct features from the data and transforming the inputs to a linearly separable space has been established as a key intermediate step in problem solving in machine learning. The 'kernel' in the support vector machine based classification, 'hidden layer' in artificial neural networks, the 'random network' in LSM's, all use this same principle. Once the problem has been thus translated, the required arbitrary function to be learned is a linear combination of these outputs by a single neuron obtained by tuning its input synaptic weights, without needing large scale modification of the preceding input network (Fig. 1). Such learning of a linear combination of inputs, has been shown to be theoretically possible for a biological neuron equipped with STDP mechanism 32 . Specifically, classification and pattern recognition can be seen as a special case of thresholding of these linearly combined outputs. In our study, using 56 input patterns we have shown that the output of the neuronal culture shows such a transformation property (Fig. 4a). The output of the culture, which encodes the input stimulus into a higher dimensional representation, are linearly separable via perceptrons, and learn functions like classification, grouping, sequence detection and novel pattern recognition (Fig. 4b). Previously we have shown that such a biological neuronal network in culture on multi-electrode arrays can translate linearly un-separable inputs to a high dimensional linearly separable space 22 . Conjunctive neurons create linearly separable population codes. The generation of linearly separable population codes can be explained using the schema for a hypothetical network shown in Fig. 1. Experimentally, we show that each neuron in the randomly interconnected network shows a conjunctive code (Fig. 3b). This most likely arises out of pairing of excitatory and inhibitory pre-synaptic inputs when two electrodes in the array are stimulated within a time window (Fig. 3a). Both excitatory and inhibitory connections are required for the neurons to show both an increase and decrease in firing probability as a result of pairing. Such connections also allow the neurons to detect the order of firing. Our results on the neuronal culture show that single neurons receiving random connections show 'conjunctive encoding' which are sensitive to electrodes being stimulated, their timing and the order of pairing, akin to the 'conjunctive neurons' demonstrated in vivo. The additional observation of 'disjunctive encoding' suggests the presence of both excitation and inhibition and their importance in the generation of a variety of conjunctive neurons with arbitrary inputs (Fig. 3a). A linearly separable population code can emerge from a sufficient collection of such randomly connected neurons. 
This finding emerges from our analysis of the output data, by using random subsets of output electrode data for classification (Fig. 4). These results emphasize the importance and sufficiency of randomly connected neurons to create such population codes without needing any specific learning/training of these networks (Fig. 7). Neurons show coarse conjunctive coding. The results shown in Fig. 3b indicate that single neurons can show a coarse conjunctive response, i.e., each neuron is responsive to pairing of multiple spatio-temporal inputs. The presence of coarse-conjunctive neurons has been shown in the mammalian brain and its importance and advantages have been highlighted in theoretical studies 2 . Coarse-conjunctive codes makes the encoding of the inputs robust as schematically illustrated for hypothetical network in Fig. 1. With such a code, a larger number of patterns can be represented by the network without needing a conjunctive neuron for every feature in the input space (Fig. 1). A decoding neuron in the final layer (L3), only partially connected to such a population of coarse encoding neurons from preceding layers, can still have sufficient information for decoding. Our analysis with random connectivity between the neuronal culture and output layer perceptron demonstrates this to be true for neuronal cultures on MEA (Fig. 4c). Such a scheme is suitable for structured yet probabilistic connections as found in biological neuronal systems. Coarse conjunctive encoding emerges out of random connections without specific learning. Distinct coarse-conjunctive neurons can emerge out of random connectivity between two layers in a network. Our modeling study inspired by our experimental data shows this to be true (Figs 5 and 6). Our analysis with the model also shows that a minimal connectivity is required for generation of such a code (Fig. 7). Conjunctivity arises due to firing of inputs within a time window and depends on the electrodes from which it receives connections, which can be random. The ability to detect the timing and order of firing depends on the inherent time delays in the circuit and the presence of excitatory and inhibitory connections. Neurons show coarse-conjunctive encoding as it receives inputs from more than two electrodes. Each neuron has a distinct coarse-conjunctive tuning curve due to the random nature of connections. A sufficient number of such connections create a set of neurons which can project the inputs into a high-dimensional linearly separable space. The robust nature of the encoding allows the subsequent layer of neurons with partial connectivity to learn an arbitrary function. An intermediate layer receiving random connectivity from a previous layer generates a robust encoding using coarse conjunctive neurons. As a result of such a code, a perceptron, probabilistically connected to this layer is able to identify the input pattern or a group of inputs. By extension, a neuron equipped with STDP should be able to achieve the same. Significantly, to learn a new class, instead of a large-scale change to all the synaptic weights in the network, only the weights of a single target output neuron connected randomly to the preceding coarse conjunctive neurons, needs to be modified. In conclusion, we have shown that random neuronal networks in a culture, generate coarse-conjunctive outputs and unique population codes that are linearly separable for different input sequences without any specific training of the culture. 
The findings have physiological relevance in giving us some preliminary understanding of how neuronal networks in the brain might sift through information and implicitly classify it intrinsically, via linearly separable, highly redundant, coarse conjunctive encodings of the input stimulus, without needing explicit training/learning at all functional layers during information flow. Such an encoding ability might have a great utilitarian role in simplifying the learning process by requiring modification of only a few final neuronal layers, as opposed to the entire network. However, this conjecture requires further experimental analysis of neuronal recordings from the brain in vivo.
A Near-Real-Time Answer Discovery for Open-Domain With Unanswerable Questions From the Web

With the proliferation of question answering (Q&A) services, studies on building a knowledge base (KB) using various information extraction (IE) methodologies from unstructured data on the Web have received significant attention. Existing IE approaches, including machine reading comprehension (MRC), can find the correct answer to a question if the correct answer exists in the document. However, most are prone to extracting incorrect answers rather than producing no answer when the correct answer does not exist in the given documents. This problem is likely to cause serious real-world problems when we apply such technologies to practical services such as AI speakers. We propose a novel open-domain IE system to alleviate the weaknesses of previous approaches. The proposed system integrates an elaborated document selection, sentence selection, and knowledge extraction ensemble method to obtain high specificity while maintaining a realistically achievable level of precision. Based on this framework, we extract answers to Korean open-domain user queries from unstructured documents collected from multiple Web sources. For evaluating our system, we build a benchmark dataset with the SKTelecom AI Speaker log. The baseline models, the KYLIN infobox generator and BiDAF, were used to evaluate the performance of the proposed approach. The experimental results demonstrate that the proposed method outperforms the baseline models and is practically applicable to real-world services.

I. INTRODUCTION

Formal knowledge bases (KBs), such as the Linked Open Data Cloud (LOD) [1], are used to express and share knowledge by connecting and assigning resources on the Web. The KB is a core element of question answering (Q&A) service systems and is considered an important research subject in the field of artificial intelligence as a technology for storing and searching for answers to a user query. Previous studies on information extraction (IE) can be classified into three types. The first type requires creating an IE rule by an expert in a specific domain and extracting the knowledge when a matching rule pattern is found in the document. Rule-based IE usually exhibits high performance only in specific documents because knowledge is extracted only for specific types of patterns. Consequently, the cost of using domain experts is high, with the burden of continually adding new patterns. The second type extracts information based on supervised machine learning and deep learning models. In this model-based IE, the information is only sufficiently extracted when data have the same structure as the training data. However, for data in a different form, developing capable IE is challenging. The third type involves machine reading comprehension (MRC). In this case, the information is extracted under the assumption that there is a correct answer in the document, such as in the Stanford Question Answering Dataset (SQuAD) [2]. MRC might result in poor performance on unstructured documents on the Web because it cannot guarantee that the retrieved document contains correct answers. IE for KB extension should be capable of dealing with diverse types of documents collected from multiple sources existing on the Web. Therefore, we require a method to decide which source is more reliable than others.
Furthermore, we require a measure to judge whether each retrieved document and each sentence in the document contains correct answers for the subject. In this study, we propose a novel IE system that can respond practically to open-domain queries, including unanswerable questions. The proposed method consists of a suitable document collection step, a sentence classification step, a knowledge extraction step, a post-processing step, and a final ensemble step. We empirically confirmed that our proposed method could extract highly-reliable knowledge. The extracted knowledge is converted into triple form and stored in the KB. The KB constructed in this way can be used for artificial intelligence (AI)-based technology in the future. KYLIN infobox generator [3] and BiDAF [4] MRC models were selected as baselines to verify performance. II. RELATED WORK A. INFORMATION EXTRACTION IE is a technique for automatically extracting information from a large number of structured or unstructured documents for a given user query [5]. For example, given a user query (''Leaning Tower of Pisa,'' ''Height''), you can extract the triple (''Leaning Tower of Pisa,'' ''Height,'' ''55.86m'') from the sentence ''Leaning Tower of Pisa, designed by Italian genius architect Bonano Pisano, is a bell tower of 55.86m high and 16m diameter'' in the ''Leaning Tower of Pisa'' Wikipedia page. The extracted information about the user query may be stored in a KB to be used in the question answering system or may be directly provided to the user as an answer. IE can be classified according to the IE methodology and document type [6]. The two IE methodologies are (1) knowledge engineering and (2) automatic training. The knowledge engineering methodology is a grammatical rule based on domain knowledge, which defines a pattern of IE and extracts information when a sentence is found that matches the pattern. The automatic training methodology generates label data to train a model and extract information into the trained model. The knowledge engineering methodology is suitable for extracting information on known patterns. However, extracting new types of information is challenging and requires the efforts of domain experts [7]. Therefore, the automatic training methodology has been the most extensively studied [8]. The three IE document types are (1) unstructured text, (2) structured text, and (3) semi-structured text. Unstructured documents, which are various documents on the Web, are the primary targets in the field of IE. Information is extracted through natural language processing techniques and rulebased systems. Structured documents are those that have a predefined structured format, and information is extracted through a relatively simple technique. Semi-structured documents are documents that do not have a fixed format (e.g., HTML or tables) and extract information through patterns such as tokens and separators that are appropriate for each situation. Because the Web is composed predominantly of texts, IE research has been used in various ways as the chief technology for discovering knowledge on the Web [6]. In previous studies on IE methodology, Etzioni et al. [9] proposed a Web-scale domain-independent IE methodology KNOWITALL that used an ontology KB and rule templates to create IE rules for ontology classes and relationships; it measures the reliability of extraction results based on the Naïve Bayes classifier. Banko et al. 
[8] developed TEXTRUNNER to extract reliable relationship triples from Web documents after building self-supervised data through dependency parsing. Wu et al. [3] proposed an IE system KYLIN for generating infobox from Wikipedia documents by constructing a training dataset by mapping the values of Wikipedia documents and infobox. KNOWITALL, TEXTRUNNER extracts information by finding a pattern that meets a predefined rule but is limited by the challenge of applying IE to data having a new pattern that does not meet a rule. Moreover, TEX-TRUNNER designed a self-supervised learning model using Wikipedia and developed a system for extracting information, assuming Wikipedia document types. Consequently, TEX-TRUNNER illustrates low performance on heterogeneous data that differs from training data. KYLIN extracts information by selecting a specific model, from several models that exist, based on category-attribute. Therefore, if the document category is misclassified or no model corresponds to the classified category-attribute, it is impossible to extract the information. B. MACHINE READING COMPREHENSION MRC is a task used to test how accurately a machine can understand natural language by asking the machine to answer questions based on a given context [10]. MRC research based on deep learning (i.e., neural MRC) has attracted recent attention. Many studies [4], [11], [12] have reported positive results with recurrent neural networks (RNNs). Herman et al. [11] proposed a ''document-query-answer'' triple generation method using the RNN with attention for the CNN/DailyMail dataset. Wang et al. [12] proposed an MRC model that matches the document with the query and reflects the attention weight in the query. Seo et al. [4] proposed the BiDAF model for improved performance through the bidirectional-attention-based matching of context and queries. Furthermore, several studies [13], [14] use selfattention [15] structures to efficiently reflect context information and reduce computation. Devlin et al. [13] proposed Bidirectional Encoder Representations from Transformers (BERT) using a transformer [15] structure composed of convolution and self-attention. Yu et al. [14] proposed Q&A architecture that reflects local interaction and global interaction using self-attention. However, previous studies primarily target cases in which the correct answer always exists in the document, such as SQuAD, NewsQA [16], and MCTest [17]. Consequently, no procedure exists for judging whether the correct answer is included in the document. Furthermore, applying MRC studies to unstructured documents on the Web is inadequate because the attempts to extract information frequently occur even in documents that do not have correct answers. III. METHODOLOGY The proposed methodology is a system to extract answers from Korean user queries based on subject-predicate (SP) from the unstructured documents collected from multiple Web sources. Figures 1 and 2 depict the inference example and architecture of our IE system, which consists of five steps: seed and train data generation, document selection, sentence selection, IE, and knowledge ensemble. The training data generation step generates data for training the model of each module. Furthermore, all data used to train the model in this study were generated using Wikipedia. 
The document selection module collects relevant documents from Wikipedia, Naver Encyclopedia, and Naver News Web sources for a given Korean user query and determines whether the collected documents are suitable for extracting information. The sentence selection module separates the document into sentences. It selects a sentence, including answer information about a user query, using three methods: sentence matching rules, predicate-based support vector machine (SVM), and sentence-based convolutional neural network (CNN). The IE module extracts the answer from the sentence selected in the sentence selection module. Next, the information extracted from the above extractor is normalized using post-processing. Finally, in the ensemble module, the results of each model are integrated to extract final results and confidence scores. A. SEED AND TRAIN DATA GENERATION We generated seed data using Korean Wikipedia to create training data for each step model. As depicted in Figure 3, a Wikipedia page contains the main text and an infobox that summarizes the information of the page. The seed data was generated by extracting Title, Attribute, Sentence, Sentence Label, and Value by mapping the text and infobox value of the Wikipedia page. For example, on the ''Leaning Tower of Pisa'' Wikipedia page in Figure 3, the height attribute value of the infobox is 55.86 meters. Accordingly, the label of the sentence containing the height attribute value was set to 1, and the sentence without the attribute value was set to 0. Table 1 illustrates an example of seed data. Based on seed data, we constructed the training data of the sentence classifier in the sentence selection module and extractor in the IE module. We trained the sentence classification model using Attributes, Sentence, and Label columns in the seed data. The data for training each attribute included approximately 10,000 to 60,000 sentences. Moreover, we trained the IE model using Title, Attribute, Sentence, and Value columns in the seed data. Accordingly, after extracting the columns, we tokenized the sentence and tagged the sections of value. The data collected for training the IE model included approximately 1 million sentences. B. DOCUMENT SELECTION MODULE In the document selection module, we create a search keyword with an SP Korean user query to search and collect documents. A search keyword was created for the SP Korean query log. Based on the created search keywords, the relevant documents were collected from Wikipedia, Naver Encyclopedia, and Naver News Web sources, and then documents were selected using the document selection rule. Table 2 summarizes the search keywords, document collection methods, and proper document selection rules for each of the three sources. For Wikipedia and Naver Encyclopedia, because the document focuses on a specific subject, the search keyword is generated using the ''subject'' of the user query. In contrast, Naver News is a type of document in which knowledge of various SPs is mixed. Therefore, for preventing noisy document collection, we used the search keyword generated using both ''subject'' and ''predicate.'' Furthermore, the rules for selecting the proper document from Naver News were set more strictly than those for Wikipedia and Naver Encyclopedia. C. SENTENCE SELECTION MODULE The sentence selection module determines whether the input sentence is suitable for extracting the information from the user query. 
This module includes a proper sentence classification step that uses keyword matching and a reliability evaluation step that uses a classification model. For documents collected from Wikipedia and Naver Encyclopedia, the proper sentence is judged based on whether the sentence contains a predicate, and the reliability of the sentence is evaluated using SVM and Sentence-CNN. For documents collected from Naver News, the proper sentence is judged according to whether the sentence has both a subject and a predicate, and the reliability of the sentence is set to 1. When the reliability of the sentence is greater than the threshold, we select this sentence as a relevant sentence for IE.

1) SVM

The SVM model [18] calculates the confidence score of sentences classified as proper sentences. We train the SVM model using the data generated by dividing the training data by attribute during the seed data generation step. Each data point is tokenized to generate a tf-idf vector, which is used as the input of the SVM model. During training, the SVM model uses a binary label indicating whether the sentence contains the infobox value. Because we train one SVM model per attribute, there are as many models as attributes in the data. However, if the number of data points for a specific attribute is less than 10, the model for this attribute is not created. During testing, the predicted label score of the SVM model is used as the reliability of the sentence.

2) SENTENCE CNN

Sentence-CNN [19], which uses a simple one-dimensional CNN (a convolution filter) [20], is a useful model for text classification. We used the Sentence-CNN model to calculate the confidence score of sentences classified as proper sentences. In contrast to the SVM models, which are divided by attribute, we trained a single Sentence-CNN model using the complete dataset. Accordingly, the model can calculate the score regardless of the specific attribute. During training, the Sentence-CNN model uses a binary label based on whether the sentence contains the infobox value. During testing, the predicted label score of the Sentence-CNN model is used as the reliability of the sentence.

D. INFORMATION EXTRACTION MODULE

The IE module extracts answers from the sentences selected by the sentence selection module. The models used for IE are as follows. IE Model-Predicate-Based(Predicate) (IEM-PB(P)): IEM-PB(P) is a predicate-specific IE model that learns one extractor per predicate. Each extractor is a bidirectional long short-term memory (LSTM) and conditional random field (BiLSTM-CRF)-based model that takes both the sentence and the predicate as inputs. IE Model-Predicate-Based(All) (IEM-PB(A)): IEM-PB(A) is a general-purpose IE model that learns one extractor with all data. It is a BiLSTM-CRF-based model that takes both the sentence and the predicate as inputs. IE Model-SP-Based(All) (IEM-SPB(A)): IEM-SPB(A) is a general-purpose IE model that learns one extractor with all data. It is a BiDAF-based model that takes the sentence, subject, and predicate as inputs. During testing, IEM-PB(P) operates only when the predicate of the user query matches, whereas the other extractors always work on all user queries. The features used as inputs of the three extractors are presented in Table 3. Based on the input, a maximum of 9 outputs can be created per query from the three sources (Wikipedia, Naver Encyclopedia, Naver News) and three models (IEM-PB(P), IEM-PB(A), and IEM-SPB(A)). The results are then passed through the post-processing and knowledge ensemble modules.
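A minimal sketch of the per-attribute SVM reliability scorer of Section C.1, with toy English sentences in place of the Korean Wikipedia-derived training data; the tokenizer, the n-gram range, and the use of predict_proba as the "predicted label score" are assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of the per-attribute SVM
# reliability scorer: sentences become tf-idf vectors, and an SVM score is used
# as the sentence's reliability and compared against the selection threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# toy training data for one attribute (e.g., "height"): label 1 if the sentence
# contains the infobox value for that attribute, else 0
sentences = [
    "The tower is a bell tower of 55.86 m high and 16 m in diameter.",
    "The structure stands 55.86 metres tall on its lower side.",
    "Its height is listed as 55.86 m in the infobox.",
    "The tower was designed by an Italian architect.",
    "Construction of the tower began in the 12th century.",
    "The tower leans because of soft ground on one side.",
]
labels = [1, 1, 1, 0, 0, 0]

height_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),      # simple whitespace tokenization stands in
    SVC(kernel="linear", probability=True),   # probability used as the reliability score
)
height_model.fit(sentences, labels)

candidate = "Bonano Pisano's bell tower is 55.86 m high."
reliability = height_model.predict_proba([candidate])[0, 1]
print(f"reliability: {reliability:.2f}")
if reliability > 0.5:                         # default threshold in the paper is 0.5
    print("selected for information extraction")
```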
1) BILSTM-CRF (IEM-PB(P), IEM-PB(A))

IEM-PB(P) and IEM-PB(A) were designed based on the BiLSTM-CRF model. Figure 4 illustrates the BiLSTM-CRF structure used in this study. BiLSTM-CRF [21] was created by combining BiLSTM and CRF. LSTM [22] is a modified structure of RNNs [23] that overcomes the gradient vanishing and explosion problems of RNNs. LSTM is commonly used for sequential data and has recently been used for many natural language processing (NLP) tasks. The mathematical representation of the LSTM model is as follows:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$
$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad h_t = o_t \odot \tanh(c_t)$$

where $\sigma$ is the sigmoid activation function, $\tanh$ is the hyperbolic tangent function, $x_t$, $i_t$, $f_t$, and $o_t$ are the unit input, input gate, forget gate, and output gate at time t, W and b are the trainable weights and biases present at each gate, $\tilde{c}_t$ is the input of the current state, and $c_t$ is the updated state at time t. $h_t$ is the output at time t. Finally, we obtain an output vector $(h_0, h_1, \ldots, h_t)$.

However, LSTM only considers forward information. In sequence tagging, it is necessary to consider forward and backward information simultaneously. In this study, the BiLSTM model [24] was therefore used. The BiLSTM architecture concatenates the context representation of a forward LSTM and of a backward LSTM run in the reverse direction. When the forward context representation vector is $\overrightarrow{h}_t$ and the backward context representation vector is $\overleftarrow{h}_t$, the combined representation is $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$. The final tagging can be extracted with this created context, but the dependencies between the tags are essential to the tagging problem. We therefore added a CRF as the last layer. The CRF model [25] is designed to reflect the dependencies of adjacent labels. In this study, the CRF model was used after BiLSTM. BiLSTM reflects bidirectional context information, and CRF finds the optimal tag path from all possible tag paths while considering label dependencies. For a given sentence X and a predicted label sequence y, the score and probability are defined as

$$s(X, y) = \sum_{i} T_{y_{i-1}, y_i} + \sum_{i} P_{i, y_i}, \qquad p(y|X) = \frac{e^{s(X,y)}}{\sum_{\tilde{y}} e^{s(X,\tilde{y})}}$$

where T represents the transition scores between any two adjacent labels, $T_{y_{i-1}, y_i}$ is the score of transferring from label $y_{i-1}$ to label $y_i$, and $P_{i, y_i}$ is the confidence score of label $y_i$ for the character $c_i$. In the training phase, the objective function maximizes the log-probability of the correct tag sequence, $\log p(y|X)$. Then, we compute the probability and the output label sequence using the Viterbi algorithm [26]. The output label sequence is

$$y^{*} = \arg\max_{\tilde{y}} \; s(X, \tilde{y}).$$

2) BIDAF (IEM-SPB(A))

The IEM-SPB(A) model was designed by modifying BiDAF and can reflect both the input context and the user query. Both the input context and the user query are encoded using the general BiLSTM structure, and the two are fused using the Attention Flow Layer of BiDAF. The Attention Flow Layer obtains the attention weight in the context direction from the query and generates the Query2Context vector based on a weighted sum with the context vector sequence. In the opposite direction, the attention weight is calculated from the context to the query for each step of the sentence, and a weighted sum with the query vector sequence is calculated to obtain the Context2Query vector for each step. Before calculating the attention weights, we first calculate the similarity matrix

$$S_{tj} = \alpha(H_{:t}, U_{:j})$$

where $\alpha$ is a trainable scalar function that encodes the similarity between its two input vectors, $H_{:t}$ is the t-th column context vector of H, and $U_{:j}$ is the j-th column query vector of U. The Query2Context attention is

$$b = \mathrm{softmax}\big(\max_{\mathrm{col}}(S)\big), \qquad \tilde{h} = \sum_{t} b_t H_{:t}$$

where $\tilde{h}$ is tiled T times across the columns, thus producing $\tilde{H}$.
Moreover, the contextual embeddings and the attention vectors are combined to yield G, defined by

$$G_{:t} = \beta(H_{:t}, \tilde{U}_{:t}, \tilde{H}_{:t})$$

where $G_{:t}$ is the t-th column context vector and $\beta$ is a simple concatenation:

$$\beta(h, \tilde{u}, \tilde{h}) = [h; \tilde{u}; h \odot \tilde{u}; h \odot \tilde{h}].$$

The attention vector G is used as the input to the BiLSTM layer. We then use a fully-connected layer and the softmax function to predict the final label. Let $M = (m_1, m_2, \ldots, m_t)$ be the output vectors of the BiLSTM layer. The final label at time t is calculated as

$$\hat{y}_t = \mathrm{softmax}(W m_t + b)$$

where W and b are the trainable weights and biases of the softmax layer. Figure 5 illustrates the structure of the model used in this study.

3) POST-PROCESSING

Post-processing is used to supplement the information derived, based on a predefined unit dictionary, before combining the results extracted from the three models. For example, if a result of "41,000" is obtained for an input such as ("the Great Wall", "length"), the post-processing module adds the "km" unit to the output and changes the result to "41,000km". This approach only applies when the result is covered by the unit dictionary, and it does not add any other information. The output of the IE model is returned through the post-processing module unchanged when unit information is missing. In this study, the response unit information for the predicate was defined in advance for 10 query predicate categories (e.g., length, weight, speed, and size).

4) KNOWLEDGE ENSEMBLE

The knowledge ensemble is the final step in generating the final answer based on the results of the IE module and post-processing. In this step, two knowledge ensemble methods were used. The first is a simple soft computing method, the Simple Ensemble Knowledge Extraction Model (SEM), which sums the scores by answer and extracts the answer with the highest score as the final knowledge. Because no additional learning is required, very few resources are needed, and the final knowledge can be extracted effectively. The second method, the Neural Ensemble Knowledge Extraction Model (NEM), uses a neural network for predicate-based weighted score summation. NEM is designed under the assumption that each knowledge extractor performs more accurately for particular predicates. In the neural network, the knowledge score extracted from each model and the predicate information are input to generate a new score that reflects the weight of each model. Then, the scores are summed by knowledge to extract the knowledge with the highest score as the final knowledge. A total of 10,000 pairs of queries and labels were sampled from the seed data to train the model. The 10,000 queries were then run through each knowledge extraction model, and the scores and labels for each knowledge item were used as training data for NEM. This method can extract the final knowledge efficiently using each model, demonstrating performance superior to SEM, which extracts knowledge by linear score summation. Figure 6 illustrates the architecture of the ensemble model NEM.

IV. EXPERIMENTS

A. DATASET

For measuring the performance of our IE system, SKTelecom provided a portion of the actual user queries of its AI speaker, namely queries requesting the property of an entity. Each query is divided into a subject-predicate (SP) entity-property pair, giving approximately 200,000 queries. However, because a person must annotate them directly, we selected 400 test queries by random sampling. We then built a test dataset by collecting approximately 2,800 unstructured documents from multiple Web sources for the 400 test queries.
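The SEM ensemble described above can be sketched as a simple score summation over the candidate answers produced by the three sources and three extractors; the data layout and scores below are illustrative only.

```python
# Minimal sketch (assumed data layout, not the authors' code) of the Simple
# Ensemble Knowledge Extraction Model (SEM): up to nine candidate answers per
# query (three sources x three extractors) are pooled, scores are summed per
# distinct answer, and the highest-scoring answer is returned.
from collections import defaultdict

def sem_ensemble(candidates, threshold=0.5):
    """candidates: list of (answer, score) pairs from the IE models."""
    totals = defaultdict(float)
    for answer, score in candidates:
        if score >= threshold:            # drop low-confidence extractions
            totals[answer] += score
    if not totals:
        return None, 0.0                  # no answer: the "unanswerable" case
    best = max(totals, key=totals.get)
    return best, totals[best]

# toy usage for the query ("Leaning Tower of Pisa", "height")
candidates = [
    ("55.86m", 0.92),   # Wikipedia, IEM-PB(P)      (illustrative scores)
    ("55.86m", 0.81),   # Wikipedia, IEM-SPB(A)
    ("56m",    0.55),   # Naver Encyclopedia, IEM-PB(A)
    ("55.86m", 0.40),   # Naver News, below threshold, ignored
]
print(sem_ensemble(candidates))           # -> ('55.86m', 1.73)
```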
An example of the Korean user query log of the SKTelecom AI speaker is presented in Table 4. B. EVALUATION METRICS In this study, the precision, recall, and F1 score for each positive and negative condition were calculated to produce a quantitative value. Moreover, the performance was evaluated by deriving accuracy to confirm the overall performance of the system. Table 5 illustrates the confusion matrix for the evaluation of IE performance. True positive correct (TPC) and true negative (TN) are the correct answers. And true positive incorrect (TPI), false positive (FP), and false negative (FN) are the incorrect answers. The resulting confusion matrix is then used to calculate Table 6-the IE performance evaluation formula. C. EVALUATION The performance of the proposed system was evaluated for 400 test queries-for the positive condition, negative condition, and accuracy. If the IE score is lower than the threshold, the model does not extract the answer. The default threshold for IE models is 0.5. We evaluated our proposed models- SEM, NEM, and NEM(0.9)-with a threshold of 0.9. Furthermore, BiDAF and KYLIN were compared with the proposed model as baselines. Tables 7,8, and 9 present the performance of each source (Wikipedia, Naver Encyclopedia, and Naver News). Table 10 illustrates the performance of each model for all sources. Table 7 presents the results from Wikipedia. In positive condition, KYLIN works only on a specific predicate, so it has high precision but low recall. For BiDAF, there is no procedure for judging whether a correct answer is included in a sentence. Consequently, attempts to extract information from documents that do not have correct answers frequently result in low performance. The proposed models SEM, NEM, and NEM(0.9) improve the F1 score by 29.4, 34.6, and 28.0, compared with KYLIN. Furthermore, for the negative condition and the overall performance accuracy, the proposed models also show high performance. For the Naver encyclopedia in Table 8, the overall performance is similar to that of Wikipedia because Wikipedia and Naver encyclopedias have the same format as an encyclopedia. Compared with KYLIN, the proposed models SEM, NEM, and NEM(0.9) improve the F1 score by 27.8, 31.3, and 28.0 for the positive condition, and by 17.3, 19.8, and 13.5 for the negative condition. Table 9 presents the performance for the Naver News source. KYLIN exhibits very low recall for the positive condition because KYLIN extracts answers only from data similar to the training sources and does not extract answers from heterogeneous data such as news sources. Compared with the baselines, our proposed models demonstrate higher performance for all metrics. The results confirm that the proposed models in this study work well with heterogeneous documents. Table 10 presents the performance of the total source. From the results of the experiment, the proposed models achieved significant improvements for both conditions. Compared with KYLIN, SEM improved the F1 score for the positive and negative conditions by 38.3 and 21.2, NEM improved the performance by 44.3 and 21.2. For AI speakers in the field, it is dangerous to deliver incorrect results to the user. Therefore, it is essential to maintain high precision for the positive condition and high recall for the negative condition. For KYLIN, the performance was 85.9 (49/57) and 100 (126/126). However, KYLIN answered only 57 out of 400 queries. Therefore, the accuracy is very low, at 43.7 (175/400). 
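The quoted KYLIN numbers can be reproduced from the confusion counts of Table 5 under one plausible reading of the Table 6 formulas (the exact formulas are not reproduced here): precision for the positive condition is TPC over all produced answers, recall for the negative condition is TN over all unanswerable queries, and accuracy is (TPC + TN) over all 400 queries.

```python
# Minimal sketch (hedged reading of Table 6, not the paper's exact formulas) of
# how the quoted KYLIN numbers follow from the confusion counts TPC, TPI, FP, FN, TN.
def metrics(tpc, tpi, fp, fn, tn):
    answered = tpc + tpi + fp                 # queries for which an answer was produced
    total = tpc + tpi + fp + fn + tn
    precision_pos = tpc / answered if answered else 0.0
    recall_neg = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tpc + tn) / total
    return precision_pos, recall_neg, accuracy

# KYLIN example from the text: 49 correct out of 57 produced answers, all 126
# unanswerable queries correctly left unanswered, 400 queries in total.
# The split of the remaining errors (tpi vs fn) is assumed for illustration.
p, r, a = metrics(tpc=49, tpi=8, fp=0, fn=217, tn=126)
print(f"precision(+) {p:.1%}  recall(-) {r:.1%}  accuracy {a:.1%}")
# close to the 85.9 / 100 / 43.7 quoted above (differences are rounding)
```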
In contrast, for NEM(0.9), the precision for the positive condition, the recall for the negative condition, and the accuracy reached 90 (99/110), 96 (121/126), and 55 (220/400), respectively. These results represent performance high enough for practical use in AI speakers. Furthermore, on SQuAD 2.0 [27], a public dataset containing unanswerable questions, the F1 scores of the recent models QANet and DocQA [27] are 53.2 and 67.6, and their accuracies are 56.9 and 65.1. Although these conditions differ from those of the dataset used with the proposed model, the NEM model demonstrates higher performance than these open-domain results. These results confirm that 1) the integration of the document selection module and the sentence selection module increased the reliability of the answers, and 2) the proposed model behaves accurately even when the correct answer does not exist in the document. D. SAMPLE QUERY TEST To evaluate the accuracy of the proposed system at each threshold, 2,000 additional sample queries were extracted from the queries provided by SKTelecom. A total of 1,111 answers were extracted from the 2,000 sample user queries, of which 713 were correct. Table 12 presents the accuracy of the proposed system by threshold. As in the previous 400-query test, with the threshold set low (default) and high (0.9), the accuracy was 64.17 and 94.30, respectively. Furthermore, the average time required to extract an answer for a query is 9 seconds, and the typical time, excluding outliers, is within 5 seconds. Based on these results, we can estimate the generalization performance of the system, and the possibility of application to commercial systems was confirmed. Furthermore, we have completed collecting answers for the approximately 200,000 Korean user queries provided by SKTelecom. Although we did not verify whether all answers are correct, based on the results of the sample query test, we believe that the answers are meaningful data that can be used in other systems. V. CONCLUSION AND FUTURE WORKS In this study, we proposed a novel IE system that can respond practically to open-domain queries, including unanswerable questions. The proposed system alleviates the low-performance problem, in terms of specificity, that is likely to occur when previous approaches answer open-domain unanswerable questions from unstructured Web sources. We validated our approach by constructing an evaluation dataset annotated from the Korean user query log of the SKTelecom AI speaker and confirmed the effectiveness of the knowledge extraction of the proposed system. However, opportunities remain to improve the system and address its limitations. In future work, we plan to improve our system's performance in two directions. First, we expect additional benefits from using recent pretrained language models such as ELMo [28] and BERT. Second, a methodology for selecting or expanding new knowledge sources is likely to produce a correct answer when one cannot be extracted from the given Web sources. MINTAE KIM received the B.S. degree in industrial engineering from Yonsei University, in 2016, where he is currently pursuing the Ph.D. degree in industrial engineering. His main research interests include natural language processing, recommendation systems, and machine learning. SANGHEON LEE received the Ph.D. degree in industrial engineering from Yonsei University, in 2019.
He is currently studying the natural language understanding module of a chatbot builder. His main research interests include dialog systems, natural language processing, and machine learning. YEONGTAEK OH received the M.S. degree in industrial engineering from Yonsei University, in 2019. He is currently a Data Scientist with SK Hynix. His main research interests include semiconductor manufacturing processes, natural language processing, and machine learning. HYUNSEUNG CHOI received the M.S. degree in industrial engineering from Yonsei University, in 2019. He is currently a Data Scientist with SK Hynix. His main research interests include artificial intelligence technology, natural language processing, and computer vision. WOOJU KIM received the Ph.D. degree in Operations Research from KAIST, Korea, in 1994. He is currently a Professor with the School of Industrial Engineering, Yonsei University. His main research interests include natural language processing, reliable knowledge discovery, big data intelligence, machine learning, and artificial intelligence.
7,071.4
2020-01-01T00:00:00.000
[ "Computer Science" ]
A Dual-Branch Network for Diagnosis of Thorax Diseases From Chest X-Rays Automated chest X-ray analysis has great potential for diagnosing thorax diseases, since errors in diagnosis have always been a concern among radiologists. Being a multi-label classification problem, achieving accurate classification remains challenging. Several studies have focused on accurately segmenting the lung regions from chest X-rays to deal with the challenges involved. The features extracted from the lung regions typically provide precise clues for diseases like nodules. However, such methods ignore the features outside the lung regions, which have been shown to be crucial for diagnosing conditions like cardiomegaly. Therefore, in this work, we explore a dual-branch network-based framework that relies on features extracted from the lung regions as well as the entire chest X-rays. The proposed framework uses a novel network named R-I UNet for segmenting the lung regions. The dual-branch network in the proposed framework employs two pre-trained AlexNet models to extract discriminative features, forming two feature vectors. Each feature vector is fed into a recurrent neural network consisting of a stack of gated recurrent units with skip connections. Finally, the resulting feature vectors are concatenated for classification. The proposed models achieve state-of-the-art performance for both segmentation and classification tasks on the benchmark datasets. Specifically, our lung segmentation model achieves a 5-fold cross-validation accuracy of 98.18% and 99.14% on the Montgomery (MC) and JSRT datasets. For classification, the proposed approach achieves state-of-the-art AUC for 9 out of 14 diseases, with a mean AUC of 0.842 on the NIH ChestXray14 dataset. I. INTRODUCTION Chest X-ray (CXR) is the most widely used noninvasive imaging technique for screening and diagnosing several thorax diseases, including pneumonia, cardiomegaly, and atelectasis. A CXR may contain more than one abnormality [1], and accurate diagnosis of these abnormalities relies heavily on the expertise of trained medical professionals. The diagnosis of thorax diseases from CXRs with the naked eye is time-consuming, and these diagnostic findings are prone to errors [2], especially in the presence of noise. Therefore, it is desirable to have an automated diagnostic system that aids in the diagnosis of these diseases. Although a few methods have been proposed [1], [3], [4], more effort should be focused on performance enhancement to meet the standards required for deployment in clinical settings. In the recent literature, the methods employed for computer-aided diagnosis (CAD) can be broadly categorized into two categories. The CNN-based methods [3] involve extraction of deep features that carry discriminatory information about multiple abnormalities using classical CNNs [5], [6], [7]. Such methods have also been developed for related tasks like the diagnosis of COVID-19 from CXRs [8]. Generally, these methods do not perform well for the detection of small abnormalities like nodules. This may be because the extracted features do not carry sufficient information, owing to the limited spatial extent of such abnormalities in CXRs. To improve performance, researchers have explored attention-guided approaches [9], [10] that focus the model's attention on suspicious regions in CXRs.
The second category includes methods [11], [12] that utilize medical knowledge (such as pathology interdependence and co-occurrence) by explicitly capturing dependencies among multiple labels/pathologies. These methods generally work well for multi-label CXR classification. The primary sites of thorax abnormalities are the lungs. Therefore, the lung region in CXRs can be analysed for the detection of severe thorax conditions like pneumothorax, emphysema, effusion, and nodules. Recently, a few studies [13], [14], [15], [16], [17] have demonstrated the usefulness of lung segmentation methods for assisting medical professionals in identifying suspicious regions. Focusing only on the lung regions for CXR classification, using lung segmentation masks, has been shown to improve detection performance for abnormalities like nodules, pneumothorax, and emphysema [18], [19], [20]. A review of the literature indicates that no previous work has explored the combination of features extracted from the lung region and the entire CXR image while leveraging deep learning for multi-label CXR classification. This motivated us to explore a dual-branch network-based approach that uses both local-branch and global-branch features for automated diagnosis of thorax diseases. The idea behind focusing on both the global and the local (segmented lung) regions is that while most thorax diseases are limited to the lung region, conditions like cardiomegaly are better characterized by contextual features around the lungs. Cardiomegaly refers to an enlarged heart, which compresses the lungs and results in breathlessness. In this case, the smaller size of the lung regions and the increased gap between the lungs are the crucial features for diagnosis. The key contributions of this work can be summarized as follows: 1) A novel U-Net based segmentation model that extracts lung regions from CXRs. 2) A dual-branch network consisting of two pre-trained AlexNet models and a recurrent neural network block that learns effective representations for multi-label classification of CXRs. 3) This work advances the state-of-the-art in multi-label classification of thorax diseases from CXRs, a significant step towards developing a CAD system for deployment in clinical settings. The rest of this paper is organized as follows: Section II presents related work, Section III describes the proposed framework, and Section IV presents the details of the datasets, our experiments, and discussion. Finally, Section V concludes the paper. Table I presents a complete list of abbreviations used in this paper. A. Lung Segmentation Most previous works on lung segmentation in CXR images relied on traditional features like texture, shape, and contour to design rule-based methods [21], [22]. With recent advancements in deep learning, CNN-based methods have also been developed for this task. Souza et al. [17] and Maity et al. [14] proposed two different models for lung segmentation. Their models were trained and tested on the MC [23] and JSRT [24] datasets. Tang et al. [13] proposed a segmentation network named XLSor, designed using criss-cross attention modules to extract contextual information in various directions around each pixel. The authors also introduced the NIH dataset for the segmentation task. Eslami et al.
[25] proposed a multi-task generative adversarial network named MTdG that segments anatomical structures in CXRs and produces rib-suppressed images. Singh et al. [26] designed a segmentation model named Deep LF-Net for segmenting lungs in CXRs. Their model integrates the DeepLab architecture with a custom-designed atrous convolution module. B. Classification For multi-label CXR classification, some recent studies have investigated the use of classical CNNs. These methods generally outperform handcrafted-feature-based methods. Wang et al. [3] investigated the performance of several pre-trained models for classifying CXR images from the ChestX-ray14 dataset [3]. Their study indicates that ResNet outperforms the other models considered. Ma et al. [27] presented a model named ChestXNet that enhances classification performance; it was developed by fine-tuning the pre-trained DenseNet121 [6]. A few studies have also explored attention mechanisms and achieved enhanced performance for CXR classification. Guendel et al. [23] employed DenseNet121 for CXR classification through transfer learning. To further enhance the performance of individual disease diagnosis, Tang et al. [28] proposed a multi-task framework for simultaneous classification and localization of thorax diseases, employing an attention-guided curriculum learning scheme to achieve performance enhancement in disease localization and classification. Guan et al. [4] designed a CNN named CRAL with a class-specific attention learning scheme for multi-label classification of CXRs. Xi et al. [29] proposed a weakly supervised algorithm with hierarchical attention mining for the localization and classification of CXR abnormalities. Chen et al. [12] proposed a graph convolutional neural network with a label co-occurrence learning framework for thorax disease classification. Chen et al. [10] designed a model named Lesion Location Attention Guided Network (LLAGnet) for CXR classification; in addition to features from the full CXRs, it uses features from lesion locations to enhance classification performance. III. PROPOSED APPROACH An overview of the proposed dual-branch model for automated diagnosis of thorax diseases from CXRs is shown in Fig. 1. The model takes a CXR image as input, which is processed by two modules, namely, the lung region segmentation (LRS) module and the classification module. While the LRS module segments the lung region from the input CXR, the classification module has two branches that extract deep features from the segmented lungs and from the entire CXR image. The extracted local-branch and global-branch features are then fused and fed into a dense layer for classification. In the following sub-sections, we present the details of the LRS and classification modules. A. Lung Region Segmentation In the proposed approach, the lung regions in the input CXR are segmented using a novel network named R-I UNet. The segmented lung regions undergo post-processing to reduce false negatives in the R-I UNet predictions. 1) R-I UNet: The proposed R-I UNet has a four-level segmentation architecture, which is shown in Fig. 2. This semantic segmentation network is composed of three paths, namely the Encoder, Decoder, and Bridge. The encoder generates a compact representation of the input CXR. The decoder recovers a pixel-wise classification from the encoded input. The bridge acts as a connection between the encoder and the decoder.
The basic building block of this segmentation network is the Residual-Inception (R-I) block, and all three paths are created using R-I blocks. Fig. 3 shows the structure of an R-I block, which is inspired by the Inception and Residual modules. It aggregates multi-scale features extracted using kernels of different sizes, which increases the width of the network and helps the model learn more distinctive features [30]. The proposed block differs from the original Inception-Residual block [31], [32] and is designed to take advantage of both multi-scale and hierarchical feature learning schemes. Additionally, introducing a skip connection into the R-I block leads to faster convergence of the model. Each convolutional layer in the proposed block is followed by a batch normalization (BN) layer, except for the bottleneck layer. The output of an R-I block is formed by summing the block's input $I_N$ (the skip connection) with the aggregated multi-scale features, where each feature map is produced by a convolution $C_{n \times n}$ with a kernel of size $n \times n$ followed by batch normalization $B_N$. As can be seen in Fig. 2, the proposed R-I UNet has four R-I blocks in its encoder path. In each of these blocks, a stride of 2 is used in the first convolutional layer to obtain a down-sampled feature map. Correspondingly, the decoder path is composed of four R-I blocks. The upsampled feature map from the lower level and the feature map at the same level from the corresponding encoder path are concatenated and fed into each of the R-I blocks in the decoder. At the end of the decoder path, a 1 × 1 convolutional layer with a sigmoid activation function is employed to obtain the desired segmentation output. Our lung segmentation network is trained using a dice-coefficient-based loss function, which measures the shape similarity between the ground truth and the predicted lung masks. Specifically, the segmentation loss takes the standard dice form $L_{seg} = 1 - \frac{2 \sum_i N_s^i G_s^i}{\sum_i N_s^i + \sum_i G_s^i}$, where $N_s$ and $G_s$ represent the predicted and the ground truth segmentation masks, respectively. 2) Post-Processing: The predicted segmentation mask typically contains a considerable number of false positive and false negative predictions. To reduce these false predictions, we have used a set of morphological operations. Specifically, we have performed opening and area filtering to reduce false positives while retaining the two largest objects (the lungs), followed by closing to reduce false negative predictions. The structuring elements used for the opening and closing operations are a cross and an ellipse of size 5 × 5 and 7 × 7, respectively. B. Classification As can be seen in the structure of the proposed model presented in Fig. 1, the classification module consists of two parallel branches, namely the global and local branches, which extract features useful for discrimination. While the global branch extracts deep features from the entire CXR image, the local branch focuses on the two segmented lung regions for feature extraction. The input image to the local branch is of the same size as the full CXR, with its non-lung pixels set to zero. Each of the two branches consists of a pre-trained AlexNet followed by a gated recurrent unit (GRU) block. The features extracted by the two branches are concatenated and passed to a final dense layer activated with the sigmoid function for prediction. In this work, we have applied transfer learning to AlexNet. Specifically, we have replaced its dense layers with a single dense layer consisting of 14 neurons with the ReLU activation function. The output of this layer is fed into a GRU block.
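Returning to the R-I block described earlier in this section, the following is a minimal Keras sketch consistent with that description (parallel multi-scale convolutions with BN, a 1 × 1 bottleneck without BN, and an additive skip connection). The kernel sizes, channel widths, and final activation are assumptions, since Fig. 3 is not reproduced here.

```python
from tensorflow.keras import layers

def ri_block(x, filters, stride=1):
    """Residual-Inception block sketch: multi-scale convs + BN, concatenated,
    reduced by a 1x1 bottleneck (no BN), plus a skip connection."""
    branches = []
    for k in (1, 3, 5):  # illustrative kernel sizes
        b = layers.Conv2D(filters, k, strides=stride, padding="same")(x)
        branches.append(layers.BatchNormalization()(b))
    merged = layers.Concatenate()(branches)
    merged = layers.Conv2D(filters, 1, padding="same")(merged)  # bottleneck, no BN
    # Project the input when shapes differ so the residual add is valid.
    if stride != 1 or x.shape[-1] != filters:
        x = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([merged, x]))
```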
A GRU is an improved version of the standard recurrent neural network (RNN), a class of networks primarily used for sequence learning tasks. In the proposed approach, a GRU block formed from 5 GRUs with skip connections helps exploit feature dependencies [33], thereby enhancing the classification performance of the model. The structure of our GRU block is shown in Fig. 4. The skip connections not only help overcome the problem of vanishing gradients but also make the loss surface less chaotic, which eases its minimization [34], [35]. The processing in a single GRU can be mathematically represented as: $Z_t = \sigma(W_z x_t + U_z H_{t-1} + b_z)$ (3), $R_t = \sigma(W_r x_t + U_r H_{t-1} + b_r)$ (4), $\tilde{H}_t = \tanh(W_h x_t + U_h (R_t \odot H_{t-1}) + b_h)$ (5), $H_t = (1 - Z_t) \odot H_{t-1} + Z_t \odot \tilde{H}_t$ (6). In (3)-(6), $x_t$ represents the input vector, $H_t$ the output vector, $\tilde{H}_t$ the candidate activation vector, $Z_t$ the update gate, $R_t$ the reset gate, and $b$, $U$, and $W$ the bias vectors and parameter matrices, respectively. The outputs of the two GRU blocks in the global and local branches are concatenated to form a 28-dimensional feature vector. This feature vector is then fed into another dense layer consisting of 14 neurons with the sigmoid activation function to generate probability scores for classification. We have frozen the convolutional base of AlexNet during training. Its newly added dense layer, together with the residual GRU block and the final dense layer, is jointly trained using the focal loss [36], defined as $FL(p_y) = -\alpha (1 - p_y)^{\gamma} \log(p_y)$, where $p_y$ is the probability predicted for each class by the model, $\alpha$ is the weighting factor, and $\gamma$ is the focusing parameter. In this work, $\alpha$ and $\gamma$ are empirically set to 0.5 and 3, respectively, using the validation set. We have employed the focal loss primarily because it focuses on hard samples, resulting in fewer misclassified samples compared with the standard cross-entropy loss. The steps involved in processing a CXR at inference time are detailed in Algorithm 1: extract deep features $D_g$ from the full CXR using the pre-trained AlexNet; capture long-term feature dependencies in $D_g$ using the residual GRU block and generate $F_g$; run R-I UNet and segment the lung regions $I_l$; extract deep features $D_l$ using the pre-trained AlexNet; capture long-term feature dependencies in $D_l$ using the residual GRU block and generate $F_l$; concatenate $F_g$ and $F_l$; and predict the probabilities. IV. EXPERIMENTS A. Datasets In this work, we have used three publicly available datasets, namely JSRT, Montgomery (MC), and NIH ChestXray14, for performance evaluation. While the first two datasets have been used to evaluate the segmentation model, ChestXray14 has been used to evaluate the proposed dual-branch network-based framework. A detailed overview of these datasets is presented below. The JSRT dataset [24] is created by the Japanese Radiological Society and contains 247 CXRs. Out of these 247 CXRs, 90 are of healthy lungs without any abnormalities, and the rest, 154 CXRs, contain lung nodules. The ground truth masks for lung segmentation in CXRs of the JSRT dataset are provided in [37]. The Montgomery County (MC) dataset [23] is created by the Department of Health and Human Services of Montgomery County, Maryland. This dataset contains 138 frontal CXR images of 4020 × 4892 or 4892 × 4020 pixels. There are 80 CXRs of healthy people and 58 CXRs of patients infected with tuberculosis. This dataset also contains segmentation ground truth annotated by experienced radiologists. The ChestXray14 dataset [3] consists of 112,120 frontal CXR images of 30,805 patients collected from 1992 to 2015.
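For reference, a small sketch of the focal loss defined above, written for multi-label sigmoid outputs with the paper's α = 0.5 and γ = 3:

```python
import tensorflow as tf

def focal_loss(alpha=0.5, gamma=3.0):
    """Focal loss FL(p_y) = -alpha * (1 - p_y)^gamma * log(p_y), applied
    per class to sigmoid outputs; alpha/gamma follow the paper's values."""
    def loss(y_true, y_pred):
        eps = tf.keras.backend.epsilon()
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        # p_y is the predicted probability of the true label for each class.
        p_y = tf.where(y_true > 0.5, y_pred, 1.0 - y_pred)
        return tf.reduce_mean(-alpha * tf.pow(1.0 - p_y, gamma) * tf.math.log(p_y))
    return loss
```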
Among them, 51,708 CXRs are categorized into 14 thorax abnormalities, and the rest are labeled as "no findings". The images in this dataset are of 1024 × 1024 pixels with 8-bit depth. The official dataset split is created by randomly splitting the data at the patient level into train (∼70%), validation (∼10%), and test (∼20%) partitions while ensuring that all images belonging to a patient are included in only one of these sets [3]. B. Performance Metrics We have used the following metrics to evaluate the proposed lung segmentation network; among them, the Jaccard score is defined as $JS = \frac{TP}{TP + FP + FN}$. Here, TP represents the number of pixels that are correctly classified as belonging to the lung region (true positives), TN represents the number of pixels that are correctly classified as belonging to the background (true negatives), and FP and FN represent the number of pixels that are wrongly classified as belonging to the lung region (false positives) and the background (false negatives), respectively. As with the existing methods, we have computed the area under the ROC curve (AUC) to evaluate the performance of the proposed dual-branch network-based CXR classification framework. Specifically, we have computed the AUC for each class and compared it with the existing approaches. C. Training Strategy As mentioned previously, the proposed segmentation model, R-I UNet, has been evaluated on the JSRT and Montgomery datasets. We have adopted the standard five-fold cross-validation protocol for evaluation. The model is trained for 200 epochs with a batch size of 16 and an initial learning rate of 0.0005, which is reduced by a factor of 20 every 40 epochs. The proposed classification framework has been evaluated on the ChestXray14 dataset using the official dataset splits. During training, the input CXRs are resized to 224 × 224 pixels. Our model is trained for 150 epochs with a batch size of 8 and an initial learning rate of 0.0003, which is reduced by a factor of 15 every 20 epochs. We have used the SGD optimizer with a momentum of 0.7. All our experiments have been performed using the Keras framework on Google Colab with a single 16 GB P100 GPU. Table II summarizes the hyperparameters of our segmentation and classification models; all of them were tuned using the validation sets. To avoid overfitting, we have employed an early stopping strategy during training: if the loss does not improve for ten epochs, training is stopped automatically and the best model weights are restored. 1) Segmentation: In this section, we present the performance of the proposed R-I UNet on the two datasets. Table III presents the performance of the individual models in each iteration of the five-fold cross-validation, along with mean performance estimates. To benchmark the performance of the proposed segmentation model, we have considered a set of state-of-the-art segmentation models, namely UNet++ [38], DeepLabV3+ [39], Mask R-CNN [40], and UNet [41], trained with the previously described strategy. The performance of these models on the JSRT and MC datasets is presented in Table IV. The results presented in Table IV indicate that the proposed R-I UNet provides improved lung segmentation results on both datasets.
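The Jaccard score defined above can be computed directly from binary lung masks, for example:

```python
import numpy as np

def jaccard_score(pred, gt):
    """Jaccard score JS = TP / (TP + FP + FN) for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # lung pixels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()      # background predicted as lung
    fn = np.logical_and(~pred, gt).sum()      # lung predicted as background
    return tp / (tp + fp + fn)
```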
To ascertain the performance of our model, we have also compared the lung segmentation results qualitatively. Fig. 5 shows the results achieved by different models on five CXRs. While the lungs are clearly visible in the first and second input CXRs, the third image is a challenging case with low contrast between the lungs and the neighboring region. The fourth and fifth input CXRs are even more challenging, as parts of the lungs are not clearly visible due to certain medical conditions. Our qualitative analysis indicates that all of these models achieve good segmentation results on the first two CXRs, with the predicted lung contours close to the ground truth. On the third CXR, only our model and DeepLabV3+ achieve satisfactory results. On the fourth and fifth CXRs, all state-of-the-art models achieve poor results with high false negative predictions; on the contrary, our model's predictions are close to the ground truth. In addition, we have compared our model with existing lung segmentation approaches. For this purpose, we have evaluated our model using three different protocols to make fair comparisons with the existing approaches. Specifically, we have performed evaluations by splitting each dataset into training and testing sets in the 70:30 and 80:20 ratios and by using 5-fold cross-validation. As can be seen in Table V, our model provides a considerable improvement over the existing methods. Importantly, our model achieves higher recall on both datasets, which is one of the most important metrics for evaluating a computational model in medical informatics. 2) Classification: Table VI presents the CXR classification results, specifically the AUC for the individual classes. For comparison, we have reported the performance of a set of existing approaches, namely CRAL [4], CheXGCN [12], Li et al. [43], Wang et al. [3], Xi et al. [29], Li et al. [44], and LLAGnet [10], which have also been evaluated on ChestXray14 using the same dataset splits. Our approach achieves state-of-the-art classification performance for 9 out of 14 diseases. LLAGnet [10] achieves the best classification results for 3 of the remaining 5 diseases, while Li et al. [44] achieve the highest AUC for Effusion and CheXGCN [12] achieves approximately the same AUC for Emphysema. Importantly, our approach provides a considerable improvement in AUC for most of those 9 diseases. It also achieves the highest mean AUC, which indicates its better overall classification performance. Further, we have compared our approach with LLAGnet [10] in terms of classification accuracy (ACC_c), recall (REC_c), and specificity (SPE_c), where the subscript c distinguishes these classification metrics from the ones used for segmentation. First, we calculated TP, TN, FP, and FN for each class by comparing the predicted scores with the thresholds set for the individual classes, as has been done in [10]. The results presented in Table VII indicate that our approach provides consistently better classification performance. Additionally, we have computed the precision (PRE_c) and F1-score (F1), defined as $PRE_c = \frac{TP}{TP + FP}$ and $F1 = \frac{2 \cdot PRE_c \cdot REC_c}{PRE_c + REC_c}$. The superior performance of our dual-branch network can be attributed to two key aspects: its ability to segment the lungs more accurately, and its ability to learn a better representation for classification by combining features from the entire CXR as well as the segmented lung region. This approach effectively learns multi-scale features for CXR classification.
The proposed model has been trained on CXR images of size 224 × 224 pixels. Some previous works [3], [10] have studied the effect of input image size on classification performance; using higher-resolution inputs (e.g., 512 × 512 or 1024 × 1024) has led to only marginal improvements in overall classification performance. We have not compared our results with [37], as this existing work did not use the official dataset splits for performance evaluation, and therefore a fair comparison cannot be made. a) Qualitative analysis: We present the results of a qualitative evaluation of our approach and compare them with LLAGnet [10]. For this analysis, we have selected the same set of CXRs used in [10] for qualitative evaluation. The images and their predictions by our approach and LLAGnet are shown in Fig. 6. Specifically, we present the top-8 predicted scores for each test sample and highlight the positive classes in red. As can be seen, the prediction scores generated by our approach for positive classes are significantly higher than those generated for negative classes. While these differences can be observed for LLAGnet as well, the margins are considerably lower. We have also analysed the performance of our approach on the failure cases presented in [10]. The results presented in Fig. 7 clearly indicate that our approach learns more discriminative representations for CXR classification. The predicted score for the positive class is the highest for each of the images except the third CXR; in this case too, the predicted score for the positive class (consolidation) is quite high and likely to exceed any appropriately set threshold. b) Grad-CAM visualization: We have generated Grad-CAM visualizations to gain a better understanding of our classification network, specifically its global branch. Fig. 8 provides a visual explanation indicating the image regions our network focuses on for its predictions. As can be seen, the localized region in each case corresponds to an abnormality in the CXR. For example, consider the case of cardiomegaly: the enlarged cardiac silhouette can be seen in the CXR, and Grad-CAM localizes this region well. The correspondence between a CXR abnormality and the localized region can also be observed for most of the other disease classes. These Grad-CAM visualizations indicate that our dual-branch classification network focuses on discriminative regions for its predictions. c) Ablation study: We have carried out ablation studies to assess the effectiveness of the two branches of the proposed dual-branch network for CXR classification. In this set of experiments, we have trained and evaluated the global and local branches individually on the ChestXray14 dataset. The AUC for each class, along with the mean AUC, is presented in Table VIII. As can be observed, the local branch achieves marginally higher AUC than the global branch. Lung nodules, for example, are better diagnosed when the model is trained on segmented lungs; the plausible reason is that segmentation eliminates noisy regions and other structures, such as the heart and thoracic spine, that are likely to impact the diagnosis of these small lesions. Importantly, the results of this study indicate that both the local and global branches learn effective representations and that the proposed dual-branch network clearly benefits from the fusion of features extracted by its individual branches.
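The Grad-CAM maps described above follow the standard gradient-weighted recipe; the sketch below assumes a Keras model and a named last convolutional layer (conv_layer_name is a placeholder, since the paper does not specify layer names).

```python
import tensorflow as tf

def grad_cam(model, image, class_idx, conv_layer_name):
    """Standard Grad-CAM heatmap for one class of a Keras classifier."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[0, class_idx]
    grads = tape.gradient(score, conv_out)              # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pool grads
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)[0]
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8) # keep positive, normalize
    return cam.numpy()
```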
We have also studied the effectiveness of the GRU blocks in the proposed dual-branch network. To this end, we have performed two sets of experiments. First, we trained and evaluated each branch separately without its GRU block. Second, we trained the network after removing the GRU blocks from both branches and evaluated the classification performance of the dual-branch network. The results presented in Table VIII clearly indicate that capturing dependencies in the feature sequences using GRUs leads to significantly improved classification accuracy. d) Failure cases: We have also analysed CXRs that were misdiagnosed by our approach. Fig. 9 shows a few failure cases. In general, multi-label classification of these CXRs appears to be a non-trivial task for different reasons, including severe lung conditions. For example, the first CXR is a very low-contrast image, in which the lungs are not clearly visible. In the second image, there appears to be a tube device that may have caused the misdiagnosis. In the third image, one of the lungs is not visible, which makes the diagnosis of multiple conditions difficult. Failing to handle such cases appears to be a limitation of our approach. Our extensive evaluation of the proposed models indicates that they advance the state-of-the-art in lung segmentation and multi-label classification of thorax diseases from CXRs. On the flip side, however, our segmentation model has significantly more parameters than the existing ones: it has 140 million (M) trainable parameters, while the other segmentation models in Table IV, Mask R-CNN, UNet, DeepLabV3+, and UNet++, have only 64 M, 34 M, 11 M, and 4 M parameters, respectively. On the other hand, our dual-branch classification model is lighter, with 0.7 M trainable parameters. The average inference time of our approach is 9.53 seconds, which can be a limiting factor in adopting our models for some real-world applications. V. CONCLUSION AND FUTURE DIRECTIONS In this paper, we have presented a dual-branch network for CXR-based diagnosis of thorax diseases. The proposed framework consists of a segmentation model named R-I UNet and a classification network. The proposed R-I UNet uses the U-Net model as its backbone, with novel residual-inception blocks replacing the convolutional layers. The classification network consists of two branches, namely the local and global branches. The local branch extracts features from the segmented lungs, while the global branch extracts features from the input CXR. These feature sequences are processed independently by two GRU blocks, and their outputs are concatenated into a single feature vector, which is passed through a dense layer activated by the sigmoid function to generate prediction scores. We have employed transfer learning techniques in designing our classification network. Our experimental results suggest that the proposed framework can be adopted for a more accurate diagnosis of thorax diseases from CXRs. Our study also indicates that while it is essential to focus on the lung region, contextual features provide additional clues, and their fusion improves the performance of CXR-based automated disease diagnosis. In the future, we plan to study how well our segmentation and classification models generalize to new domains and to explore domain adaptation techniques to enhance their performance. We also plan to redesign our classification model and train it simultaneously for localization of abnormalities in a multitasking framework.
A CAD system that can accurately localize abnormalities and classify diseases is expected to provide a more interpretable solution to radiologists.
6,837.6
2022-10-19T00:00:00.000
[ "Computer Science" ]
Optimization of a lumbar interspinous fixation device for the lumbar spine with degenerative disc disease Interspinous spacer devices used in interspinous fixation surgery remove soft tissues in the lumbar spine, such as ligaments and muscles, and may cause degenerative diseases in adjacent segments because their stiffness is higher than that of the lumbar spine. Therefore, this study aimed to structurally and kinematically optimize a lumbar interspinous fixation device (LIFD), using a full lumbar finite element model, that allows for minimally invasive surgery and does not affect the normal behavior of the lumbar spine afterwards. The proposed healthy and degenerative lumbar spine models reflect the physiological characteristics of the lumbar spine in the human body. The optimum spring wire diameter and number of spring turns in the LIFD were selected as 3 mm and 2 turns, respectively, from a dynamic range of motion (ROM) perspective rather than a structural maximum-stress perspective, by applying a 7.5 N·m extension moment and a 500 N follower load to the LIFD-inserted lumbar spine model. As the spring wire diameter in the LIFD increased, both the maximum stress generated in the LIFD and the ROM decreased; as the number of spring turns decreased, the maximum stress in the LIFD decreased and the ROM increased. When the optimized LIFD was inserted into a degenerative lumbar spine model with a degenerative disc, the facet joint force of the L3-L4 lumbar segment was reduced by 56%-98% in extension, lateral bending, and axial rotation. These results suggest that the optimized device can strengthen the stability of a lumbar spine that has undergone interspinous fixation surgery and reduce the risk of degenerative diseases at the adjacent lumbar segments. Introduction Lower back pain has various causes, but degeneration of the intervertebral disc (IVD) is the most common [1][2][3]. When the IVD degenerates, spinal canal stenosis occurs, in which the spinal canal, intervertebral foramina, and nerve root canals are narrowed. If sensory abnormalities in the buttocks or lower extremities, progressive neurologic deficit, or bladder or bowel symptoms appear, spinal fusion is performed along with nerve decompression surgery [4][5][6][7][8][9]. Spinal fusion is a surgical procedure that uses an implant to connect the vertebral segment of the surgical site to the adjacent segment. This procedure removes a significant portion of the ligaments and muscles that support the spine [10]. In addition, by fusing two lumbar segments into one, the movement of the surgical area is restricted; as a result, the movement of the adjacent segments increases, and degenerative diseases accelerate in those segments [10][11][12][13][14][15]. To address these problems, an interspinous spacer device (ISD) is inserted between the spinous processes. This increases the height of the lumbar spine segment, which is lowered in patients with spinal stenosis due to degeneration of the intervertebral disc, so that the passage through which the nerve bundle passes is no longer narrowed. The devices developed for this method are X-STOP (Kyphon, Inc., USA), the device for intervertebral assisted motion (Medtronic Sofamor Danec, USA), Wallis (Abbott Spine Inc., France), and Coflex (Paradigm Spine LLC, Germany) [16][17][18][19][20][21][22][23].
However, the installation of these devices leads to the removal of soft tissues in the lumbar spine, such as the supraspinous and interspinous ligaments, and an ISD with a higher stiffness than the lumbar spine may cause degenerative diseases in adjacent segments [24,25]. In our previous study, we designed a lumbar interspinous fixation device (LIFD) that addressed the limitations of the ISD and performed a finite element analysis to verify the fundamental performance of the LIFD in terms of structural and dynamical stability. Although the LIFD can be used for a lumbar spine with intervertebral disc disease, only the L3-L4 lumbar segment was used in that study, and the safety of the spinous processes with the LIFD inserted was not examined by calculating the facet joint force (FJF) in all lumbar segments [26]. Therefore, this study aims to structurally and kinematically optimize the LIFD using a full lumbar finite element model. The optimized LIFD guarantees the structural safety of the spinous processes, assessed via the FJFs, and kinematically normal behavior in terms of the ranges of motion (ROMs) of all lumbar spine segments. This was evaluated after inserting the optimized LIFD into the finite element model of the lumbar spine with the degenerative disc. Moreover, after installation, the LIFD minimizes degenerative diseases in the adjacent lumbar segments by allowing the lumbar spine to exhibit normal behavior; this can be realized because the interspinous and supraspinous ligaments are not removed during the surgical procedure when the optimized LIFD is employed. The proposed finite element models of the healthy lumbar spine and the lumbar spine with degenerative disc disease were also validated via comparison with the analytical and experimental results of previous studies. Finite element model of the healthy lumbar spine Lumbar spine data from a human anatomy model (Viewpoint Datalabs, USA) were used to generate the shape of the L1-L5 lumbar spine (Fig 1). The L1-L5 lumbar spine model comprised bone, soft tissues, cartilage, discs, and ligaments. The lumbar vertebrae were created with cancellous bone, cortical bone, posterior elements, and endplates. The cortical bone and endplates were modeled with a thickness of 1 mm [27][28][29][30][31][32]. The cancellous bone, cortical bone, and endplates were generated with eight-node hexahedral elements (C3D8), and the posterior elements with four-node tetrahedral elements (C3D4). The material properties were assumed to be linearly elastic. The facet joint was attached to the upper and lower posterior elements, and the initial upper and lower gaps of the facet joint were modeled as 0.5 mm. For the facet joint, four-node tetrahedral elements (C3D4) with linear elastic properties were employed, and frictionless contact was assumed between the upper and lower facet joints [33,34]. The intervertebral discs between the lumbar spine segments were created with the nucleus pulposus, ground substance, and annulus fibrosis. The ground substance was modeled with six layers, and the annulus fibrosis was modeled to surround each layer of ground substance. Eight-node hexahedral elements (C3D8H) with a Mooney-Rivlin hyperelastic material property were used for the ground substance. The fluid cavity was formed using four-node tetrahedral fluid elements (SFM3D4) to simulate the fluid behavior of the nucleus pulposus.
The annulus fibrosis was modeled with two-node truss elements (T3D2) that resist only tension, with a stiffness that increases with distance from the nucleus pulposus [35][36][37]. Validation of the finite element model of the healthy lumbar spine The healthy lumbar spine finite element model was validated by comparing the ranges of flexion-extension, lateral bending, and axial rotation motion, the nucleus pulposus pressure, and the FJF with values available in the literature [34,[41][42][43][44][45][46][47][48][49][50][51][52][53][54][55]. First, the healthy lumbar spine finite element model was verified under a pure moment and a pure follower load. The ROM and FJF were calculated by applying a pure moment of 7.5 N·m to the upper surface of the L1 lumbar vertebra in the directions of flexion, extension, lateral bending, and axial rotation [55]. Next, a follower load of 1000 N was applied to measure the nucleus pulposus pressure of the L4-L5 intervertebral disc [55]. In all finite element analyses, the lower surface of the L5 lumbar vertebra was fixed so that no displacement could be generated in any direction. Second, a combination of moment and follower load was applied, and the ROM, nucleus pulposus pressure, and FJF were measured. These loading conditions were taken from previous studies reporting in vivo measurements (Table 2). Finite element model of the lumbar spine with degenerative disc disease The lumbar finite element model with the degenerative disc was developed from the healthy lumbar spine finite element model with reference to previous studies, in which degenerative disc disease caused changes in disc height. A degenerative disc was assumed to be present between the L3-L4 lumbar vertebrae. In addition, the changes in the material properties and shape of the degenerative disc were taken from the literature [1,38]. In this study, compared with a healthy disc, the degenerative disc for the LIFD insertion was assumed to have a 60% reduction in height (Fig 1). The material properties of the nucleus pulposus were applied by referring to the results of previous studies showing that when the height of the disc was reduced by 60%, the compressibility of the healthy nucleus pulposus increased from 0.0005 mm²/N to 0.0995 mm²/N [38,48]. This reduction in disc height caused the anterior, posterior, transverse, flavum, and capsular ligaments and the annulus fibrosis to buckle; however, the interspinous and supraspinal ligaments were prestressed without buckling. Therefore, by offsetting the nonlinear stress-strain curve, prestress was applied to the interspinous and supraspinal ligaments, whereas the buckled ligaments and annulus fibrosis were made to follow the original nonlinear stress-strain curve once they regained their original lengths [38]. Validation of the finite element model of the lumbar spine with degenerative disc disease The validation of the lumbar finite element model with the degenerative disc was based on the results of previous studies, which verified the finite element model only in the L3-L4 lumbar region; therefore, we used only the L3-L4 lumbar region for comparison with the results of the present study. The lower surface of the L4 lumbar vertebra was fixed, and a pure moment of 10 N·m was applied to the upper surface of the L3 lumbar vertebra in the directions of flexion, extension, lateral bending, and axial rotation. The finite element model of this study was verified by comparing the ROM with previous findings [59].
(Table 2, continued: lateral bending, 700 N, 7.8 N·m [57]; axial rotation, 720 N, 5.5 N·m [58].) Optimum design and validation of the LIFD The LIFD spring component designed in our previous study was optimized using the Taguchi method (Fig 2). The number of turns of the active coils of the spring and the wire diameter of the spring were selected as the design variables, and the spring stiffness was fixed at 20 kN based on our previous results [26]. These variables are related through the standard helical spring rate formula $k = \frac{G d^4}{8 D^3 N}$, where $G$ denotes the shear modulus, $d$ is the wire diameter of the spring, $D$ is the mean diameter of the spring, and $N$ is the number of turns of the active coils of the spring. An L9 orthogonal array developed by Taguchi was adopted for the optimum design of the LIFD. The L9 array covers two variables, each with three levels; hence, nine finite element analyses were conducted to optimize the design variables (Table 3). Next, the LIFD was inserted into the L1-L5 model with the degenerative disc in the L3-L4 lumbar segment to create the optimal design. To simulate the extension of the LIFD itself and of the LIFD-inserted spinous process, the lower surface of the L5 lumbar vertebra was fixed such that no displacement could occur in any direction. A moment of 7.5 N·m in the extension direction and a follower load of 500 N were applied to the upper surface of the L1 lumbar vertebra. After selecting the optimal values of the LIFD design variables in the extension motion, the optimally designed LIFD was inserted into the lumbar spine model with the degenerative intervertebral disc, and the loading conditions listed in Table 2 were applied. The performance of the optimally designed LIFD was examined in terms of the ROM and FJF of both the healthy lumbar spine and the lumbar spine with the degenerative intervertebral disc for each motion. Validation of the finite element model of the healthy lumbar spine The ROM of the healthy lumbar spine for the entire L1 to L5 segments (i.e., L1-L5) during flexion, extension, lateral bending, and axial rotation under a pure moment was compared with the results of previous studies. The ROM was approximately 20˚. The nucleus pulposus pressure of the intervertebral disc between L4-L5 under a pure follower load was 1.01 MPa, consistent with previous findings from finite element analyses and in vitro experiments [Fig 3(D)]. Next, the ROM between two healthy lumbar spine segments was measured under moment and follower loads. The ROMs between the L1-L2, L2-L3, L3-L4, and L4-L5 lumbar segments are presented in Fig 6. The ROMs and nucleus pulposus pressures between the two lumbar spine segments were consistent with previous findings from finite element analyses and in vivo experiments. Validation of the finite element model of the lumbar spine with degenerative disc disease The ROMs of the healthy L3-L4 lumbar spine segment model with a healthy disc and the degenerative L3-L4 lumbar spine segment model with a degenerative disc during flexion, extension, lateral bending, and axial rotation under a pure moment of 10 N·m were compared with previous experimental and finite element findings. The ROMs of the healthy L3-L4 were 6.2˚, 2.5˚, 6.65˚, and 2.74˚ during flexion, extension, lateral bending, and axial rotation, respectively. The ROMs of the degenerative L3-L4 were 5.38˚, 3.45˚, 2.7˚, and 3.36˚ during flexion, extension, lateral bending, and axial rotation, respectively (Fig 7).
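Under the standard helical spring rate formula introduced above, the nine Taguchi combinations can be screened quickly. In the sketch below, the shear modulus G and mean coil diameter D are placeholders, as the paper does not report them in this excerpt; only the wire diameters and turn counts come from the study.

```python
from itertools import product

def spring_stiffness(G, d, D, N):
    """Helical compression spring rate k = G * d^4 / (8 * D^3 * N).
    G: shear modulus [N/mm^2], d: wire diameter [mm],
    D: mean coil diameter [mm], N: active turns -> k in [N/mm]."""
    return G * d**4 / (8 * D**3 * N)

G, D = 44_000.0, 20.0  # placeholder values, not taken from the paper
for d, N in product([3.0, 3.5, 4.0], [2, 2.5, 3]):  # the paper's design levels
    print(f"d={d} mm, N={N}: k = {spring_stiffness(G, d, D, N):.1f} N/mm")
```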
These ROMs were consistent with previous in vitro experimental results for flexion, extension, lateral bending, and axial rotation at a moment of 10 N·m. Optimum design and validation of the LIFD The maximum stresses of the LIFD were calculated for different numbers of spring turns and wire diameters during extension under a 7.5 N·m moment and a 500 N follower load. The ROMs and FJFs were calculated for the healthy L3-L4 lumbar spine model, the degenerative L3-L4 model, and the degenerative L3-L4 lumbar spine with the LIFD inserted. When the spring wire diameter was 3 mm, the maximum stresses of the LIFD in extension were 191 MPa, 195.7 MPa, and 261.5 MPa for 2, 2.5, and 3 spring turns, respectively. For the same spring design variable values, the ROMs of the LIFD-inserted L3-L4 lumbar spine model were 0.85˚, 0.77˚, and 0.61˚, respectively. When the spring wire diameter was 3.5 mm, the maximum stresses of the LIFD in extension were 153.2 MPa, 172.5 MPa, and 251.6 MPa for 2, 2.5, and 3 spring turns, respectively; the corresponding ROMs were 0.82˚, 0.73˚, and 0.55˚. Finally, when the spring wire diameter was 4 mm, the maximum stresses of the LIFD in extension were 148.9 MPa, 163.7 MPa, and 223 MPa for 2, 2.5, and 3 spring turns, respectively; the corresponding ROMs were 0.77˚, 0.71˚, and 0.34˚. Thus, both the maximum stress of the LIFD and the ROM of the LIFD-inserted L3-L4 lumbar spine decreased with increasing spring wire diameter, while an increasing number of spring turns increased the maximum stress and decreased the ROM (Figs 8 and 9). To analyze the performance of the optimally designed LIFD, both the ROMs and the FJFs of the healthy L3-L4, the degenerative L3-L4, and the LIFD-inserted L3-L4 were computed in flexion, extension, lateral bending, and axial rotation under the loading conditions listed in Table 2. The ROMs of the healthy L3-L4 in flexion, extension, lateral bending, and axial rotation were 5.46˚, 2.86˚, 6.04˚, and 1.3˚, respectively; those of the degenerative L3-L4 were 4.02˚, 2.53˚, 4.05˚, and 1.68˚, respectively. The FJFs of the healthy L3-L4 were 24.6 N, 14.3 N, and 66 N in extension, lateral bending, and axial rotation, respectively; those of the degenerative L3-L4 were 82.5 N, 15.1 N, and 101.8 N, respectively. The ROMs of the LIFD-inserted L3-L4 were 2.65˚, 0.85˚, 4.92˚, and 1.28˚ in flexion, extension, lateral bending, and axial rotation, respectively, and the FJFs were 1.14 N, 6.5 N, and 45.3 N in extension, lateral bending, and axial rotation (Fig 10). Based on the above results, the optimal design variable values were selected by considering both the maximum stress and the ROM. To minimize the maximum stress, the spring wire diameter and number of spring turns would be 4 mm and 2, respectively; to maximize the ROM, they would be 3 mm and 2, respectively. When the ROM is restricted, compensatory movements may occur in the lumbar segments above or below the operated segment, leading to degenerative disc disease at those levels. Therefore, we chose a spring wire diameter of 3 mm and two turns, prioritizing the ROM of the lumbar spine over the maximum stress of the LIFD.
With these design variables, the maximum stress computed in the LIFD under the 7.5 N·m moment and 500 N follower load was 191 MPa. This maximum stress corresponds to 20.1% of the yield strength of the LIFD material (950 MPa) and results in a safety factor of approximately 5 [60]. Therefore, a spring wire diameter of 3 mm with two turns satisfies both the mechanical stability requirement and the maximum ROM. Discussion In the current study, a finite element model of a healthy lumbar spine was developed based on a human anatomy model and verified via comparison with existing in vitro and in vivo experimental data and finite element analysis results. A finite element model of a degenerative lumbar spine was also created, with an intervertebral disc degenerated to various levels, and verified via comparison with the results of previous studies. After inserting the LIFD into the L3-L4 lumbar segments with the degenerative disc, the maximum stress at the spinous process and the LIFD, the ROM, the FJF, and the nucleus pulposus pressure were computed in the extension motion. These models were used to optimally design the LIFD. Finally, the optimally designed LIFD was inserted into the degenerative lumbar spine model with a degenerative disc, and the ROMs and FJFs of the LIFD-inserted L3-L4 in flexion, extension, lateral bending, and axial rotation were measured to validate the performance of the LIFD. Validation of the finite element model of the lumbar spine The ROMs of the healthy lumbar spine model in flexion, extension, lateral bending, and axial rotation were consistent with previous in vitro experimental findings [Fig 3(A) and 3(B)]. The FJFs for the extension and axial rotation motions were within the range of previous in vitro experimental findings [Fig 3(C)]. The FJF values vary between finite element models constructed by different researchers, depending on the shape modeling of the facet joint cartilage and the applied friction conditions. The pressure of the nucleus pulposus of the intervertebral disc was also within the range of previous findings obtained with other lumbar finite element models, but was approximately 14% to 25% smaller or greater than the in vivo experimental values. However, considering that the in vivo test results were measured in a single subject under maximal voluntary motion, a difference of approximately 25% between the experimental and analytical results can be considered reasonable [54]. In this study, both the FJF and the nucleus pulposus pressure of the L4-L5 intervertebral disc were also consistent with the range of in vitro experimental results [Fig 3(D)]. For both the healthy and degenerative L3-L4 lumbar spine models, the ROMs calculated in the directions of flexion-extension, lateral bending, and axial rotation under a 10 N·m moment were also consistent with previous in vitro experimental findings. In this study, the interspinous and supraspinous ligaments were prestressed owing to the loss of intervertebral disc height in the degenerative disc model. Therefore, the stiffness of the degenerative lumbar spine resisting flexion-extension and lateral bending increased, and hence the ROM of the degenerative L3-L4 lumbar spine decreased.
Moreover, consistent with the literature, all ligaments except the interspinous and supraspinous ligaments buckled in the degenerative L3-L4, resulting in an increase in the ROM in axial rotation compared with the healthy L3-L4 [38,59]. These results suggest that the finite element model of the lumbar spine constructed in this study is suitable for biomechanical analysis because it reflects the physiological characteristics of the human lumbar spine. Optimum design and validation of the LIFD The maximum stress calculated in the spinous process in the extension motion increased with increasing spring wire diameter and increasing number of spring turns in the LIFD, whereas the ROM of the L3-L4 lumbar spine decreased. As the spring wire diameter and the number of spring turns increased, the structural stiffness of the spring increased in the extension motion, reducing the deformation of the LIFD. In the extension motion, the ROM of L3-L4 tended to increase when the gap between the spinous processes decreased, while the ROM tended to decrease when the LIFD deformation decreased. The maximum stress in the LIFD increased with decreasing spring wire diameter and increasing number of spring turns. This is because an increase in the spring wire diameter distributes the load on the LIFD, and a decrease in the number of spring turns decreases the spring stiffness and hence increases the spring deformation. This result also indicates that a decrease in the number of spring turns decreases the load resisting L3-L4 motion in extension and decreases the maximum stress produced in the LIFD. The LIFD was inserted into the degenerative L3-L4 lumbar segment to verify the performance of the optimal design. Comparing the ROM and FJF before and after insertion, the ROM decreased by 22% in flexion and 66% in extension, while the FJF decreased by 98.6% in extension under the same conditions. The significant decrease in the FJF and ROM in flexion and extension indicates that the spring structure of the LIFD strongly influences tension and compression. Moreover, the 1.54% decrease in ROM in axial rotation after LIFD insertion, compared with the healthy L3-L4 lumbar segment, indicates that unstable behavior can be recovered in axial rotation. The maximum stress (~24.5 MPa) of the spinous process for the LIFD-inserted L3-L4 lumbar spine under a 7.5 N·m moment and 500 N follower load, which is significantly lower than the bone fracture stress (~213 MPa) reported in the literature [61][62][63], demonstrates the mechanical integrity of the current LIFD design. Although a decrease in ROM is expected after LIFD insertion surgery, previous studies of interspinous devices have reported pain reduction in the intervertebral disc and posterior joints [20,21]. Therefore, inserting the currently designed LIFD in a patient with intervertebral disc disease can significantly reduce the FJF and intervertebral pain. In previous studies, the performance of interspinous devices was evaluated via finite element analysis by inserting the devices into a healthy lumbar finite element model. However, when a healthy lumbar finite element model is used, the material properties and deformation of degenerative discs are not taken into account.
For this reason, changes in the ROM and in the facet joint force caused by degenerative intervertebral discs cannot be reflected, and the effectiveness of an interspinous device for degenerative intervertebral discs cannot be confirmed. In this study, a design of the LIFD was proposed by selecting the optimal values of the spring wire diameter and the number of spring turns for the spring in the LIFD. The performance of the LIFD was confirmed using the lumbar finite element model with the degenerative disc model implemented in this study. However, the current study has the following limitations. Degenerative disc information was reflected in the finite element analysis, but lesions commonly present in patients requiring surgery, such as osteopenia, osteoporosis, and lumbar spondylolisthesis, were not. The bone fracture stress in patients with osteopenia or osteoporosis is lower than the reported value; therefore, studies that account for this are required. In addition, the performance of the current LIFD was evaluated only through finite element analysis and not through rig tests using material testing machines. Conclusions In this paper, we proposed a design for the LIFD by selecting, from a dynamic perspective, the optimal values of the spring wire diameter and the number of spring turns for the spring in the LIFD. The finding that the FJF of the LIFD-inserted lumbar spine with the degenerative disc decreased by approximately 55% to 98% in extension, lateral bending, and axial rotation suggests that the optimally designed LIFD can reduce pain caused by posterior joint lesions. In future studies, fatigue tests and analyses should be performed to validate the durability of the LIFD in the human body and to investigate changes in performance and safety after long-term use.
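For reference, the percentage changes quoted in the discussion and conclusions (e.g., the 22% and 66% ROM reductions and the 98.6% FJF reduction) follow from the usual relative-change formula. A minimal sketch with placeholder values (the pre- and post-insertion magnitudes themselves are not all listed in this excerpt, so the numbers below are illustrative only):

```python
def percent_reduction(before: float, after: float) -> float:
    """Relative reduction (%) of a quantity after LIFD insertion."""
    return 100.0 * (before - after) / before

# Illustrative values only -- not the actual model outputs.
rom_extension_before_deg = 5.0
rom_extension_after_deg = 1.7
print(f"{percent_reduction(rom_extension_before_deg, rom_extension_after_deg):.0f}% reduction")  # -> 66%
```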
5,744.2
2022-04-07T00:00:00.000
[ "Engineering", "Medicine" ]
The Near-Horizon Geometry of Supersymmetric Rotating AdS$_4$ Black Holes in M-theory We classify the necessary and sufficient conditions to obtain the near-horizon geometry of extremal supersymmetric rotating black holes embedded in 11d supergravity. Such rotating black holes admit an AdS$_2$ near-horizon geometry which is fibered by the transverse spacetime directions. Despite their clear interest to understanding the entropy of rotating black holes, these solutions have evaded all previous supersymmetric classification programs due to the non-trivial fibration structure. In this paper we allow for the most general fibration over AdS$_2$ with a flux configuration permitting rotating M2-branes. Using G-structure techniques we rewrite the conditions for supersymmetry in terms of differential equations on an eight-dimensional balanced space. The 9d compact internal space is a U$(1)$-fibration over this 8d base. The geometry is constrained by a master equation reminiscent of the one found in the non-rotating case. We give a Lagrangian from which the equations of motion may be derived, and show how the asymptotically AdS$_4$ electrically charged Kerr-Newman black hole in 4d $\mathcal{N}=2$ supergravity is embedded in the classification. In addition, we present the conditions for the near-horizon geometry of rotating black strings in Type IIB by using dualities with the 11d setup. Introduction The idea of extremization principles playing a fundamental role in physics has a long history since the advent of the Lagrangian and the principle of least action. More recently extremal problems have also been shown to play a role in both quantum field theory and supergravity. On the field theory side a-maximization [1], F -maximization [2], c-extremization [3,4] and I-extremization [5] have been successfully used to compute observables in SCFTs in 4, 3, 2 and 1 dimension(s) respectively. Via AdS/CFT it is natural to conjecture that there are dual extremization principles on the gravity side. Indeed such geometric extremization principles have been found for all of the field theory principles mentioned above. In [6,7] a geometric dual to a-maximization and F -maximization was given whilst in [8] an analogous proposal for c-extremization and I-extremization was given for certain classes of theories. The classes of solutions tackled in [8] and in the later works [9][10][11][12][13][14][15] are AdS 3 solutions in Type IIB and AdS 2 solutions in 11d supergravity. Subclasses of these arise as the near-horizon of static black strings and black holes embedded in the respective theories. 1 For example, the near-horizon limit of a static asymptotically AdS 4 extremal black hole in 4d gauged supergravity contains an AdS 2 factor, see the review [17] and references therein. The staticity of the black hole requires that the transverse directions of the geometry are not fibered over AdS 2 but merely form a warped product. If one further restricts to magnetically charged black holes and uplifts the near-horizon solution to 11d supergravity, one obtains a supersymmetric solution with an AdS 2 factor and electric four-form charge. Solutions of this form were classified in [18] and later extended in [19,20] to include additional magnetic flux. The geometries are a warped product of AdS 2 with a nine-dimensional internal manifold which is locally a U(1) bundle over a conformally Kähler space. 
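To make the structural distinction described above concrete, one can contrast the two cases schematically (a sketch only — the precise warp factors and fibration one-forms are derived later in the paper and are not reproduced here):

$$ ds^2_{11} = e^{2A}\Big[ds^2(\mathrm{AdS}_2) + ds^2(Y_9)\Big] \;\;(\text{static}) \qquad\longrightarrow\qquad ds^2_{11} = e^{2A}\Big[ds^2(\mathrm{AdS}_2) + g_{\mu\nu}\,D\phi^\mu\, D\phi^\nu + G_{MN}\,dy^M dy^N\Big] \;\;(\text{rotating}), $$

with $D\phi^\mu = d\phi^\mu + k^\mu\, r\, dt$. The constants $k^\mu$ play the role of near-horizon angular velocities along the internal Killing directions, as discussed in appendix A; the placement of the warp factor here is purely schematic.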
To construct these geometries one places M2-branes in an asymptotic geometry of R × CY 5 and wraps them on a curve inside the Calabi-Yau five-fold. The near-horizon of this setup then gives rise to the AdS 2 geometry which in turn is seen to be the near-horizon of a black hole. In order to obtain an AdS 2 solution it was important that the 4d black hole was both static and only magnetically charged. Adding rotation to the four-dimensional black hole leads to the internal space being fibered over the AdS 2 in the near-horizon, which will clearly persist in the uplift. Though not as obvious, if the 4d black hole has electric charges which are identified as arising from gauged flavour symmetries, this will also lead to a fibered AdS 2 in the 11d uplift. A gauge field in the truncation can have two sources, either it comes from gauging an isometry of the compactification manifold, or from the expansion of a p-form potential on (p − 1)-cycles of the compactification manifold. The former gauge fields are dual to flavour symmetries whilst the latter are dual to baryonic symmetries. For the flavour symmetries the uplift will lead to the isometries being fibered over AdS 2 in the 11d solution. In summary, in order to incorporate more general black holes which rotate and have electric charges, one must relax the product structure of the 11d solution and allow for the internal manifold to be fibered over AdS 2 . In contrast, one of the essential ingredients used in the works [18,20], and more generally in AdS classifications, is that the AdS factor is a direct product in the metric. In this paper we will lay the groundwork for extending the geometric dual of I-extremization and c-extremization to theories arising from the near-horizon of rotating black holes and black strings respectively. Concretely we will classify all supersymmetric solutions of 11d supergravity containing an internal manifold arbitrarily fibered over AdS 2 . With such a general ansatz we cover the black holes considered in [21][22][23][24][25]. To the best of our knowledge this is the first time in the literature this has been performed. 2 We find that the 9d internal manifold is a U(1) fibration over an 8d space admitting a balanced metric. The balanced metric satisfies a master equation which is the analogue of the one found in the non-rotating case [18,27], see also [19,[28][29][30] for further generalizations of these master equations. Through dualities we also classify a class of rotating black string near-horizons in Type IIB. The outline of this paper is as follows. In section 2 we study the necessary and sufficient conditions for a supersymmetric solution with time fibered over the transverse directions and consistent with preserving an SO(2, 1) symmetry. In section 3 we give an action from which the equations of motion found in section 2 may be derived. In particular we show that when supersymmetry is imposed on the action it reduces to a simple form which computes the entropy of the black hole/string. By way of exposition we show in section 4 how the electrically charged AdS 4 Kerr-Newman black hole is embedded in the classification. Section 5 discusses the conditions on the geometry of rotating black strings in Type IIB by using dualities with the 11d geometry. We conclude in section 6. A discussion on general black hole near-horizons and computing observables of the solutions is presented in appendix A. 
Setup In this section we will explain the general procedure for obtaining the conditions for preserving supersymmetry of near-horizon solutions of rotating black holes. In general the conditions we find are necessary and sufficient conditions that must be satisfied by the near-horizon of any rotating black hole in 11d supergravity arising from rotating M2-branes. We will determine these conditions by using 2 There is the nice paper [26] where the embedding of static AdS4 black holes in 11d supergravity were considered. the results in [31] which classified all 11d supergravity backgrounds preserving supersymmetry and admitting a timelike Killing vector. Using [31] we can reduce the 11d supersymmetry conditions into differential conditions on a 10d base space. This base space must be non-compact and upon imposing the natural condition that the 10d space is a cone we can reduce the conditions further to a compact 9d base, Y 9 . This 9d base is a U(1) fibration over an 8d base, B. In general the 8d base is not conformally Kähler, which is true for the non-rotating AdS 2 case studied in [18], but instead is a conformally balanced space. One of the guiding principles that we will use is to impose that the near-horizon solution possesses an SO(2, 1) symmetry dual to the conformal group in the 1d superconformal quantum mechanical theory. Generally the ansatz that we will use when reducing the supersymmetry conditions does not possess this full symmetry but only a subset of it. However, from the point of view of imposing supersymmetry it is more convenient to work with this more general setup and then further constrain the geometry to preserve the full conformal group later. We will find that the additional constraints that we need to impose for the existence of an SO(2, 1) symmetry are specified by giving a constant vector with entries corresponding to each of the Killing vectors of the metric. These constants are related to the near-horizon angular velocities of the black hole along the Killing directions. We begin this section by reviewing the conditions for a supersymmetric geometry in 11d supergravity to admit a timelike Killing vector following [31]. We discuss in detail the ansatz we will use in performing the reduction and subsequently reduce the conditions to an 8d base space. Up until this point we have not imposed the existence of an SO(2, 1) symmetry and in the final part of this section we discuss the additional constraints one must impose for such a symmetry using the results in appendix A. Timelike structures in 11d supergravity In [31] the conditions for a solution of 11d supergravity to admit a timelike Killing spinor were derived. Here we summarize the most important results for our purposes. The metric takes the general form where ∆ and e 2φ are functions defined on the 10d base. Note that we use a rescaling e 2φ of the 10d metric compared to [31]. The 10d base admits a canonical SU(5) structure which we denote by (j, ω) 3 . We normalize this structure such that The exterior derivatives of the structure forms satisfy Here the w i are the torsion modules of the SU(5) structure: w 1 is a real (2, 0)+(0, 2)-form, w 2 a real primitive (3, 1)+(1, 3)-form, w 3 a real primitive (2, 1)+(1, 2)-form and w 4 and w 5 are real one-forms. 
The 11d four-form flux is decomposed into 10d fluxes as Following the results of [31], imposing supersymmetry yields the following conditions relating the fluxes to the structure forms Moreover it follows that the 11d flux takes the form where da decomposes as da = da (0) j + da (1,1) 0 + da (2,0) + da (0,2) , and h (2,2) 0 is the primitive (2,2) part of h 4 and is unconstrained by supersymmetry. Additionally the torsion module w 5 is fixed by supersymmetry to be For a supersymmetric solution to exist these conditions must be supplemented by the Bianchi identity and Maxwell equation The set of equations as given above are both necessary and sufficient for a solution to admit a timelike Killing spinor. Our main motivation is to obtain the near-horizon geometries of rotating M2-branes wrapped on Riemann surfaces, which may give rise to the near-horizon of rotating black holes. We must therefore make some assumptions about the form of the solution. To engineer such solutions one should place the rotating M2-branes in an asymptotic geometry of the form R t CY 5 and then wrap the M2-brane on a Riemann surface inside the Calabi-Yau five-fold. Note that the rotation of the M2-brane leads to the non-trivial fibration of the 11d space-time, with the time direction fibered over the five-fold. Since the asymptotic geometry is Calabi-Yau it is natural to expect that our 10d base space is complex, which requires that w 1 = w 2 = 0. This is indeed how the rotating M2-brane solution is embedded in the classification of [31]. We will be satisfied with using the complex condition as a well-motivated ansatz in the following though it would certainly be interesting to lift this restriction. In addition to requiring the complex condition we also want to eliminate the possibility of having flux sourcing M5-branes. For this reason we will remove any terms appearing in the flux which are of Hodge type (4, 0)+(0, 4), since these would not come from to M2-branes wrapped on a Riemann surface. 4 From (2.8) and (2.9) we see that this assumption implies w 4 = 3 d log ∆ − 8 dφ. Under these assumptions the 10d torsion conditions are (2.12) The last unspecified torsion module is given by the primitive part of the three-form flux: w 3 = e −2φ f 3,0 . The 11d flux can now be succinctly written as where we define the shifted four-form flux 14) The Bianchi identity (2.10) and Maxwell equation (2.11) can now be rewritten in terms ofh (2,2) as 10 d(e 2φ j) = 1 2h (2,2) ∧h (2,2) . Ansatz To proceed we must now insert an ansatz for the 10d base space. It was shown in [31] that the base is necessarily non-compact (the argument uses some smoothness conditions but these should hold in the present setting), and so we impose that the base is conformally a cone. The metric we take is Next we need to specify how the scalar fields ∆, φ, connection one-form a and fluxes scale with respect to the radial coordinate. Ultimately we want to be able to recover a warped AdS 2 factor and an r-independent 9d space. This fixes the scaling of ∆ and φ to be where we have introduced two new scalars B and C which are independent of the radial coordinate. For general scalar C this will not lead to a geometry admitting an SO(2, 1) isometry generating the conformal group in 1d. As discussed earlier one must impose additional constraints. Rather than imposing them now it is more convenient to impose them later and leave the scalar C unconstrained for the moment. 
The conical geometry naturally gives rise to an R-symmetry vector ξ defined by As can be easily checked by explicit computation the norm squared of the vector is r 2 . On the link of the cone at r = 1 this translates to the existence of a unit-norm vector generating a holomorphic foliation over an 8d base admitting an SU(4) structure inherited from the parent SU(5) structure. We denote this 8d base by B. Introducing coordinates for this vector we can write the dual one-form as η = dz + P , (2.23) where P is a one-form on B. We may now decompose the SU(5) structure (j, ω) in terms of the SU (4) structure, which we denote by (J, Ω), as (2.24) Here we include a scaling e −3B−C/3 of the 8d base, and a phase along the z-direction. The choice of scaling has been chosen so that the two form is balanced rather than conformally balanced as will become clear in the following section. While the phase is required by supersymmetry and implies that the holomorphic volume form has unit charge under the vector ξ. The scaling of the connection one-form appearing in the time-fibration is fixed to be a = r(α η + A) , (2.25) where α and A denote an 8d scalar and one-form respectively. Note that we did not include a term with a leg on dr in this decomposition because such a term could be absorbed by redefinitions and coordinate changes for a near-horizon geometry. It will turn out that imposing the SO(2, 1) symmetry will further constrain the one-form a and scalar C however we postpone this discussion to later. The field strength da is da = α dr ∧ η + dr ∧ A − r η ∧ dα + r (α dη + dA) . (2.26) With these ansätze the 11d metric becomes We recover the non-rotating case by setting α = 0, A = 0 and e 2C = 1. 5 Finally we must fix the r-scaling of the flux. The scaling is fixed by regularity as r → 0 and preserving the SO(2, 1) symmetry which requires the radial dependence to only appear in the oneforms dt r and dr r . (2.28) It follows that the 10d fluxes f 3 andh (2,2) decompose in terms of 8d fluxes as In principle one could include a piece of f 3 with one leg on η and two legs on B, but we omit it here because it will be put to zero by supersymmetry. Note that we keep track of the Hodge type of the components ofh (2,2) , where the holomorphic and anti-holomorphic one-form associated with dr and η are given by e 1 = dr − irη and its conjugate respectively. 8d supersymmetry conditions We can now derive the 8d conditions by reducing their 10d counterparts using the ansätze presented in the previous section. Let us begin by reducing the SU(5) structure torsion conditions to SU(4) structure conditions. From decomposing (2.12) we find Recall that Ω has unit charge under the vector ∂ z which is evident from (2.24). From these equations we can deduce the SU(4) torsion modules W i . From (2.37) we immediately see that the 8d base is complex: W 1 = W 2 = 0. Furthermore, from (2.34) we see that W 4 = 0, i.e. the base is balanced. Fixing the two-form to be balanced as opposed to conformally balanced fixed the choice of scaling of the 8d base in (2.24). In particular the base is not Kähler: the third torsion module is related to the primitive part of F 3 as W 3 = e −2C/3 F 3,0 . However, for the Kerr-Newman electrically charged black hole that we consider in section 4 this part of the flux vanishes, and the 8d base is therefore Kähler. From (2.37) we find W 5 = −4J · P − 4 3 dC and this fixes the Ricci-form of the base in terms of the connection P and the scalar C as we show below. 
Before proceeding it is useful to rewrite the three-form flux f 3 as which puts it into a form more reminiscent of the non-rotating case [18]. In fact, we will take ξ to be a symmetry of each of the scalars B, C individually, though supersymmetry does not require this. This assumption is natural since we want ξ to play the role of the R-symmetry vector of the solution. Note that these conditions imply that it is a Killing vector of the 10d space and by imposing L ξ α = 0, it is in fact a Killing vector for the full 11d metric. Taking the exterior derivative of (2.37) implies Note that the second term is exact since we require the scalar C to be globally well-defined. This in turn allows us to compute the Chern-Ricci scalar 6 The Chern-Ricci scalar is related to the more common 8d Ricci scalar via 7 It is clear from the above relation that the two scalars coincide when the manifold is Kähler. (2.45) 6 Here we find the d'Alembertian operator through the short computation: where we use that 1 3! dJ 3 = d * J = 0. 7 Note that this is equivalent to the identity R8 = R C − 1 2 |d c J| 2 that is also used in the literature. From this decomposition it is simple to show that the R-symmetry vector ξ is not just a symmetry of the metric, but also for the 10d fluxh (2,2) , i.e. In fact, we find that ξ is a symmetry for the full 11d flux G 4 as well, since by using (2.26) and that the scalar α has vanishing Lie-derivative along ξ one can show that This is then consistent with our interpretation of ξ as being the Killing vector dual to the R-symmetry of a putative dual field theory. From the 10d Maxwell equation we find the set of equations This is the rotating version of the master equation [18,27]. It reduces to the familiar non-rotating master equation of [18] by setting e 2C = 1, H (2,2) = 0 and dJ = 0 (so that R C = R 8 ). One can be slightly more explicit with the form of the flux terms and determine them up to primitive pieces. From (2.26) and by decomposing da in term of its Hodge type we find We can use these decompositions to reduce (2.18) which implies: (2.54) Therefore we may rewrite the fluxes as denotes the primitive piece. In principle one could now substitute these expressions into the Bianchi identities and Maxwell equations however this is not particularly enlightening and so we refrain from presenting them here. Note that the primitive pieces are essential for satisfying the Bianchi identities. Imposing the SO(2,1) isometry So far our analysis has been for general scalars C, α and one-form A. However, in order to construct the near-horizon of a black hole we need to impose that there is an SO(2, 1) isometry, which leads to constraints on these fields. In appendix A we have given the general metric for the near-horizon of a rotating black hole with a manifest AdS 2 factor over which the internal manifold is fibered and seen the constraints that this imposes on the geometry. In particular the fibration is governed by a vector of constants k i associated to each Killing vector of the internal manifold fibered over AdS 2 . As we reviewed in the appendix the necessity for these parameters to be constant arises in order that there is an SO(2, 1) isometry. From the analysis of appendix A we find that the scalar and one-form take where ∂ φ i are the Killing vectors of the internal manifold and the metric g ij is the metric on ds 2 9 , as defined in (2.19), restricted to the angular coordinates. Denoting by the dual one-form of the Killing vector ∂ φ i using the metric on ds 2 9 . 
Then the one-form a is simply In the remainder of this section let us assume that the 8d base is Kähler since this will allow for more explicit expressions. In addition we will assume that the base is toric, with the 9d space Y 9 admitting a U(1) 5 action with Killing vectors ∂ φ i . 9 We may write the one form η as 10 where the w i are the moment map coordinates of the cone restricted to Y 9 . Moreover the Kähler two-form on the base may be expanded as where x i are global functions on Y 9 since b 1 (Y 9 ) = 0 for a toric contact structure. Note that With this short (and very incomplete) review of toric geometry we may proceed with writing the scalars and one-form in terms of the global functions of the toric geometry defined above. It follows Next consider A, we find the simple result Finally we may evaluate (2.56) which implies where | · | 2 8 is the norm with respect to the Kähler metric. In principle one could try to solve this for the scalar C, however this is a sextic equation to solve. One could use (2.64) as defining the combination e −3B−C/3 which appears ubiquitously in the geometry. Note that this last comment only applies when the gauge field A is non-zero. When it vanishes and the fibration is only along the R-symmetry direction, it turns out that C is constant. To see this it is more insightful to use the parametrization employed in appendix A where the z-coordinate 9 We need not require the full space to be toric for our arguments to hold, we merely do so for simplicity of exposition. An interesting case to consider, which requires a minor generalization, is to consider a Riemann surface embedded into Y9 as with n a four-vector of constant twist parameters which are the Chern numbers of the U(1) bundle over the Riemann surface [9]. 10 We follow the toric geometry notational conventions of [9]. is assigned its own constant k z , i.e. we do not use the basis ∂ φ i used previously in this section. In this basis the Killing vectors are the four U(1) isometries of the base and the R-symmetry vector ∂ z . It is then clear that for A to vanish each of the four constants associated to the U(1)'s of the base must be zero. It follows from (A.15) that α is precisely the constant −k z . Moreover e −2C takes the constant value, The natural interpretation of this subcase is that of the near-horizon of a non-rotating black hole equipped with an electric component for the graviphoton and possibly including magnetic charges for each of the gauge fields in the 4d theory. 3 Action for the theory One of the essential ingredients for performing the extremization in [8] was the existence of an action which gave rise to the equations of motion of the theory. This action was derived in [36] for the near-horizon geometry of static black holes and strings in M-theory and Type IIB respectively. As a first step towards performing the extremization in the rotating case we will construct the analogous rotating action. Thereafter we impose the supersymmetry constraints on this action and show that it reduces to a simple and familiar form. The action computes the entropy of these black holes. Non-supersymmetric action The simplest method for constructing an action for the 9d geometry is to reduce the 11d action using our ansätze. By construction the equations of motion of the resulting 9d action will match the ones obtained in the section 2.3. We start from the action of eleven-dimensional supergravity Here C 3 is the three-form potential and G 4 = dC 3 is its field strength. 
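Since the displayed equation for the 11d action does not appear in this excerpt, we note for orientation that the bosonic action of eleven-dimensional supergravity referred to here takes the standard form (signs and the normalization $2\kappa_{11}^2 = 16\pi G_{11}$ are convention dependent; the version below is a common choice and is an assumption, not a quotation from this paper):

$$ S \;=\; \frac{1}{2\kappa_{11}^2}\int \Big( R \,\star 1 \;-\; \tfrac{1}{2}\, G_4\wedge \star G_4 \;-\; \tfrac{1}{6}\, C_3\wedge G_4\wedge G_4 \Big), \qquad G_4 = dC_3 . $$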
Using the ansätze we reduce this action to 10d. The Bianchi identity dG 4 = 0 implies that We write these field strengths in terms of their potentials as Now we can write G 4 into the convenient form where we introduce the shifted four-form field strength Note that although we add a superscript to indicate that upon imposing supersymmetry this field strength is a (2, 2)-form, at the moment we have not imposed supersymmetry yet so we have to treath (2,2) as a general four-form. The 11d potential C 3 can now be expressed in terms of the 10d potentials as By using these ansätze and definitions, we find the 10d Lagrangian Next we want to consider the reduction of this Lagrangian to 9d, by using the cone ansatz presented in (2.2). In addition, we want to split off the η-direction from the 8d space B so that we end up with a 9d Lagrangian density of the form L 9 = η ∧ (. . .) where the dots represent an expression in terms of fields defined on B. The relevant ansätze for this reduction are 11 Performing this reduction is a lengthy but in principle straightforward calculation. We find the 9d Lagrangian 12 (3.10) From this action one can derive the equation of motions that define the solutions discussed in the previous section 2. 3. Note that we have not imposed any supersymmetry in deriving this action. Supersymmetric action Here we consider the restriction of the Lagrangian obtained above to off-shell supersymmetric geometries. We say these 9d geometries are off-shell because we do not impose the equations of motion such as (2.52), and supersymmetric since we do impose the supersymmetry constraints discussed in section 2. We will see that the Lagrangian (3.10) becomes quite simple once supersymmetry has been imposed. The simplest method is to impose supersymmetry in 10d and subsequently reduce to 9d, instead of starting from the 9d Lagrangian (3.10). We begin with the 10d non-supersymmetric 12 We split off the r-coordinate as L10 = L9 ∧ r −2 dr. Splitting off dr on the left side would give an overall minus sign. Lagrangian Here we can readily plug in the susy conditions Furthermore, we use the decompositions to write out the Hodge stars By combining all these results, we find the 10d supersymmetric Lagrangian Here we also used that w 4 = j dj = 3 d log ∆ − 8 dφ. We reduce this Lagrangian to 9d using the ansätze ds 2 10 = dr 2 + r 2 η 2 + r 2 e −3B−C/3 ds 2 8 , e 2φ = r −3 e 3B+C , ∆ = r −1 e B+C , j = r η ∧ dr + r 2 e −3B−C/3 J , (3.19) and find (again using L 10 = L 9 ∧ r −2 dr) (3.20) We simplify this expression using the supersymmetry conditions J dη = e −3B−C/3 , as well as the relation between the Ricci and the Chern-Ricci scalar This yields the surprisingly simple result Note that this is the same expression for the 9d supersymmetric action as was obtained in the nonrotating case in [8]. A subtle difference is that dη = ρ here, but rather ρ = dη + 1 3 dd c C. However since the forms ρ and dη are in the same cohomology class this distinction does not matter. Observe where the first term equality uses the fact that J, dJ and d c C are basic 13 with respect to to the R-symmetry vector ξ, and the second equality follows since the first term is a total derivative and the second vanishes because J is balanced. We conclude that we may replace dη by ρ in expression (3.23) and therefore the integrals for computing the supersymmetric action, and therefore the entropy in both the rotating and non-rotating cases are exactly the same. 
Later in section 5 we will discuss how one can obtain near-horizon geometries of rotating black strings in Type IIB from the 11d setup considered so far. In anticipation of this, let us reduce the 9d action for geometries on the M-theory side to a 7d action for geometries on the Type IIB side. These 13 A form β is basic with respect to ξ if it satisfies both ξ β = 0 and L ξ β = 0. 7d geometries can be obtained from the 9d geometries by requiring that the 9d geometry admits a two-torus. By using the ansatz (5.2) we find the supersymmetric Lagrangian for the 7d geometry to be L SUSY Let us point out that one can replace dη by ρ (6) in the 7d Lagrangian only when τ is constant. Namely, for a non-trivial axio-dilaton profile the term dQ appearing in (5.6) is only locally exact, and therefore cannot be interpreted as a total derivative term as was the case for the dd c C term. As studied in [15] it is more convenient to view these near-horizon geometries from an 11d perspective rather than a 10d one. The central charge of the dual 2d SCFT is given by Embedding of the AdS Kerr-Newman black hole Here we study the embedding of the supersymmetric limit of the AdS 4 Kerr-Newman (KN) black hole solution found in [37] and further studied in [38,39] into our classificatio by using the uplift of minimal gauged supergravity on an arbitrary 7d Sasaki-Einstein manifold. Note that we could have taken one out of the zoo of supersymmetric rotating AdS 4 solutions, e.g. [32,[40][41][42]. We choose the KN solution since it is the simplest yet contains all the necessary ingredients. We begin by considering the black hole in four dimensions before studying the full eleven-dimensional solution. Kerr-Newman solution The four-dimensional black hole is given by wherer = r + 2m sinh 2 δ , W =r 2 + γ 2 cos 2 θ , The solution is characterised by three constants (γ, δ, m) whilst the parameter α gauge is related to a pure gauge transformation and is therefore not a parameter of the solution. The solution describes a non-extremal black hole provided that γ 2 < 1 and m is bounded from below. The exact value of the bound is not important for our purposes, but it is derived in [43]. Without loss of generality we have m, δ, γ > 0. The black hole is characterized by its energy E, electric charge Q and momentum J: The Bekenstein-Hawking entropy of the black hole can be found by computing the area of the outer horizon, resulting in where r + denotes the largest positive root of ∆ r = 0, and therefore describes the location of the outer horizon. For arbitrary values of the parameters (γ, δ, m), the black hole is neither extremal nor supersymmetric. The BPS limit is defined by first imposing supersymmetry and then extremality. The supersymmetry is attained by imposing The solution is now supersymmetric but not extremal, in fact it has timelike closed curves and a naked singularity. To remedy this and obtain an extremal black hole we further identify There is now only a single parameter left in the theory, namely γ. With these identifications the function ∆ r acquires a double root at with the other two roots becoming complex. Near-horizon limit We now want to take the near-horizon limit of the solution. It is convenient through a change of coordinates to shift the double root location in ∆ r to 0 and to rewrite the function as where ρ = r − r * . (4.10) Since we will need to evaluate the function f at the horizon often, we note that f (0) = 1 + γ (6 + γ) . 
(4.11) In the metric, the change of the r to ρ coordinate results only in changes in the functions (4.3), since the dr term is invariant. To simplify notation we will therefore shift the functions such that an argument of 0 means we evaluate at the horizon. In particular we now takẽ r(ρ) = ρ + r * + 2m sinh 2 δ , (4.12) such thatr(0) is evaluating the functionr at the horizon. Similarly W (0, θ) evaluates W at the horizon; for notational convenience we denote the functions W (0, θ) = W (θ) and f (0) = f 0 . Furthermore, in the BPS limit one can derive thatr(0) = √ γ. To take the near-horizon we perform the change of coordinates where β is a constant that we will determine shortly an then send → 0. The near-horizon limit is now obtained by taking → 0 after making the above substitutions. The dθ 2 term will clearly be sent to W (θ)/∆ θ , and we can ignore this term for time being. We find In the last line we can expandr( ρ) ∼ √ γ + ρr (0) + O( 2 ), resulting in a term which diverges as −1 , proportional to the constant The existence of this term is the reason we introduced the shift in the φ coordinate, and it can be set to zero by fixing the constant β in the shift as Including this factor of β we can combine the results from above and write down the final result for the near-horizon solution where, in order to make the AdS 2 factor manifest, we rescaled the time-coordinate Consider now the gauge field. Performing the same near-horizon limit and imposing the BPS limit, we find a divergent term in the gauge field, proportional to This term is purely gauge and we can remove it without problem by making a suitable choice for the gauge parameter. The resulting near-horizon vector field is where of course the time coordinate has been rescaled with the same factor (4.18) as in the metric. Uplift to 11d Now that we have derived the near-horizon metric and gauge field of the AdS 4 KN solution in minimal supergravity we can consider the uplift to 11d supergravity. The uplift of the metric and flux to eleven dimensions are given by ds 2 11 = ds 2 4 + η + 1 4 A 2 + ds 2 6 , where F = dA is the field strength of A, ds 2 4 is the near-horizon metric we just derived in (4.17) and ds 2 6 is the base of the Sasaki-Einstein manifold with η = dz + σ dual to the Reeb-vector ∂ z . The conventions are chosen such that dη = 2J, where J is the Kähler form on ds 2 6 . We now want to rewrite the metric and flux appearing in (4.21) in the form of our classification as presented in section 2.3. To recover this form, we write the metric in (4.21) such that it becomes a time-fibration over a base. It is also necessary to perform some coordinate redefinitions (4.22) After completing the straightforward but tedious rotations of the vielbeins and shifting the coordinates, the metric we find is of the following form (4.23) We will now clarify the several notational conventions used in this metric. Firstly, we have renamed the coordinate ρ to r, in order to conform with the conventions of the classification. We have also introduced the function Y , Dφ and redefined η as Dφ . (4.24) Note that the coordinate shift we made in (4.22) was necessary to ensure that the metric ends up with dz 2 , with its coefficient being exactly equal to one. The scalars e C and e B are found to be Recall that these scalars can also be used to compute ∆ in (2.20). 
The last remaining puzzle-piece in the metric is the fibration a, which is given by The fibration is of the expected form a = r(α η + A), and this specification of a completes the endeavour of writing the metric in the classification form. Now we can move on to consider the flux; recall that in the classification we wrote it as (2,2) , h (2,2) = dc 3 . (4.28) We have already found the fibration a in (4.27) and e 2φ is given in terms of the scalars e B and e C , by making use of (2.20). The ten-dimensional complex structure form j can be found from the vielbeins of the metric we found in (4.23). Our remaining tasks thus consists of finding an expression forh (2,2) , which in its turn is determined by the potential c 3 . The only form we have thus not yet specified is the potential c 3 . After carefully rewriting the flux we obtain from (4.21), the resulting potential is given by where the AdS 2 is now clearly visible. As before, ds 6 denotes the base of the Sasaki-Einstein manifold and the one-form σ is still defined on the Kähler-Einstein space ds 6 such that dσ = 2J. Apart from these already familiar notions we established several new notational conventions; first of all we have introduced γ θθ and M µ (θ) as , (4.32) . (4.33) Besides these coefficients we introduced indices µ, ν ∈ {z, φ}, along with a metric γ µν we will specify below and, finally, defined dψ as The metric (4.30) shows that only the φ and z coordinates are gauged over the AdS 2 space. We could have expected this, since the original AdS 4 black hole had rotation only in the φ direction, and in (4.21) we have gauged the Reeb-vector with respect to the four-dimensional gauge vector. The metric, γ µν , we introduced for these two coordinates has the following components where, to alleviate the notational clutter, we have introduced the constant κ and the function N (θ) (4.38) Now that we have specified the γ µν in (4.30), the description of the metric is almost complete. The last remaining unknowns are the constants k i which specify the gauging over the AdS 2 . We find . rotates in the z and φ directions. We find (4.40) It is then a simple matter of substituting these and the constants k i found in (4.39) to see that both Black strings in Type IIB Having studied our 11d setup we now turn our attention to rotating black string solutions in Type IIB supergravity. We take our 11d setup and require that the internal space admits a two-torus, T 2 . The 8d balanced manifold then breaks up as a semidirect product of this torus and a 6d manifold. Wherever 8d quantities split up in components on the torus and the 6d manifold, we simply denote this with the subscripts (2) and (6). Under this assumption of a torus in the internal space we can apply dualities to arrive in Type IIB, where we find a classification of rotating black string solutions that can be interpreted as rotating D3-branes wrapped on a Riemann surface. If we add a warp factor acting homogeneously on the torus, the balanced condition of the 8d manifold implies that the 6d manifold is conformally balanced. For simplicity we do not take into account such a warping which gives a balanced 6d manifold. As such we take the metric ansatz where τ 1 and τ 2 are scalars valued on the 6d base and the complex combination τ = τ 1 + iτ 2 is a holomorphic function (∂τ = 0). In principle we can take the two U(1)'s of the two-torus to be fibered over AdS 2 , i.e. 
in the language of appendix A we can introduce constants k x , k y which are related to the angular momenta in these directions. However, introducing these parameters leads to the system becoming unreasonably complicated 14 once we arrive in Type IIB, and therefore we shall just proceed with these parameters set to zero, which in (5.1) implies that A (2) = 0. In addition, we also assume that η has no dependence on the T 2 . The final piece of the solution we need to specify is the dependence of the flux on the torus: we takeh (2,2) to have no legs along the torus directions. 15 Note that this is consistent with setting the rotation of the solution along the torus directions to zero, through the condition (2.18). In addition to this, we assume that the scalars B, C are independent of the torus coordinates, and are hence defined on the 6d base. We now reduce the 8d conditions from section 2.3 with this assumption of a torus in the internal space onto a set of conditions on the inherited 6d base space that has an SU(3) structure. We decompose the two-form as which (using (2.34)) implies that J (6) is a balanced two-form: dJ 2 (6) = 0. Furthermore from (2.35) we find that 3) 14 In Type IIB these extra parameters will lead to a further warping of the metric. In particular, the dilaton will not be simply the dilaton one would get from the F-theory picture, i.e. τ −1 2 . In addition, since we must satisfy (2.18) it is clear that turning these on will lead to turning on additional fluxes other than the self-dual five-form in Type IIB. It would be interesting to fully work out the details of this more general case, but it deserves more than this small section in this paper and a full treatment of the most general construction. 15 The primitive piece of this part of flux (with legs on the torus) will give rise to a transgression term like in [19,44]. Again for our purposes such a term is an unnecessary complication, and so we set it to zero here, although it is certainly interesting to consider. which implies J (6) dη = e −3B−C/3 . We write the holomorphic four-form as Ω = Ω (6) ∧ Ω (2) , From (2.37) it now follows that dΩ (6) where Q = − 1 2τ 2 dτ 1 and we have used the holomorphicity of τ . This gives us the Ricci form on the 6d space as which is the generalization of equation (2.57) of [28] to the rotating case. The additional term changes the expression for the Chern-Ricci scalar to We can now proceed by reducing along the A-cycle (dx + τ 1 dy) of the torus to Type IIA supergravity. Note that the Ricci form is independent of the T 2 -coordinates and therefore so is the one-form η. This leads to a standard reduction of 11d supergravity to massless Type IIA. One finds that the metric in string frame is given by and is supplemented by e 4φ IIA /3 = 1 τ 2 e −B−C/3 , (5.10) Recall that we can decompose the 11d gauge potential as (3.7), where c 3 is the potential corresponding toh (2,2) , and c 2 = e 2φ j is fixed by supersymmetry. By performing a T-duality along the y-direction we land in Type IIB. The metric in Einstein frame reads ds 2 IIB = e 3B/2−C/6 − e 2C dt r + αη + A (6) 2 + e 2C/3 dy + e 2C/3 dt r + αη + A (6) 2 + 1 r 2 dr 2 + r 2 η 2 + e −3B−C/3 ds 2 6 . (5.14) Here we have made explicit a cone in the geometry. It is useful to redefine the scalar B in the form B = − B/3 + C/9 which puts the metric in the form If we take C = α = A (6) = 0, the first line gives precisely the metric for AdS 3 written as a U (1) fibration over AdS 2 . 
The effect of a non-trivial scalar C and connection pieces α, A (6) is to make the black string rotate. Note that this is precisely the form of the near-horizon of the black string found in [24] uplifted to a 10d solution of Type IIB. The fluxes consist of an axio-dilaton and five-form flux given by Having given the metric and fluxes we now specify the supersymmetry conditions that the geometry must satisfy. These can be derived from the 11d supergravity ones by reducing them on the torus. Note that the cone appearing in the metric in (5.14) has an SU(4) structure which is inherited from the SU(5) structure of our 11d solutions. We denote the corresponding two-form by j (8) , and we can decompose it as where J (6) is the two-form that we found in the decomposition (5.2). This two-form corresponds to the balanced SU(3) structure of the 6d space. On this SU(3) structure, we previously found the conditions: The geometry must in addition satisfy the Bianchi identities and Maxwell equations that we discussed earlier in this section subject to the potential c 3 satisfying The first of the Maxwell equations (5.8) is the master equation, which can be rewritten as Note that the master equation is independent of the fluxes here. Further, notice that the conditions reduce to those of [27] if one sets C = α = A (6) = c 3 = 0. The solutions in this classification may be interpreted as the near-horizon geometries of rotating black strings. When one inserts a Riemann surface into the balanced 6d base it is natural to interpret these as arising from the compactification of rotating D3-branes on the Riemann surface. Moreover, this is not the most general setup that can be considered and it would be interesting to further investigate extensions. A possible method for doing this is to reduce the 11d setup studied here on a torus which is also fibered over the AdS 2 , as we alluded to at the beginning of this section. This will necessarily lead to two free constants in the Type IIB solution and also to more general fluxes. However, such solutions are far more involved than the ones presented in this section. Conclusions and future directions In this paper we studied the geometry of supersymmetric solutions which may be interpreted as the near-horizon of rotating black holes and strings embedded in 11d supergravity and Type IIB respectively. This generalizes the results of [18] and [27]. Due to the generality of our ansatz the black holes covered by our classification can include both electric and magnetic flavour fluxes and angular momentum when viewed from 4d. Note that this does not translate into magnetic fluxes in 11d but rather into fibrations of the manifold. 16 Similar statements apply for the 5d black strings in Type IIB that we considered. 16 The role of baryonic symmetries is slightly more mysterious but we believe that these should also be covered by our work. One natural extension of our work is to consider a more general classification of the black strings in Type IIB. In performing the duality chain we aimed for a simplified solution consisting of only five-form flux and axio-dilaton. One could in fact include a complex three-form flux in the setup. This may be achieved from 11d by allowing the flux components ofh (2,2) to have legs along the torus directions. The minimal extension would be adding in a transgression term of the form discussed in [19], however we expect that one can be more general by also allowing for rotation along the T 2 -directions. 
A preliminary analysis showed that this case is rather involved with all fluxes turned on and a non-holomorphic axio-dilaton. For the sake of presentation we have given only the simpler case. It would be interesting to formulate an extremization principle for these geometries along the lines of [8]. This seems quite challenging though there are glimpses of hope. The entropy of the black hole and string can be seen to be given by the same formula as in the non-rotating case. In particular the actions presented in section 3.2 reduce to simple integrals (3.25) and (3.27) which can easily be computed in the toric case. The difficulty arises in evaluating the integrals which impose flux quantisation. One should be able to compare with the field theory results in [23,24] for rotating black holes and black strings. We have preliminary results on this extremization problem and plan to present these in the future. Some alternative and intriguing avenues are to attempt to perform a similar analysis for Euclidean black saddles [45], for other rotating black hole solutions and to include higher derivative corrections [46][47][48]. There are many results with which one could compare for black holes in other theories, for example [49][50][51][52][53][54][55][56]. It would also be desirable to understand the connection with Sen's entropy function [57][58][59] and whether one can perform a similar classification for near extremal black holes [60][61][62]. Acknowledgments It is a pleasure to thank Stefan Vandoren and Thomas Grimm for useful discussions. A Black hole near-horizons and observables In this appendix we will study the general form of the near-horizon of a black hole. This analysis serves two purposes. Firstly it will motivate the ansatz we take in section 2.2 for the 11d supergravity solution, in particular the warping of the metric and the temporal fibration. Despite this, in the main text we will use a more general ansatz to the one motivated here purely for convenience of the notation. It is understood that one must impose an additional constraint on the geometry in order for it to be the near-horizon of a black hole as we will show later in this section. The second purpose for this analysis is to determine how to evaluate the physical observables for our solution. The parametrization of the metric which is most useful for obtaining the conditions arising from supersymmetry is not the one that is most useful for defining the observables such as the entropy and angular momentum of the black hole where an explicit AdS 2 factor is used. The analysis of this section will allow us to translate between the two view-points and compute observables easily from the form of the metric obtained from supersymmetry. A.1 General near-horizon metric The general form of the metric for the near-horizon of a rotating black hole is [63] 17 (see also [17] and references therein) Here φ are periodic coordinates and k µ are constants related to the near-horizon value of the chemical potentials of the angular momentum of the black hole. The functions of the metric all depend on the y coordinates and are independent of the φ's. We do not need to specify the ranges of the indices M and µ for the argument but let the range of µ ∈ {1, . . . , n} 18 . Note that the first two entries of the metric are precisely the metric on AdS 2 with unit radius. Moreover it is clear from this form that there is an SO(2, 1) × U(1) n isometry. 
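Since the displayed equation for the general near-horizon metric is not reproduced in this excerpt, the standard form it refers to is, schematically (the distribution of warp factors among the blocks is an assumption here; what matters for the argument is the unit-radius AdS$_2$ factor warped by $\Gamma(y)$ and the constant fibration parameters $k^\mu$):

$$ ds^2 \;=\; \Gamma(y)\Big(-r^2\,dt^2 + \frac{dr^2}{r^2}\Big) \;+\; \gamma_{\mu\nu}(y)\,\big(d\phi^\mu + k^\mu r\,dt\big)\big(d\phi^\nu + k^\nu r\,dt\big) \;+\; G_{MN}(y)\,dy^M dy^N , $$

with all metric functions depending only on the $y$ coordinates.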
19 The metric in this form is useful for computing the observables of the black hole however it is not as useful when trying to impose supersymmetry. Due to the gauging over AdS 2 it is finicky to try to implement SUSY preservation in this form. It is known that supersymmetry in 11d supergravity imposes that a metric admits either a timelike or null Killing vector [31,64]. Since the form of the 17 We have made some trivial redefinitions to the form of the metric appearing in [63], in particular we have changed coordinates on AdS2 from Gaussian Null coordinates to Poincaré coordinates and extracted an overall factor from each of the sub metrics. 18 Note that n cannot be zero otherwise the black hole is not rotating and we fall into the class of solutions given in [18]. 19 The SO(2, 1) algebra of the metric in these coordinates is realised by the three Killing vectors where the ψ i denote the U(1) symmetries of the internal manifold. Note that the generators are twisted with respect to the U(1) symmetries of the internal manifold which are gauged over the AdS2. It is important that the twisting parameters, the which is precisely the algebra of the conformal group in 1d and commutes with the isometries of the internal manifold. metric we are considering above has a time-like Killing vector we will focus on this case 20 . It is then useful to rewrite the metric so that the timelike Killing vector is manifest. This will lead to the time-direction being fibered over the remaining directions. A small rearrangement puts the metric into the form The metric now exhibits the timelike Killing vector in a simple form. It is then natural to take as ansatz 21 for the near-horizon, with an r-independent one-form on the 9d base. In this rotated form the AdS 2 factor is obscured, however as we mentioned previously this form is far more amenable to imposing supersymmetry. However this ansatz does come with some downsides. Firstly computing observables, such as the horizon area are not nearly as clear as in the form given in the ansatz (A.1). Moreover it is not clear which solutions can be identified with the near-horizon of a rotating black hole from the form in (A.3), in particular the scalar C is arbitrary in our ansatz whilst its analogue in (A.2) is constrained. We shall study this constraint shortly however in the main text we shall refrain from imposing it for as long as possible. We will see that we can proceed unabated in the classification without needing to impose such a condition. A.2 Constraints from the near-horizon In this section we shall look at the additional constraints imposed on the metric ansatz used in the main text which follow from it being the near-horizon of a black hole. We shall compare our ansatz with the general form of the near-horizon given in the previous section, rewriting the expressions in terms of quantities adapted to the metric in the form of the classification. The classification implies that the metric takes the form where we have written the metric with the same splitting as earlier. It is trivial to identify e 2B = Γ(y) , e 2C = 1 − |k| 2 γ , −e −2C k µ γ µν dφ ν = αη + A , g µν = γ µν + k σ γ σµ k ρ γ ρν 1 − γ κτ k τ k κ . (A.5) 20 One could also have attacked the problem using the null Killing vector of AdS2. The benefit of using the timelike Killing vector is that it is transferable to the case of black strings in Type IIB and so we pursue this choice here. 
21 We change the radial coordinate as r → r −1 in order to write the transverse directions to the timelike foliation as a cone in the main text. Note that we have defined | · | γ to be the norm with respect to the metric γ, similarly we let | · | g denote the norm with respect to g. Simple manipulations of these definitions gives Note that this implies we can constrain the scalar C in terms of data of the fibration, in particular Finally rewriting this in terms of the full metric of the classification we find the condition where the final norm is with respect to the metric on the balanced manifold. Let us further analyse the condition on the fibration in the time-direction. We have Therefore in order to specify α and A we should specify η, the metric g µν and a set of constants k µ . These constants k µ are related to the near-horizon values of the chemical potentials of the angular momentum of the black hole (when viewed from 11d). As a final step let us rewrite the metric used in the arguments above so that the R-symmetry vector is manifest. We want to identify G mn dy m dy n + g µν dφ µ dφ ν ≡ (dz + P ) 2 + e D ds 2 8 . (A.15) Therefore given a vector of constants parametrising the rotation and the internal metric one can construct αη + A. In fact if one imposes that the internal manifold is toric one may write the gauge field in a simple way as we have explained in section 2.4. A.3 Observables Let us now use the near-horizon solution to study what observables we can compute. The three main observables are the entropy of the black hole, the angular momentum and its electric/magnetic charges, all of which can be computed in the near-horizon. One may also ask if it is possible to compute the electrostatic potential and angular velocity, however these observables require some knowledge of the UV data since they are defined as 16) In this section we will focus on rephrasing the computation of the entropy, electric charges and angular momentum in terms of integrals over various cycles of the internal manifold. Entropy First consider the entropy of the black hole. The entropy is given up to normalization by the area of the horizon of the black hole. In order to compute the horizon area one should write the metric so that a bona-fide AdS 2 factor appears in the metric and the internal manifold is fibered over this. Clearly in order to compute the entropy in this way the metric of use to us is the one given in (A.1) and not the one that naturally comes out from supersymmetry. With this rewriting the horizon manifest and the entropy is given by where the Newton's constant is that of a 2d theory admitting the AdS 2 near-horizon as a vacuum solution. In order to compute the Newton's constant (at leading order, we will not make any comments about subleading corrections though these are certainly very interesting) we should look at reducing the 11d Einstein-Hilbert term of 11d supergravity on the AdS 2 background in (A.1). We Γ(y) 9 2 det(G) det(γ)dy ∧ dφ ≡ 1 G 2 M 2 R 2 dvol 2 , (A.18) from which we identify 1 G 2 = 1 G 11 Y 9 Γ(y) 9 2 det(G) det(γ)dy ∧ dφ = 1 G 11 Y 9 Γ(y) Let us now translate this result into the notation of the metric arising from supersymmetry, namely (A.2). We expect that the difference is precisely a warping of the volume form which indeed turns 22 To save cluttering the notation we let dy ∧ dφ denote dy m ∧ dφ µ . (A. 22) It follows that dvol SUSY = Γ(y) 9 2 det(G) det(γ) √ 1 − γ κ γ τ γ κτ dy ∧ dφ , (A. 
23) and therefore 1 G 2 = Y 9 Γ(y) Our proposal for computing the entropy is therefore where we used (2.41) in the final equality. As discussed in section 3.2 this is precisely the same formula as the entropy in the non-rotating case. One should view this section as a proof that the quantity computed in section 3.2 really is the entropy of the black hole. Electric charges Next let us consider the quantization of the four-form flux which will give rise to the electric charges of the theory. In the presence of a Chern-Simons term there is more than one definition of a charge. One can consider the gauge-invariant but non-conserved charge Q = 1 (2π p ) 6 Σ 7 * 11 G 4 , (A. 26) where we integrate over all compact seven-cycles of the geometry. Alternatively the Page charge is conserved by application of the Maxwell equation but is not gauge invariant due to the bare potential appearing in the definition. In the following we will consider only the Page charge since it defines a conserved charge. In order to be able to write this charge we must be able to at least locally write the four-form flux in terms of a potential three-form. This is equivalent to the requirement thath (2,2) as defined in (2.14) can be written (at least locally) in terms of a potential. In fact, if we demand that it is exact, i.e. that the potential is a globally defined three-form, it follows that there is no M5-brane charge. Substituting our ansatz into the Page charge we find where we have introduced the potentials H (1,1) = dC (1) , i(H (2,1) − H (1,2) ) = dC (2) , (A.29) H (2,2) = dC (3) = dη ∧ C (2) . Angular momentum We now want to find a similar formulation for computing the angular momentum of the black hole. To such an end we may use the results of [26], (see also [65] for the analogous computation for 5d black rings), which gives the formula for computing the Komar integral for the Noether current of a Killing vector, ξ in 11d supergravity. By an abuse of notation we will also call the dual one-form ξ. The angular momentum is then given by where the three-form potential C 3 should be chosen so that it has vanishing Lie derivative along the given isometry. Since this formula is dependent on the choice of Killing vector we will refrain from writing this more explicitly and just include it for completeness.
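The explicit Page-charge expression referred to around (A.26)-(A.29) is likewise not reproduced above. As a reference point, in standard 11d supergravity conventions (which may differ from the paper's normalisations) the equation of motion d ⋆_{11} G_4 + ½ G_4 ∧ G_4 = 0 makes the seven-form ⋆_{11} G_4 + ½ C_3 ∧ G_4 closed, so that

\[
Q_{\mathrm{Page}}=\frac{1}{(2\pi \ell_p)^6}\int_{\Sigma_7}\Big(\star_{11} G_4+\tfrac{1}{2}\,C_3\wedge G_4\Big)
\]

is conserved for any compact seven-cycle Σ_7, but shifts under large gauge transformations of C_3, matching the trade-off between the gauge-invariant charge and the Page charge described above.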
14,694.4
2020-11-13T00:00:00.000
[ "Physics" ]
How to reveal metastable skyrmionic spin structures by spin-polarized scanning tunneling microscopy We predict the occurrence of metastable skyrmionic spin structures such as antiskyrmions and higher-order skyrmions in ultra-thin transition-metal films at surfaces using Monte Carlo simulations based on a spin Hamiltonian parametrized from density functional theory calculations. We show that such spin structures will appear with a similar contrast in spin-polarized scanning tunneling microscopy (SP-STM) images. Both skyrmions and antiskyrmions display a circular shape for out-of-plane magnetized tips and a two-lobe butterfly contrast for in-plane tips. An unambiguous distinction can be achieved by rotating the tip magnetization direction without requiring the information of all components of the magnetization. I. INTRODUCTION Magnetic skyrmions were recently observed via neutron diffraction in bulk chiral magnets such as MnSi 1 and in the multiferroic material Cu 2 OSeO 3 2 . Currently, they are attracting an enormous attention due to their stability 3,4 and their displacement speed upon applying electrical currents which makes them suitable for technological applications 5,6 . Real space observation of skyrmions in FeGe and Fe 0.5 Co 0.5 Si thin films has become possible using Lorentz microscopy and magnetic force microscopy 7-10 and more recently in transition-metal films using spin-polarized low-energy electron microscopy 11 and magneto-optical Kerr effect (MOKE) measurements 12 . In ultra-thin films of a few monolayers, the skyrmion diameter can shrink down to a few nanometers and spinpolarized scanning tunneling microscopy (SP-STM) 13,14 is a powerful tool for their observation and manipulation [15][16][17] . SP-STM is sensitive to the projection of the local magnetization density of states of the sample onto the magnetization direction of the tip 18 and does not allow a direct determination of the three magnetization components in a single measurement. However, in most experimental setups it is not possible to continuously rotate the tip magnetization direction and conclusions have to be drawn from SP-STM experiments performed with only one or two tip magnetization directions. Such measurements only allow a partial determination of the spin structure 16,19,20 . It is therefore essential to know (i) if the skyrmion ground states and some metastable states can be differentiated via simple SP-STM experiments (based on one or two tip magnetization directions) and (ii) to establish a clear proposal in order to discriminate between the different possible chiral spin structures via SP-STM. II. METHODS The occurrence of metastable skyrmionic spin structures is studied in a single atomic layer of Pd in fcc stacking on the fcc monolayer Fe on the Ir(111) surface denoted as Pd(fcc)/Fe/Ir(111). This system has been studied experimentally using SP-STM 16,17 and from first-principles calculations 21,22 which allow to understand the transition from a spin spiral to a skyrmion and a ferromagnetic (FM) phase in an external magnetic field. We numerically solve the spin Hamiltonian using Monte-Carlo (MC) simulations with parameters obtained from density functional theory calculations 21 : with exchange constants J ij , the vector D ij of the Dzyaloshinskii-Moriya (DM) interaction, the magnetocrystalline anisotropy K and an external magnetic field 23 . 
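The displayed Hamiltonian is missing after the colon above. The sketch below assumes an extended Heisenberg model of the kind described (exchange constants J ij restricted here to a single shell, a DM vector, uniaxial anisotropy K and a Zeeman term), relaxed with a standard Metropolis update; the sign conventions and all parameter values are illustrative placeholders rather than the DFT-parametrized constants of Ref. 21.

```python
import numpy as np

def local_energy(spins, i, bonds_of, J, D, K, B, mu_s=3.0):
    """Energy terms involving spin i for an extended Heisenberg model of the kind
    described in the text: exchange (J), Dzyaloshinskii-Moriya vector (D),
    uniaxial anisotropy (K) and Zeeman coupling to the field B.
    Values and sign conventions are placeholders, not the DFT constants of Ref. 21."""
    e = -K * spins[i, 2] ** 2 - mu_s * np.dot(B, spins[i])
    for j in bonds_of[i]:
        e += -J * np.dot(spins[i], spins[j])              # exchange
        e += -np.dot(D, np.cross(spins[i], spins[j]))     # DM interaction
    return e

def metropolis_sweep(spins, bonds_of, J, D, K, B, T, rng):
    """One Monte Carlo sweep with the standard Metropolis rule at temperature T
    (k_B = 1), e.g. starting from a random spin configuration as in the text."""
    n = len(spins)
    for _ in range(n):
        i = rng.integers(n)
        old = spins[i].copy()
        e_old = local_energy(spins, i, bonds_of, J, D, K, B)
        trial = rng.normal(size=3)
        spins[i] = trial / np.linalg.norm(trial)
        dE = local_energy(spins, i, bonds_of, J, D, K, B) - e_old
        if dE > 0.0 and rng.random() >= np.exp(-dE / T):
            spins[i] = old                                # reject the move
    return spins
```

A full reproduction of the simulations described below would sweep a 100×100 hexagonal supercell at 1 K in a 20 T field and store the relaxed metastable configurations.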
We have obtained metastable states in MC simulations by relaxing a super cell of 100×100 spins on a two-dimensional hexagonal lattice starting from a random spin configuration at 1 K under a magnetic field of 20 T with a standard Metropolis algorithm. At this field value we are in the region where the skyrmions are metastable in the ferromagnetic background 21 . (c) Skyrmion density of the spin structure (color contrast). The spin structure was obtained with a super cell of 100×100 spins on a hexagonal lattice at a temperature of 1 K after a relaxation with 10 7 MC relaxation starting from a random spin configuration at a magnetic field of B = 20 T, i.e. in the region of the phase diagram in which the ferromagnetic state is the ground state. We have simulated SP-STM images of the spin structures obtained from MC using the model described in Ref. 24 . The tunneling current is given by where R T is the tip position, the sum extends over all surface atoms α, the vacuum tail of a spherical atomic wave function is approximated by h(r) = exp (−2κ|r|), and the decay constant is given by κ = 2mφ/h 2 with the work function φ. P S and P T denote the spin-polarization of sample and tip atoms, respectively, and θ α is the angle of the magnetization of atom α with respect to the tip magnetization direction m T . Figure 1(a) shows a simulated SP-STM image with an out-of-plane magnetized tip (P eff = P T P S = 0.4) of the spin structure at z = 8Å from the surface. The image shows a brighter contrast for the FM background with several black spots. All darker spots have a round shape and could correspond to skyrmion spin structures. However, when the tip magnetization is changed from out-of-plane to in-plane ( Fig. 1(b)), the simulated SP-STM image shows two types of contrast compatible with recent observation of skyrmion 17 . The first contrast has a two-lobe pattern with one brighter and one darker side. The lobes can be aligned along the x axis or are rotated with respect to it. The second type of contrast has four lobes and appears seldom. It does not seem to have a preferred alignment. III. RESULTS The topological character of different spin structures is given by their winding or skyrmion number: where m is the unit vector of the local magnetization and S can take only integer values. identified in Fig. 1 in separate panels. Fig. 2(a) shows a right-handed skyrmion i.e. S = 1 which is metastable for Pd(fcc)/Fe/Ir(111) at magnetic field values higher than 16 T 21 . For completeness, we also consider a left-handed skyrmion, Fig. 2(b), which has a skyrmion number of S = 1 as well but exhibits an opposite chirality and is unstable. Fig. 2(c) shows an antiskyrmion (S = −1) that is characterized by a change of chirality for two high symmetry directions, i.e. the rotational sense changes from right-to left-handed every 90 • . Fig. 2(d) displays a higher-order antiskyrmion with S = −2 which was recently also reported in Ref. 26 . In that case, the rotational sense changes every 60 • . in this imaging mode. When the tip magnetization changes to in-plane, the images of the righthanded skyrmion (Fig.3(b)), the left-handed skyrmion (Fig. 3(d)) and the antiskyrmion (Fig. 3(f)) 6 are still very similar. The simulated SP-STM images of a skyrmion and an antiskyrmion could only differ by a rotation as seen Fig. 1(b). The only spin structure that can be easily distinguished is the higher-order skyrmion due to the multiple nodes of the contrast (cf. Fig. 3(h)). 
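Both ingredients used above, the constant-height SP-STM signal in the quoted tunneling model and the winding (skyrmion) number S, are simple to evaluate numerically. The following sketch assumes the model of Ref. 24 as quoted, I(R_T) ∝ Σ_α exp(−2κ|R_T − r_α|)(1 + P_S P_T cos θ_α), and a finite-difference discretization of S; function names and numerical values are illustrative.

```python
import numpy as np

def spstm_image(positions, moments, tip_dir, grid_xy, height, kappa, P_eff=0.4):
    """Constant-height SP-STM map following the tunneling model quoted in the text:
    I(R_T) ~ sum_alpha exp(-2*kappa*|R_T - r_alpha|) * (1 + P_eff * cos(theta_alpha)).
    positions: (N, 2) in-plane atom coordinates, moments: (N, 3) unit vectors,
    tip_dir: tip magnetization direction, grid_xy: (M, 2) lateral tip positions,
    height: tip height above the layer.  Parameter values are illustrative."""
    tip_dir = np.asarray(tip_dir, dtype=float)
    tip_dir /= np.linalg.norm(tip_dir)
    cos_theta = moments @ tip_dir
    image = np.empty(len(grid_xy))
    for k, (x, y) in enumerate(grid_xy):
        d = np.sqrt((positions[:, 0] - x) ** 2 + (positions[:, 1] - y) ** 2 + height ** 2)
        image[k] = np.sum(np.exp(-2.0 * kappa * d) * (1.0 + P_eff * cos_theta))
    return image

def skyrmion_number(m):
    """Finite-difference estimate of S = (1/4pi) int m . (d_x m x d_y m) dx dy
    for a unit-vector field m of shape (nx, ny, 3); adequate for smooth textures."""
    dmx = np.gradient(m, axis=0)
    dmy = np.gradient(m, axis=1)
    density = np.einsum('ijk,ijk->ij', m, np.cross(dmx, dmy))
    return density.sum() / (4.0 * np.pi)
```

For textures only a few lattice constants wide, a lattice solid-angle (Berg-Lüscher) sum is more robust than the finite-difference estimate, but the simple version already separates S = ±1 from S = −2 configurations.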
Note that the simulated SP-STM images of the right-handed skyrmion for both magnetization directions are in good agreement with the experiments of Romming et al. 16,17 . Taking line profiles through the simulated images (Fig. 3), we obtain profiles which are also in quantitative agreement with experimental data 17 . When the tip magnetization is switched to in-plane, the line profiles show the same behavior for the right-handed skyrmion, the left-handed skyrmion and the antiskyrmion (Fig. 4(a-c)). Since the contrasts of the SP-STM images of the skyrmion and antiskyrmion are only rotated with respect to each other and the corrugation amplitudes are very similar, they can only be distinguished in experiments when both spin structures are present simultaneously. On the other hand, the higher-order skyrmion (Fig. 4(d)) can be easily discriminated due to the presence of multiple nodes of the magnetization density, also seen in Fig. 3(h). In order to distinguish between the skyrmion and the antiskyrmion spin structures, we propose an experiment based on a 3D vector field available in STM experiments 27 . Such a field enables a rotation of the tip magnetization both within the surface plane as well as from in-plane to out-of-plane. Although all in-plane magnetized tips result in the same contrast, i.e. a butterfly with a bright and a dark lobe (as shown in Fig. 3), the behavior of this contrast when the tip changes its direction is different, as shown in Fig. 5. For a right-handed skyrmion, the lobes will rotate in phase with the tip magnetization direction (thick black arrows). In the case of an antiskyrmion, a clockwise rotation of the in-plane component of the tip will induce a counterclockwise rotation of the lobes. Therefore, in-plane rotation of the tip magnetization allows an unambiguous distinction between the right-handed skyrmion and the antiskyrmion. On the other hand, in order to distinguish a left- and a right-handed skyrmion the tip magnetization must be rotated from the in-plane to the out-of-plane direction. IV. CONCLUSION In conclusion, we have demonstrated that it is non-trivial to distinguish via SP-STM between metastable spin structures at surfaces that differ by their chirality and/or topological charge. Skyrmions and antiskyrmions exhibit a spherical shape in SP-STM using tips with an out-of-
2,074.8
2016-04-18T00:00:00.000
[ "Physics" ]
Zeta functions of alternate mirror Calabi-Yau families We prove that if two Calabi-Yau invertible pencils have the same dual weights, then they share a common factor in their zeta functions. By using Dwork cohomology, we demonstrate that this common factor is related to a hypergeometric Picard--Fuchs differential equation. The factor in the zeta function is defined over the rationals and has degree at least the order of the Picard--Fuchs equation. As an application, we relate several pencils of K3 surfaces to the Dwork pencil, obtaining new cases of arithmetic mirror symmetry. Introduction 1.1. Motivation. For a variety X over a finite field F q , the zeta function of X is the exponential generating function for the number of F q r -rational points, given by In his study of the Weil conjectures, Dwork analyzed the way the zeta function varies for one-parameter deformations of Fermat hypersurfaces in projective space, like the pencil (1.1.1) x n+1 0 + · · · + x n+1 n − (n + 1)ψx 0 x 1 · · · x n = 0 in the parameter ψ. In his 1962 ICM address [Dwo62], Dwork constructed a family of endomorphisms whose characteristic polynomials determined the zeta functions of the hypersurfaces modulo p. Furthermore, he identified a power series in the deformation parameter with rational function coefficients that satisfies an ordinary differential equation with regular singular points. In fact, this differential equation is the Picard-Fuchs equation for the holomorphic differential form [Kat68]. The pencil (1.1.1) is a central example in both arithmetic and algebraic geometry [Kat09]; we label this family F n+1 . On the arithmetic side, Dwork [Dwo69] analyzed F 4 in detail to explore the relationship between the Picard-Fuchs differential equation satisfied by the holomorphic form on the family and the characteristic polynomial of Frobenius acting on middle-dimensional cohomology. Dwork identifies the reciprocal zeros of the zeta function for this family of K3 surfaces explicitly by studying p-adic solutions of the Picard-Fuchs equation. This analysis motivated Dwork's general study of p-adic periods. On the algebraic side, the family of Calabi-Yau threefolds F 5 has been used to explore the deep geometric relationship known as mirror symmetry. Mirror symmetry is a duality from string theory that has shaped research in geometry and physics for the last quarter-century. Loosely defined, it predicts a duality where, given a Calabi-Yau variety X, there exists another Calabi-Yau variety Y , the mirror, so that various geometric and physical data is exchanged. For example, Candelas-de la Ossa-Green-Parkes [CDGP91] showed that the number of rational curves on quintic threefolds in projective space can be computed by studying the mirror family, realized via the Greene-Plesser mirror construction [GP90] as a resolution of a finite quotient of F 5 . Combining both sides, Candelas, de la Ossa, and Rodriguez-Villegas used the Greene-Plesser mirror construction and techniques from toric varieties to compare the zeta function of fibers X ψ of F 5 and the mirror pencil of threefolds Y ψ [CDRV00, CDRV01,CD08]. They found that for general ψ, the zeta functions of X ψ and Y ψ share a common factor related to the period of the holomorphic form on X ψ . In turn, they related the other nontrivial factors of Z(X ψ , T ) to the action of discrete scaling symmetries of the Dwork pencil F 5 on homogeneous monomials. 
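Since Z(X, T) is built entirely from the point counts #X(F_{q^r}), it can be probed directly in very small cases. The sketch below counts F_p-points of a fiber of the Dwork pencil (1.1.1) by brute force (prime fields only; extension fields F_{p^r} would need extra machinery such as Sage or the galois package) and is meant purely as an illustration of the definition, not as an efficient method.

```python
from itertools import product

def dwork_point_count(p, psi, n=3):
    """#X_psi(F_p) for the Dwork pencil fiber (1.1.1) in P^n:
        x_0^{n+1} + ... + x_n^{n+1} - (n+1)*psi*x_0*...*x_n = 0.
    Counts solutions in A^{n+1}(F_p) and uses (N_affine - 1)/(p - 1), valid
    because the defining polynomial is homogeneous.  Practical only for tiny p, n."""
    affine = 0
    for x in product(range(p), repeat=n + 1):
        s = sum(pow(xi, n + 1, p) for xi in x)
        m = 1
        for xi in x:
            m = (m * xi) % p
        if (s - (n + 1) * psi * m) % p == 0:
            affine += 1
    return (affine - 1) // (p - 1)

# Example: the K3 fiber (n = 3) at psi = 2 over F_7; the counts #X(F_{7^r}) for
# r = 1, 2, ... are exactly the data entering Z(X, T) = exp(sum_r N_r T^r / r).
print(dwork_point_count(7, psi=2))
```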
In related work (but in a somewhat different direction), Jeng-Daw Yu [Yu08] showed that the unique unit root for the middle-dimensional factor of the zeta function for the Dwork family in dimension n can be expressed in terms of a ratio of holomorphic solutions of a hypergeometric Picard-Fuchs equation (evaluated at certain values). The Dwork pencil F 5 is not the only highly symmetric pencil that may be used to construct the mirror to quintic threefolds. In fact, there are six different pencils of projective Calabi-Yau threefolds, each admitting a different group action, that yield such a mirror: these pencils were studied by Doran-Greene-Judes [DGJ08] at the level of Picard-Fuchs equations. Bini-van Geemen-Kelly [BvGK12] then studied the Picard-Fuchs equations for alternate pencils in all dimensions. A general mechanism for finding alternate mirrors is given by the framework of Berglund-Hübsch-Krawitz (BHK) duality. This framework identifies the mirrors of individual Calabi-Yau varieties given by invertible polynomials, or more generally of invertible pencils, the one-parameter monomial deformation of invertible polynomials; these notions are made precise in the next section. Aldi-Peruničić [AP15] have studied the arithmetic nature of invertible polynomials via D-modules. In this paper, we show that invertible pencils whose mirrors have common properties share arithmetic similarities as well. Revisiting work of Gährs [Gäh11], we find that invertible pencils whose BHK mirrors are hypersurfaces in quotients of the same weighted projective space have the same Picard-Fuchs equation associated to their holomorphic form. In turn, we show that the Picard-Fuchs equations for the pencil dictate a factor of the zeta functions of the pencil. We then show that the factor of the zeta function is bounded by the degree of the Picard-Fuchs equation and the dimension of the piece of the middle cohomology that is invariant under the action of a finite group of symmetries fixing the holomorphic form. Main theorem. An invertible polynomial is a polynomial of the form \(F_A(x)=\sum_{i=0}^{n}\prod_{j=0}^{n}x_j^{a_{ij}}\), where the matrix of exponents A = (a ij ) i,j is an (n + 1) × (n + 1) matrix with nonnegative integer entries, such that: • det(A) ≠ 0, • there exist r 0 , . . . , r n ∈ Z >0 and d ∈ Z such that \(\sum_{j=0}^{n} a_{ij} r_j = d\) for each i (i.e., the polynomial F A is quasi-homogeneous), and • the function F A : C n+1 → C has exactly one singular point at the origin. We will be particularly interested in the case where F A is invertible and homogeneous of degree d = n + 1: then the hypersurface defined by F A = 0 defines a Calabi-Yau variety in P n . These conditions are restrictive. In fact, Kreuzer-Skarke [KS92] proved that any invertible polynomial F A (x) can be written as a sum of polynomials, each of which belongs to one of three atomic types, known as Fermat, loop, and chain: Fermats: \(x^a\), loops: \(x_1^{a_1}x_2 + x_2^{a_2}x_3 + \cdots + x_{m-1}^{a_{m-1}}x_m + x_m^{a_m}x_1\), and chains: \(x_1^{a_1}x_2 + x_2^{a_2}x_3 + \cdots + x_{m-1}^{a_{m-1}}x_m + x_m^{a_m}\). Invertible polynomials appeared as the first families exemplifying mirror symmetry [GP90,BH93]. Their arithmetic study, often in the special case of Delsarte polynomials, is of continuing interest [Shi86, EG-Z16]. Let F A be an invertible polynomial. Inspired by Berglund-Hübsch-Krawitz (BHK) mirror symmetry [BH93,Kra09], we look at the polynomial \(F_{A^T}(x)=\sum_{i=0}^{n}\prod_{j=0}^{n}x_j^{a_{ji}}\) obtained from the transposed matrix A T . Then F A T is again an invertible polynomial, quasihomogeneous with (possibly different) weights q 0 , . . . , q n for which we may assume gcd(q 0 , . . .
, q n ) = 1, so that F A T = 0 defines a hypersurface X A T in the weighted-projective space W P n (q 0 , . . . , q n ). We call q 0 , . . . , q n the dual weights of F A . Let d T := i q i be the sum of the dual weights. We define a one-parameter deformation of our invertible polynomial by Then X A,ψ : F A,ψ = 0 is a family of hypersurfaces in P n in the parameter ψ, which we call an invertible pencil. The Picard-Fuchs equation for the family X A,ψ is determined completely by the (n+1)-tuple of dual weights (q 0 , . . . , q n ) by work of Gährs [Gäh11, Theorem 3.6]. In particular, there is an explicit formula for the order D(q 0 , . . . , q n ) of this Picard-Fuchs equation that depends only on the dual weights: see Theorem 4.1.3 for details. We further observe that the Picard-Fuchs equation is a hypergeometric differential equation. For a smooth projective hypersurface X in P n , we have Our main result is as follows (for the notion of nondegenerate, see section 2). Theorem 1.2.3: Let X A,ψ and X B,ψ be invertible pencils of Calabi-Yau (n − 1)-folds in P n . Suppose A and B have the same dual weights (q i ) i . Then for each ψ ∈ F q such that gcd(q, (n + 1)d T ) = 1 and the fibers X A,ψ and X B,ψ are nondegenerate and smooth, the polynomials P X A,ψ (T ) and P X B,ψ (T ) have a common factor We show that the common factor R ψ (T ) is attached to the holomorphic form on X A,ψ and X B,ψ , explaining the link to the Picard-Fuchs differential equation: it is given explicitly in terms of a hypergeometric series (4.1.8). For this reason, if we had an appropriate theorem for rigidity of hypergeometric motives, we could further conclude that there exists a factor of degree precisely D(q 0 , . . . , q n ) in Q Our proof of Theorem 1.2.3 uses the p-adic cohomology theory of Dwork, as developed by Adolphson-Sperber [AS89,AS08], relating the zeta function of a member of the family to the L-function of an exponential sum. Our main theorem then follows from a result of Dwork [Dwo89] on the uniqueness of the Frobenius structure on the differential equation and the fact that the Picard-Fuchs equations for the holomorphic forms of X A,ψ and X B,ψ coincide. Theorem 1.2.3 overlaps work of Miyatani [Miy15,Theorem 3.7]. In our notation, his theorem states that if X A,ψ is an invertible pencil, q satisfies certain divisibility conditions depending on A, and ψ ∈ F × q is such that X A,ψ is smooth and ψ d T = 1, then P X A,ψ (T ) has a factor in Q[T ] that depends only on q and the dual weights (q i ) i . In particular, if A and B have the same dual weights, the zeta functions of X A,ψ and X B,ψ (for ψ satisfying these conditions) will have a common factor in Q[T ]. His factor [Miy15, (2.4), Remark 3.8(i)] divides the common factor appearing in Theorem 1.2.3. He uses finite-field versions of Gauss sums together with a combinatorial argument. To compare these two theorems, we observe that Theorem 1.2.3 provides slightly more information about the common factor and places fewer restrictions on q: for arithmetic applications, it is essential for the result that it hold without congruence conditions on q. Our techniques are different, and are ruled by the powerful governing principle that factors of the zeta function are organized by Picard-Fuchs differential equations. For example, our method could extend to pencils for which the associated differential equation may not be hypergeometric. 1.3. Implications. 
Theorem 1.2.3 relates the zeta functions of many interesting Calabi-Yau varieties: for example, the dual weights are the same for any degree n + 1 invertible pencil composed of Fermats and loops. For specificity, we compare the zeta functions of the Dwork pencil F n and the generalized Klein-Mukai family F 1 L n , defined by the pencil The pencil takes its name from Klein's quartic curve, whose group of orientationpreserving automorphisms is isomorphic to the simple group of order 168, and the member of the family F 1 L 3 at ψ = 0, which appears as an extremal example during Mukai's classification of finite groups of automorphisms of K3 surfaces that preserve a holomorphic form (cf. [Lev99,Muk88,OZ02]). In this setting, we give a concrete proof of Theorem 1.2.3. We also consider a collection of five invertible pencils of K3 surfaces in P 4 , including F 4 and F 1 L 3 . The other three pencils, F 2 L 2 , L 2 L 2 , and L 4 , also have only Fermats and loops as atomic types; all five are described by matrices with the same dual weights (see Table (5.1.1) for defining polynomials). Let H be the Greene-Plesser mirror family of quartics in P 3 , which is obtained by taking the fiberwise quotient of F 4 by (Z/4Z) 2 and resolving singularities. A computation described by Kadir [Kad04,Chapter 6] shows that for odd primes and ψ ∈ F q such that ψ 4 = 1 (that is, such that H ψ is smooth), This calculation combined with Theorem 1.2.3 and properties of K3 surfaces yields the following corollary, exemplifying arithmetic mirror symmetry in these cases. Then there exists r 0 ≥ 1 such that for all q = p r with r 0 | r and p = 2, 5, 7 and all ψ ∈ F q with ψ 4 = 1, we have Accordingly, we could say that the zeta functions Z(X ,ψ /F q , T ) and Z(H ψ /F q , T ) are potentially equal-i.e., they are equal after a finite extension of F q . (The explicit value of r 0 in Corollary 1.3.3 will be computed in future work [DKSSVW].) Finally, we remark on a simple relationship between the numbers of points of members of alternate mirror families over F q , reminiscent of the strong arithmetic mirror symmetry studied by Fu-Wan [FW06], Wan [Wan06], and Magyar-Whitcher [MW16]. Corollary 1.3.4: Let X A,ψ and X B,ψ be invertible pencils of Calabi-Yau (n − 1)-folds in P n such that A, B have the same dual weights. Then for all ψ ∈ F q , Corollary 1.3.4 is slightly more general than Theorem 1.2.3-there is no hypothesis on the characteristic or on the smoothness of the fiber-but it arrives at a weaker conclusion. 1.4. Plan of paper. In section 2, we introduce our cohomological setup. In section 3, we consider first the generalized Klein-Mukai family as a warmup to the main theorem, giving a detailed treatment in this case. In section 4, we prove the main result by recasting a result of Gährs [Gäh13] on Picard-Fuchs equations in hypergeometric terms, study the invariance under symmetry of the middle cohomology, and then apply Dwork cohomology. To conclude, in section 5, we specialize to the case of K3 surfaces and give some further details for several pencils of particular interest. Acknowledgements. The authors heartily thank Marco Aldi, Amanda Francis, Xenia de la Ossa, Andrija Peruničić, and Noriko Yui for many interesting discussions, as well as Alan Adolphson, Remke Kloosterman, Yang Liping, Fernando Rodriguez-Villegas, Duco van Straten, and the anonymous referee for helpful comments. 
They thank the American Institute of Mathematics and its SQuaRE program, the Banff International Research Station, the Clay Mathematics Institute, MATRIX in Australia, and SageMath for facilitating their work together. Doran was supported by NSERC and the Campobassi Professorship at the University of Maryland. Kelly acknowledges that this material is based upon work supported by the NSF under Award No. DMS-1401446 and the EPSRC under EP/N004922/1. Voight was supported by an NSF CAREER Award (DMS-1151047). Cohomological setup We begin in this section by setting up notation and establishing a few basic results. In the cohomology theory of Dwork, following the approach for related exponential sums as developed by Adolphson-Sperber [AS89, AS08], we will define cohomology spaces endowed with a Frobenius operator with the property that the middle-dimensional primitive factor of the zeta function is realized as the characteristic polynomial of the Frobenius operator acting on non-vanishing cohomology. We refer to the work of Adolphson-Sperber for further reference and to Sperber-Voight [SV13] for an algorithmic framing. Throughout the paper, let F q be a finite field with q elements and characteristic p, with q = p a . Let F q be an algebraic closure of F q . Nondegeneracy and convenience. Let be a nonconstant homogeneous polynomial, so that the vanishing of F (x) defines a projective hypersurface X ⊆ P n Fq . Using multi-index notation, we write Definition 2.1.1: We say F is nondegenerate (with respect to its Newton polyhedron Δ) if for all faces τ ⊆ Δ (including τ = Δ), the system of equations In this case, with F homogeneous, the definition employed by Adolphson and Sperber, that F is nondegenerate (with respect to Δ ∞ (F )) requires that the system of equations for every face τ ⊆ Δ, (including τ = Δ). Note that when the characteristic p does not divide the degree of F , the Euler relation ensures the two definitions are equivalent. Finally, we observe that if w is a new variable and we consider the form wF , then wF is nondegenerate with respect to Δ ∞ (wF ) if and only if F is nondegenerate with respect to its Newton polyhedron Δ. In the calculations below we will make use of a certain positioning of coordinates. For a subset J ⊆ {x 0 , . . . , x n } of variables, we let F ¡ J be the polynomial obtained from F by setting the variables in J equal to zero. Definition 2.1.4: We say that F is convenient with respect to a subset S ⊆ {x 0 , . . . , x n } provided that for all subsets J ⊆ S, we have 2.2. Dwork cohomology. Let G m be the multiplicative torus (so G m (F q ) = F × q ) and fix a nontrivial additive character Θ : F q → C × of F q . Denote by Tr F q r /Fq : F q r → F q the field trace. We will effectively study the important middle-dimensional factor of the zeta function by considering an appropriate exponential sum on G s m × A n+1−s and treating toric and affine variables somewhat differently. For r ∈ Z ≥1 , define where the sum runs over all n+1-tuples Consider the L-function of the exponential sum associated to F defined by is a rational function in T with coefficients in the cyclotomic field Q(ζ p ), where ζ p is a primitive pth root of unity. Theorem 2.2.1 ([AS89, Theorem 2.9, Corollary 2.19]): If F is nondegenerate and convenient with respect to S = {x s+1 , . . . 
, x n }, and dim Δ ∞ (F ) = n + 1, then the L-function is a polynomial in T with coefficients in Q(ζ p ) of degree given explicitly in terms This theorem also gives information about the p-adic size of the reciprocal zeros of L(T ) (−1) n+1 . We now proceed to relate the L-function of such an exponential sum to the zeta function of the corresponding hypersurface. In general, we write representing the characteristic polynomial of Frobenius acting on the primitive middle-dimensional cohomology of X. Let Y ⊆ A n+1 be the affine hypersurface defined by the vanishing of F , the cone over X. Let w be a new variable. A standard argument with character sums shows that On the other hand, one has So putting these together we have By combining Equations (2.2.2) and (2.2.5), we have Finally, splitting the domain for the variable w as In the special case where F is nondegenerate with respect to Δ ∞ (F ) and convenient with respect to {x 0 , . . . , x n }, Theorem 2.2.1 applies. Under these hypotheses, Adolphson-Sperber [AS89, Section 6] prove the following: there exists a p-adic cohomology complex Ω • such that the trace formula and finally For more details, see also Adolphson-Sperber [AS08, Corollary 6.23] and Sperber-Voight [SV13, Section 1 and pages 31-32]. In particular, the formula (2.2.10) gives a fairly direct way to compute P (T ) in the case of the Dwork family of hypersurfaces, since the defining polynomial F is convenient with respect to the full set of variables {x 0 , . . . , x n }. 2.3. Unit roots. For convenience, we conclude this section by recalling the relationship between Hodge numbers and the p-adic absolute values of the reciprocal zeros and poles of the zeta function. The following is a consequence of the Katz conjecture proved in full generality by Mazur [Maz72]. In the present context, it follows directly from Adolphson-Sperber [AS89, Theorem 3.10]. We now apply this to our invertible pencils, as defined in (1.2.1). In particular, we have X A,ψ a smooth projective hypersurface in P n defined by a polynomial F A,ψ of degree n + 1, so X A,ψ is a Calabi-Yau variety of dimension n − 1. By a standard calculation, the first Hodge number of X A,ψ is h 0,n−1 = 1. Therefore the Hodge polygon of middle-dimensional primitive cohomology starts with a segment of slope zero having length 1. By Proposition 2.3.1, there is at most one reciprocal root of the polynomial P ,ψ,q (T ) that is a p-adic unit: we call this reciprocal root when it occurs a unit root. Example 2.3.2: If n = 3 and deg F = 4, and X is smooth, then X is a quartic K3 surface, and so the Newton polygon of P ,ψ,q (T ) lies over the Hodge polygon (i.e., the Newton polygon of ( There is a polynomial defined over F p depending on A, called the Hasse invariant, with the property that H A (ψ) = 0 for a smooth fiber ψ ∈ F × q if and only if there is a unique unit root. In this case, we call X A,ψ ordinary, otherwise we say X A,ψ is supersingular. The polynomial H A is nonzero as the monomial x 0 x 1 . . . x n appears in F A,ψ [AS16, (1.9), Example 1] (in their notation, we have μ = 0). Therefore, the ordinary sublocus of P 1 {0, 1, ∞} is a nonempty Zariski open subset. This unit root has seen much study: for the Dwork family, it was investigated by Jeng-Daw Yu [Yu08], and in this generality by Adolphson-Sperber [AS16, Proposition 1.8] (see also work of Miyatani [Miy15]). These p-adic estimates can be seen explicitly in Dwork cohomology, as follows. 
By (2.2.10), for the hypersurface X A,ψ we are interested in the action of q −1 Frob on the cohomology group H n+2 (Ω • ). Lemma 2.3.3: The operator q −1 Frob acting on H n+2 (Ω • ) reduced modulo p has rank at most 1 and has rank exactly 1 if and only if X A,ψ is ordinary. Proof. The cohomology group H n+2 (Ω • ) has a basis of monomials By the Calabi-Yau condition, the unique monomial with |μ|/d = 1 is so (2.3.5) implies that the matrix q −1 A has at most one nonzero row modulo p, so its reduced rank is at most 1; and this rank is equal to 1 if and only if the fiber X A,ψ is ordinary, which occurs if and only if where ν = (1, 1, . . . , 1) corresponds to ω 0 . The reduced polynomialā ν,ν is thus the Hasse invariant for the given family. Generalized Klein-Mukai family As a warm-up to the main theorem, we now consider in detail the generalized Klein-Mukai family F 1 L n of Calabi-Yau n-folds. We give a proof of the existence of a common factor-realizing these as alternate mirrors, from the point of view of p-adic cohomology. Since it is of particular interest, and has rather special features, along the way we provide further explicit details about this family. 3.1. Basic properties. For n ≥ 1, let and define X ψ ⊆ P n to be the generalized Klein-Mukai family of hypersurfaces over Z defined by the vanishing of F ψ . The polynomial (3.1.1) of degree n + 1 in n + 1 variables may be described as consisting of a single Fermat term together with a single loop of length n, so we will also refer to it by the symbol F 1 L n . Throughout, let m := n n + (−1) n+1 . Note (n + 1) | m. Let k be a field and ζ ∈ k a primitive mth root of unity. The subgroup acting trivially on X ψ is cyclic of order n + 1, and the quotient acting faithfully on X ψ is generated by z n+1 . Proof. This statement follows from a direct computation. Lemma 3.1.3: Suppose p m. Then for all ψ ∈ F q such that ψ n+1 = 1, the hypersurface defined by F ψ (x) is smooth, nondegenerate, and convenient with respect to {x n }. Proof. The statement on convenience is immediate. We begin with the full face Δ, where nondegeneracy (using the Euler relation) is equivalent to smoothness. We compute for i = 0, . . . , n − 1 that with indices taken modulo n, and Setting these partials to zero and subtracting (3.1.5) from (3.1.4), we obtain the n × (n + 1)-matrix equation The absolute value of the determinant of the left n × n block of the matrix in (3.1.6) is m = n n + (−1) n+1 , so by our assumption on p the full matrix has rank n over F q . By homogeneity, the vector (1, . . . , 1) t therefore generates the kernel of the full matrix; the solution vector lies in this kernel, so we conclude , by scaling we may assume x n = 1. Thus x n i−1 x i = 1 for i = 1, . . . , n − 1; taking the product of these gives (x 0 · · · x n−1 ) n+1 = 1. Since ψx 0 · · · x n = 1 as well, we conclude ψ n+1 = 1; and these are precisely the excluded values. Now suppose that τ Δ is a proper face of Δ. Then clearly (1, 1, . . . , 1) does not belong to τ . If τ contains (0, . . . , 0, n + 1), then by restricting (3.1.5) to τ , we see that a zero of To overcome the fact that the generalized Klein-Mukai pencil is only convenient with respect to {x n } (as opposed to the case of the Dwork pencil, which is convenient with respect to the full set of variables {x 0 , . . . , x n }), we prove the following lemma. 
Lemma 3.1.7: We have The point of this combinatorial lemma is that one obtains the same value of the exponential sum when changing affine coordinates to toric coordinates, so that Theorem 2.2.1 applies. for the linear subspace defined by the vanishing of Let r ∈ Z ≥0 . A standard inclusion-exclusion argument gives We claim, in fact, that every summand on the right-hand side of (3.1. To this end, suppose that J = ∅; then at least one coordinate is sent to zero in F ¡ J (x), and the deforming monomial x 0 · · · x n is set to zero. with t = #J c . We then compute that Summing the innermost sum on the right side of (3.1.10) over x j ∈ F q r counts with multiplicity q r the number of zeros of wx n j−1 with w ∈ F × q r , where x j−1 ∈ F q r is fixed. If x j−1 = 0, then there are no such zeros and the inner sum is zero. Therefore, letting J = J ∪ {j − 1, j} (with indices taken modulo n), Let S be the set of variables of F appearing in the Fermat (diagonal form) piece of the defining polynomial F in either case. Then F is convenient with respect to S. Suppose ψ ∈ F q is such that F ψ (x) is nondegenerate with respect to Δ ∞ (F ). Therefore, we have a p-adic complex Ω • such that (2.2.8)-(2.2.10) hold. We prove that for each fiber, the zeta functions in these two families have middle-dimensional cohomology with a common factor of degree n determined by action of the Frobenius on the (∂/∂ψ)-stable subspace containing the unique holomorphic nonvanishing differential n-form. In both cases, the monomial wx 0 x 1 · · · x n ∈ Ω n+2 corresponds to this n-form. For q = p r , let Q q be the unramified extension of Q p of degree r. Proposition 3.2.2: If p (n + 1)d T and ψ ∈ F × q is a smooth, nondegenerate fiber, then the polynomials P ,ψ (T ) where ∈ {F n+1 , F 1 L n } have a common factor R ψ (T ) ∈ Q q [T ] of degree n. Proof. Viewed over a ring with derivation ∂/∂ψ, for all i the cohomology H i (Ω • ) has an action by the connection where γ 0 is an appropriate p-adic constant. The monomial wx 0 x 1 · · · x n then spans an (∂/∂ψ)-stable subspace of H n+2 (Ω • ), denoted Σ . In both cases ∈ {F n+1 , F 1 L n }, we have a Frobenius map Frob • acting as a chain map on the complex Ω • and stable on Σ . As a consequence, we conclude that Let Φ (ψ) represent the Frobenius map Frob restricted to Σ . We appeal to work of Dwork [Dwo69]. We find that in the sense of Dwork, there are two Frobenius structures, both of which are strong Frobenius structures as a function of the parameter ψ on the hypergeometric differential equation, corresponding to the two values of . The hypergeometric differential equation (over C p , or any field of characteristic zero) is irreducible because none of the numerator parameters {1/(n + 1), . . . , n/(n + 1)} differs from the denominator parameter {1} by an integer [Beu08, Corollary 1.2.2]. As a consequence, the hypotheses of a lemma of Dwork [Dwo89, Lemma, pp. 89-90] are satisfied, and we have that the two Frobenius structures agree up to a multiplicative constant c ∈ C × p ; in terms of matrices, We now show that c = 1. Let ψ 0 ∈ F q be such that ψ n+1 0 = 1. Then the fiber for each family at ψ = ψ 0 satisfies F ,ψ0 (x) = 0 and the defining polynomial wF ,ψ0 (x) is nondegenerate. Let ψ 0 be the Teichmüller lift of ψ 0 . We recall section 2.3. Suppose that ψ 0 is an ordinary fiber for both families. Then Without loss of generality, we may assume that c is a p-adic integer. 
Since the two families have the same Picard-Fuchs differential equation, we obtain p-adic analytic formulas for the unique unit root of Tr(Φ Fn+1 ( ψ 0 )) by Jeng-Daw Yu [Yu08], and for the unique unit root of Tr(Φ F1Ln ( ψ 0 )) by work of Adolphson-Sperber [AS16] (also proven by Miyatani [Miy15]). These formulas are given in terms of the unique holomorphic solution of Picard-Fuchs (at ∞) so that the formulas are the same, so the unique unit roots for the two families agree, and this forces c ≡ 1 (mod q). Repeating this argument over all extensions F q r with r ≥ 1, we conclude similarly that c r ≡ 1 (mod q r ). Taking r coprime to p, by binomial expansion we conclude c = 1 as desired. In the next section, we generalize this result and also prove that R ψ (T ) ∈ Q[T ]. Proof of the main result We now prove the main theorem in the general setting of families of alternate mirrors. 4.1. Hypergeometric Picard-Fuchs equations. To begin, we study the Picard-Fuchs equation for the holomorphic form of an invertible pencil. We use the structure of the Picard-Fuchs equation to identify a factor of the zeta function associated to the holomorphic form, establishing a version of our main theorem with coefficients defined over a number field. By work of Gährs [Gäh13,Gäh11], we know that if two invertible pencils have the same dual weights, then their Picard-Fuchs equations are the same. We now state her result and recast it in a hypergeometric setting. Let F A be an invertible polynomial, where q i are its dual weights and d T := i q i is the weighted degree of the transposed polynomial F A T . For each ψ, let H n (X A,ψ ) be the de Rham cohomology of the holomorphic nforms on the complement P n \ X A,ψ , and write the usual holomorphic form on P n as Ω 0 = n i=0 (−1) i x i dx 0 ∧ · · · ∧ dx i−1 ∧ dx i+1 ∧ · · · ∧ dx n . Then one may use the Griffiths residue map Res : H n (X A,ψ ) → H n−1 (X A,ψ , C), whose image is primitive cohomology, to realize the holomorphic form on X A,ψ as Res(Ω 0 /F A,ψ ). Systematically taking derivatives of the holomorphic form establishes the Picard-Fuchs differential equation associated to the holomorphic form that Gährs computes via a combinatorial formulation of the Griffiths-Dwork technique. We now state her result. We first define the rational numbers The elements of the multiset α α α have no repetition, so we can think of α α α as a set. Take the intersection I = α α α ∩ β β β. Note that all of these sets depend only on the dual weights q i . Let δ = ψ d dψ . As β i0 = 0 ∈ β β β for all i, we have 0 ∈ β β β I, hence the Picard-Fuchs equation is a hypergeometric differential equation. In particular, a solution is given by the (generalized) hypergeometric function where D = D(q 0 , . . . , q n ) and I ∪ ∪ ∪ {0} is the multiset obtained by adjoining 0 to I. Proof. This differential equation has parameters such that α i − β jk ∈ Z for all i, j, k, for the following reason: the elements of α α α and β β β are already in [0, 1), so two differ by an integer if and only if they are equal; and whenever two coincide, they are taken away by the set I (noting the elements of α α α are distinct). Therefore, the differential equation is irreducible [Beu08, Corollary 1.2.2]. Group invariance. In this section, we show that the subspace of cohomology associated to the Picard-Fuchs equation for the holomorphic form is contained in the subspace fixed by the action of a finite group. This group arises naturally in the context of Berglund-Hübsch-Krawitz mirror symmetry. 
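Before turning to the group action, the combinatorics entering Theorem 4.1.3 can be cross-checked numerically. Reading the garbled display above as α = {i/d^T : 0 ≤ i < d^T} and β = {k/q_j : 0 ≤ k < q_j , 0 ≤ j ≤ n}, with I = α ∩ β and order D(q_0, . . . , q_n) = d^T − #I (as also used in section 4.2), a short computation reproduces the orders quoted later in the text: D = 3 for dual weights (1,1,1,1) and D = 6 for (4,2,3,3). The precise definitions of α and β here are our reconstruction of the missing display.

```python
from fractions import Fraction

def picard_fuchs_order(dual_weights):
    """D(q_0,...,q_n) = d^T - #(alpha ∩ beta), with alpha = {i/d^T : 0 <= i < d^T}
    and beta = {k/q_j : 0 <= k < q_j}; this reading of Theorem 4.1.3 is our
    reconstruction of the garbled display in the text."""
    dT = sum(dual_weights)
    alpha = {Fraction(i, dT) for i in range(dT)}
    beta = {Fraction(k, q) for q in dual_weights for k in range(q)}
    return dT - len(alpha & beta)

# Dual weights (1,1,1,1): the quartic K3 pencils of Table (5.1.1), D = 3.
assert picard_fuchs_order([1, 1, 1, 1]) == 3
# Dual weights (4,2,3,3): the C2F2 / C2L2 pencils of Table (5.1.6), D = 6.
assert picard_fuchs_order([4, 2, 3, 3]) == 6
# Dual weights (1,1,1,1,1): quintic threefolds (the Dwork pencil F_5), D = 4.
assert picard_fuchs_order([1, 1, 1, 1, 1]) == 4
```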
Throughout, we work over C. We begin by establishing three groups that are useful when studying invertible potentials and prove a result about the invariant pieces of cohomology associated to them. Let F A be an invertible polynomial. First, consider the elements of the maximal torus G n+1 m acting diagonally on P n and leaving the polynomial F A invariant: Write A −1 = (b ij ) i,j ∈ GL n+1 (Q) and for j = 0, . . . , n let ρ j = (exp(2πib 0j ), . . . , exp(2πib nj )); then ρ 0 , . . . , ρ n generate Aut(F A ). Next, we consider the subgroup acting invariantly on the holomorphic form, and the subgroup (4.2.3) J FA := ρ 0 · · · ρ n obtained as the cyclic subgroup of Aut(F A ) generated by the product of the generators ρ j . Then J FA is the subgroup of Aut(F A ) that acts trivially on X A . We now describe Berglund-Hübsch-Krawitz mirrors explicitly. Consider a group G such that J FA ⊆ G ⊆ SL(F A ). Then we have a Calabi-Yau orbifold Z A,G := X A /(G/J FA ). The mirror is given by looking at the polynomial F A T obtained from the transposed matrix A T and the hypersurface X A T ⊂ W P n (q 0 , . . . , q n ), where q i are the dual weights. We define the dual group to G to be Thus, we obtain a Calabi-Yau orbifold Berglund-Hübsch-Krawitz duality states that Z A,G and Z A T ,G T are mirrors. Proposition 4.2.5: Let X A,ψ be an invertible pencil of Calabi-Yau (n − 1)folds determined by the integer matrix A. Then for all ψ such that X A,ψ , we have In certain cases, we have equality. We can compute dim C H n−1 prim (X A , C) SL(FA) in the following way. Let be the defining polynomial for the Fermat hypersurface X A ⊆ P n . Here, SL(F A ) = (Z/(n + 1)Z) n where an element (ξ 1 , . . . , ξ n ) ∈ SL(F A ) acts by Note that in order for i x ai i ∈ Q FA to be SL(F A )-invariant, it must satisfy the equalities a 0 − a i ≡ 0 (mod n + 1) for all i. Thus the only SL(F A )-invariant elements of the Milnor ring are the n elements (x 0 · · · x n ) a for 0 ≤ a < n. Note that n = d T − #I, or the order of the Picard-Fuchs equation for this example, so equality holds in (4.2.6). 4.3. Frobenius structure for the subspace associated to the holomorphic form. In this section, we will study a subspace W ψ ⊂ H n+2 (Ω • ,ψ ) generated by the connection acting on the holomorphic form. The dimension of this subspace is equal to the order of the Picard-Fuchs equation. It will in the end correspond to a factor of the zeta function for X A,ψ0 , where X A,ψ0 is a nondegenerate and smooth member of the pencil. We prove in this section that there is a Frobenius structure on W ψ by examination of the unit root. Note that in our case, working with a cyclic basis C 0 (ψ) is a solution of the scalar differential equation Proof. This proposition is proven by Sabbah [Sab05, Theorem 2.4]; for completeness, we provide an argument here. Suppose M * 0 were such a nonzero, proper ∇ * -stable differential submodule of dimension r, 0 < r < N. We will show that if such a proper submodule existed, then the Picard-Fuchs operator L(D) has a proper factorization in the noncommutative polynomial ring K [D] and the Picard-Fuchs equation would necessarily be reducible, contradicting Proposition 4.1.11. Without loss of generality we may assume M * 0 has a cyclic basis {γ 0 , ∇γ 0 , . . . , ∇ r−1 γ 0 , δ r , . . . , δ N −1 } is the dual basis for W ψ . Then we can write the connection matrix for W * ψ in the form We consider a horizontal section So the entries {B 0 , . . . , B r−1 } are dependent over K. 
We now can rewrite this horizontal section in terms of our original dual basis for some A i ∈ K. Note that A 0 must be a solution of the Picard-Fuchs differential equation. There exists some nonsingular matrix A over K so that Using this change of basis, we can see that . . . , B N −1 )A = (A 0 , . . . , A N −1 ) where This gives a non-trivial homogeneous relation among A 0 , . . . , D N −1 A 0 ; thus, A 0 satisfies a lower order differential equation defined over K. Using the usual argument via the division algorithm in the noncommutative ring K [D] we conclude that the Picard-Fuchs operator has a non-trivial right factor in K[D] which contradicts the irreducibility of the Picard-Fuchs equation. Lemma 4.3.2: Let ψ ∈ P 1 be such that X A,ψ is nondegenerate and smooth. Then there exists a strong Frobenius structure on W * ψ . Proof. We recall section 2.3. Suppose that X ψ is ordinary, a condition that holds for all but finitely many ψ ∈ F p . Then there is a unique unit root of the characteristic polynomial of Frobenius acting on H n+2 (Ω • ,ψ ), and this yields a unique unit root eigenvector η 0 up to scaling. The same holds for the dual space ,ψ ) with unique unit eigenvector η * 0 . We claim that η * 0 ∈ W * ψ . Assume for the purposes of contradiction that η * 0 ∈ W * ψ . Then we may take as a basis for K ψ a set containing η * 0 and the cyclic basis . Let A * be the matrix of q −1 -Frobenius in this basis. Since η * 0 is a unit eigenvector, the diagonal coefficient of A * corresponding to η * 0 is nonzero modulo p (and the other coefficients of this column are zero). But by (2.3.7), the diagonal coefficient of A * for ω * 0 is nonzero modulo p because X ψ is ordinary. Therefore A * has rank at least 2 modulo p, and this contradicts Lemma 2.3.3. So now let η * 0 ∈ W * ψ be the unit root eigenvector, unique up to scaling and defined on the ordinary locus U ⊆ P 1 where U is the complement of the union of {0, 1, ∞} and the supersingular locus for the given pencil X A,ψ . Then writing Frob for q −1 -Frobenius where u ∈ K is a unit on the locus U . Frobenius commutes with the connection ∇ * , so which implies that Frobenius is stable on the submodule that is generated by the cyclic basis given by {(∇ * ) i η * 0 | i ∈ Z ≥0 }, but this is W * ψ by Proposition 4.3.1. Hence, for each choice of pencil indexed by the Picard-Fuchs equation has a strong Frobenius structure in the sense of Dwork [Dwo89]. 4.4. Proof of main result. In this section, we prove our main result. We will make use of the following lemma. Lemma 4.4.1: Let X be a projective variety over F q and let G be a finite group of automorphisms of X = X × Fq F q stable under Gal(F q /F q ). Then the following statements hold: (a) The quotient X/G exists as a projective variety over F q . (b) Let = p be prime and suppose gcd(#G, ) = 1. Then for all i, the natural map Proof. See Harder-Narasimhan [HN75, Proposition 3.2.1] (with some extra descent). Our main result (slightly stronger than Theorem 1.2.3) is as follows: Theorem 4.4.2: Let X A,ψ and X B,ψ be invertible pencils of Calabi-Yau (n − 1)-folds in P n . Suppose A and B have the same dual weights (q i ) i . Then for each ψ ∈ F q such that gcd(q, (n + 1)d T ) = 1 and the fibers X A,ψ and X B,ψ are nondegenerate and smooth, there exists a polynomial R ψ (T ) ∈ Q[T ] with Proof. Let F ,ψ (x) be invertible pencils, corresponding to matrices = A, B with the same weights. Then by Theorem 4.1.3, the Picard-Fuchs equations of order D(q 1 , . . . , q n ) are the same. 
Suppose that the two pencils have a common smooth fiber ψ ∈ F q . We follow the construction of cohomology in Adolphson-Sperber [AS08], with a few minor modifications. We assume their base field Λ 1 is enlarged to treat ψ as a variable over Q p (ζ p ) with (unit) p-adic absolute value, so that Λ 1 has ∂/∂ψ as a nontrivial derivation. Then the construction of the complex Ω • ψ is unchanged as are the cohomology spaces H i (Ω • ψ ). Then [AS08, Theorem 6.4, Corollary 6.5] where ψ 0 is the Teichmüller lift of ψ 0 . The connection . By work of Katz [Kat68], the associated differential equation is the Picard-Fuchs equation. For each invertible pencil determined by a choice of , as in section 4.3, we have a subspace W ψ obtained by repeatedly applying the connection to the monomial wx 0 x 1 · · · x n corresponding to the holomorphic form. By Lemma 4.3.2, we obtain a strong Frobenius structure on this differential module. By construction, the associated differential equation is the hypergeometric Picard-Fuchs equation, and this equation is independent of by Theorem 4.1.3. By Proposition 4.1.11, this differential equation is irreducible. Under the hypothesis that p (n + 1)d T , there is a p-integral solution to this differential equation. Then by a result of Dwork [Dwo89, Lemma, pp. 89-90], the respective Frobenius matrices Φ ,ψ0 acting on W differ by a p-adic constant. As in the proof of Proposition 3.2.2, the same unique unit root at a smooth specialization implies that this constant is 1. At the same time, the subspace is stable under the connection and has an action of Frobenius. The group SL(F A ) preserves the holomorphic form, so W ψ ⊆ Σ ,ψ . Let We have shown that , as it is a factor of the zeta function, we know immediately that R ψ0 (T ) ∈ K[T ] for K a number field, which we may assume is Galois over Q by enlarging. Next, we apply Lemma 4.4.1: the characteristic polynomial of Frobenius via the Galois action on H n−1 et (X A,ψ0 , Q ) SL(FA) is equal to S ,ψ0 (qT ). Therefore S ,ψ0 (qT ) ∈ Q [T ] for all but finitely many , and so is independent of and it also belongs to Q[T ]. Now let be the least common multiple of the polynomials obtained by applying Gal(K/Q) to the coefficients of R ψ0 . Then R ψ0 (T ) is still independent of , by Galois theory R ψ0 (T ) ∈ Q[T ], and R ψ0 (T ) | S ,ψ0 (T ) | P ,ψ0 (T ) is a factor of the zeta function and Remark 4.4.5: It is also possible to argue for a descent to Q[T ] of a common factor of degree d T − #I purely in terms of hypergeometric motives-without involving the group action-as follows. First, we need to ensure that the trace of Frobenius on the subspace of p-adic cohomology cut out by the hypergeometric Picard-Fuchs equation is given by an appropriately normalized finite field hypergeometric sum: this is implicit in work of Katz [Kat90,§8.2] and should be implied by rigidity [Kat90,§8.10], but we could not find a theorem that would allow us to conclude this purely in terms of the differential equation. Remark 4.4.8: There is yet a third way to observe a common factor purely in terms of group invariance using a common cover by a Fermat pencil (of larger degree): see recent work of Kloosterman [Kl17]. 4.5. Unit roots and point counts. If X is a smooth Calabi-Yau variety, the polynomial P X (T ) appearing in the zeta function of X has at most one root that is a p-adic unit. This root is called the unit root. We have already used the unit root implicitly to compare zeta functions. 
We may also use the unit root directly to extract arithmetic information about an invertible pencil from A T . This yields a simple arithmetic relationship between different invertible pencils with the same dual weights. Remark 4.5.2: In the case of non-smooth, non-supersingular fibers, Adolphson-Sperber [AS16] describe what is meant here by the unit root and show that then the unit root is given by the same formula as in the smooth case. Dwork noted the possibility of a meaningful unit root formula for varieties that are not smooth [Dwo62]. Proof. In the case where p divides d T we replace d T ψ in the given families by ψ in order to obtain a nontrivial pencil. Adolphson-Sperber [AS16] provide a formula for the unit root using A-hypergeometric functions. The lattice of relations used to compute the A-hypergeometric functions is determined by the dual weights, and the character vector is the same in both families. Thus, the unit root formula is the same in both cases. More precisely, in the case of smooth fibers, the middle-dimensional factor has a unique unit root which occurs in the common factor R ψ (T ) described above. It is given by a p-adic analytic formula in terms of the series defined above. The Hasse invariant is determined by the reduction of the A-hypergeometric series solution mod p. This proves the identity of the supersingular loci in cases where the weights agree. Remark 4.5.3: In the case that ψ ∈ F × q yields a smooth member of the pencil X A,ψ , the result of Proposition 4.5.1 can also be obtained from Miyatani [Miy15, Theorem 2.9], where the unit root is nontrivial precisely when a formal power series defined using the hypergeometric parameters appearing in (4.1.8) is nonzero. Miyatani also gives a formula for the unit root when it exists and X A,ψ is smooth, in terms of the same hypergeometric power series. As we have already observed, the hypergeometric parameters depend only on the weights of A T or B T . Proposition 4.5.1 implies a relationship between point counts for alternate mirrors, reminiscent of Wan's strong arithmetic mirror symmetry [FW06], [Wan06]. Corollary 4.5.4: Let F A (x) and F B (x) be invertible polynomials in n + 1 variables satisfying the Calabi-Yau condition. Suppose A T and B T have the same weights. Then for any fixed ψ ∈ F q and in all characteristics (including p | d T ) the F q -rational point counts for the fibers X A,ψ and X B,ψ are congruent: #X A,ψ (F q ) ≡ #X B,ψ (F q ) (mod q). Proof. The formula is true vacuously when the fiber is supersingular (there is no unit root). Otherwise, the unit root controls the point count modulo q. Of course, for smooth fibers the congruence given here is weaker than the earlier result on common factors, Theorem 4.4.2 above. It is possible that the common factor result for the piece of middle-dimensional cohomology invariant under the respective group actions does extend meaningfully to fibers that are not smooth as well. Computations in [Kad04,Kad06,CDRV01] show that a factor of the zeta function associated to the holomorphic form can be identified for singular fibers of the Dwork pencils of quartics and quintics, as well as for a certain family of octic Calabi-Yau threefolds in a weighted projective space. We expect there will be a common factor (for families with the same dual weights) for singular fibers in the case of K3 surfaces, since the unit root in this case should govern the relevant factor (using the functional equation and the fact that the determinant of Frobenius is constant).
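Corollary 4.5.4 is easy to check numerically in small cases. The sketch below compares F_p-point counts for two quartic pencils with the same dual weights (1,1,1,1): the Dwork pencil F_4 and one concrete realization of the Klein-Mukai pencil F_1L_3 (a loop of length three plus a Fermat term), both deformed by −4ψ x_0 x_1 x_2 x_3. The particular loop orientation and deformation normalization are our choices for illustration; by the corollary the two counts agree modulo p for every ψ.

```python
from itertools import product

def proj_count(F, p, psi):
    """#{[x] in P^3(F_p) : F(x; psi) = 0}: count affine solutions of the
    homogeneous quartic and use (N_affine - 1)/(p - 1)."""
    affine = sum(1 for x in product(range(p), repeat=4) if F(x, psi, p) == 0)
    return (affine - 1) // (p - 1)

def dwork_F4(x, psi, p):
    """Dwork pencil F_4: x0^4 + x1^4 + x2^4 + x3^4 - 4*psi*x0*x1*x2*x3."""
    x0, x1, x2, x3 = x
    return (pow(x0, 4, p) + pow(x1, 4, p) + pow(x2, 4, p) + pow(x3, 4, p)
            - 4 * psi * x0 * x1 * x2 * x3) % p

def klein_mukai_F1L3(x, psi, p):
    """A realization of F_1 L_3: x0^3*x1 + x1^3*x2 + x2^3*x0 + x3^4 - 4*psi*x0*x1*x2*x3."""
    x0, x1, x2, x3 = x
    return (pow(x0, 3, p) * x1 + pow(x1, 3, p) * x2 + pow(x2, 3, p) * x0
            + pow(x3, 4, p) - 4 * psi * x0 * x1 * x2 * x3) % p

p = 7
for psi in range(p):
    nA = proj_count(dwork_F4, p, psi)
    nB = proj_count(klein_mukai_F1L3, p, psi)
    # Corollary 4.5.4: the two counts agree modulo p for every psi.
    print(psi, nA, nB, (nA - nB) % p == 0)
```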
Quartic K3 surfaces We now specialize to the case of n = 3, i.e., K3 surfaces realized as a smooth quartic hypersurface in P 3 . 5.1. Pencils of K3 surfaces. The invertible pencils in P 3 whose Berglund-Hübsch-Krawitz mirrors are hypersurfaces in finite quotients of P 3 are listed in Table (5.1.1). We list the group of symplectic symmetries SL(F A )/J FA , which act nontrivially on each projective hypersurface and fix its holomorphic form, in the third column. Equation for X A,ψ Symmetries Recalling Example 4.1.9, we observe that each of these five pencils has the same degree three Picard-Fuchs equation for the holomorphic form, and that after a change of variables, this equation is the differential equation satisfied by the classical hypergeometric function The main result of this section is the following theorem. Theorem 5.1.3: Let ∈ F = {F 4 , F 2 L 2 , F 1 L 3 , L 2 L 2 , L 4 } signify one of the five K3 families in Table (5.1.1). Let q = p r be a prime power with p = 2, 5, 7 and let ψ ∈ F q be such that ψ 4 = 1. Then X ,ψ is a smooth, nondegenerate fiber of the family . Let P ,ψ,q (T ) ∈ 1 + T Z[T ] be the nontrivial factor of Z(X ,ψ /F q , T ) of degree 21. Then the following statements hold: The reciprocal roots of Q ,ψ,q (T ) are of the form q times a root of 1. (c) The polynomial R ψ,q (T ) is independent of ∈ calF . Remark 5.1.4: In future work [DKSSVW], we study these families in more detail: we describe a further factorization of Q ,ψ,q (T ) related to the action of each group, and we identify each of these additional factors as hypergeometric. The polynomials P ,ψ,q (T ) have degree 21 and all of their reciprocal roots α satisfy |α| = q, by the Weil conjectures. By a direct calculation in the computer algebra system Magma [BCP97], when p = 2, 5, 7 and ψ 4 = 1, the fiber X ,ψ is smooth and nondegenerate. Parts (a) and (c) of Theorem 5.1.3 now follow from Theorem 1.2.3 and the Picard-Fuchs differential equation computed in Example 4.1.9. We now prove Theorem 5. 1.3(b). For all ∈ F, the trace formula (2.2.10) asserts that We now analyze the unit root. In section 2.3, we saw that there is at most one unit root of P ,ψ,q (T ). If there is no unit root, then the K3 surface X ,ψ is supersingular over F q , and Theorem 5.1.3(b) follows by the Tate conjecture for K3 surfaces. Thus, we need only analyze the case where there is a unit root. Proposition 5.1.5: Suppose P ,ψ,q (T ) has a unit root u(ψ). Then the reciprocal zeros β = β of P ,ψ,q (T ) other than u(ψ) and the root q 2 /u(ψ) all have the form β = qζ where ζ is a root of unity. Proof. We know that β is an algebraic integer which, by Deligne's proof of the Riemann hypothesis, has the form β = qζ with ζ an algebraic number with complex absolute value |ζ| ∞ = 1. By the functional equation ββ = q 2 , so that for any prime = p, we have that β (and ζ) are -adic units. Since we are considering now only ordinary fibers ψ, the first slope of Newton agrees with the first slope of Hodge. It then follows for every β a reciprocal zero of P (t) other than the unit root u(ψ), we have ord q (β) ≥ 1. As a consequence, ζ is a p-adic integer. This proves ζ is an algebraic integer. From the product formula |ζ| p = 1. We have shown that |ζ| v = 1 for all places v of Q. By Dirichlet's theorem, this implies ζ is a root of unity. Before concluding this section, we consider the remaining invertible quartic pencils in P 3 . We may use methods similar to the analysis of Theorem 5.1.3 to relate two pencils of K3 surfaces whose equations incorporate chains. 
(5.1.6)
Family | Equation for X_{A,ψ} | Symmetries
C_2F_2 | x_0^3 x_1 + x_1^4 + x_2^4 + x_3^4 − 12ψ x_0 x_1 x_2 x_3 | Z/4Z
C_2L_2 | x_0^3 x_1 + x_1^4 + x_2^3 x_3 + x_3^3 x_2 − 12ψ x_0 x_1 x_2 x_3 | Z/2Z

Let ♣ ∈ G = {C_2F_2, C_2L_2} signify one of the two K3 families in Table (5.1.6). The dual weights for these families are (4, 2, 3, 3). Let X_{♣,ψ} be a smooth member of ♣, and assume gcd(q, 6) = 1. Let P_{♣,ψ}(T) ∈ 1 + T Z[T] be the nontrivial factor of Z(X_{♣,ψ}, T) of degree 21 as in (1.2.2). Then by Theorem 4.4.2 we have a factorization

(5.1.7) P_{♣,ψ}(T) = Q_{♣,ψ}(T) R_ψ(T) in Z[T]

with 6 ≤ deg R_ψ ≤ 7 and R_ψ(T) independent of ♣ ∈ G. However, we pin this down in the next subsection, showing that in fact deg R_ψ = 6 (as expected), with deg Q_{♣,ψ} = 15. The reciprocal roots of Q_{♣,ψ}(T) are of the form q times a root of unity, by an argument similar to that of Proposition 5.1.5. Together, Theorem 5.1.3 and Equation (5.1.7) give a complete description of the implications of Theorem 1.2.3 for invertible pencils of K3 hypersurfaces in P^3; the remaining three pencils, classified for example by Doran–Garavuso [DG11], are each described by matrices with distinct sets of dual weights.

5.2. Discussion and applications. By Tate's conjecture, now a theorem due to work of Charles [Cha13], Madapusi Pera [Per15], and Kim–Madapusi Pera [KP16], the Néron–Severi rank of a K3 surface X over F_q is equal to one plus the multiplicity of q as a reciprocal root of P(T) [vanL07, Corollary 2.3], and this rank is even. (The extra "one" corresponds to the hyperplane section, already factored in.) Thus Theorem 5.1.3(b) implies that each X_{♦,ψ} has Néron–Severi rank over the algebraic closure of F_q at least 18 + 1 = 19, so at least 20 because it is even. Similarly, each X_{♣,ψ} has Néron–Severi rank over F_q at least 14 + 1 = 15, thus 16 because it is even.

By comparison, in characteristic 0 we can inspect the Néron–Severi ranks as follows. Theorem 5.1.3 implies that the subspace in cohomology cut out by the Picard–Fuchs equation is contained in the SL(F_A)-invariant subspace and contains H^{2,0}. Consequently, as observed by Kloosterman [Kl17], the SL(F_A)-invariant subspace in H^2_et(X_{A,ψ}) contains the transcendental subspace: indeed, one definition of the transcendental lattice of a K3 surface is as the minimal primitive sub-Q-Hodge structure containing H^{2,0} [Huy16, Definition 3.2.5]. For the five pencils in Table (5.1.1) with dual weights (1, 1, 1, 1), we conclude that the generic Néron–Severi rank is at least 22 − 3 = 19; but it cannot be 20, because then the family would be isotrivial, so it is equal to 19. Similarly, for the two pencils in Table (5.1.6), the generic Néron–Severi rank ρ is at least 22 − 7 = 15; but the divisor defined, for either choice of i^2 = −1, is SL(F_A)-invariant, so the generic Néron–Severi rank ρ is in fact at least 16. Now a specialization result due to Charles [Cha14] shows that the rank over F_q is always at least ρ and is infinitely often equal to ρ if the rank is even and infinitely often ρ + 1 if the rank is odd. By the first paragraph of this section, we conclude that the generic Néron–Severi rank of these two pencils is exactly 16. The complete Néron–Severi lattice of rank 19 for the case of the Dwork pencil F_4 is worked out via transcendental techniques by Bini–Garbagnati [BG14, §4].
It would be interesting to compute the full Néron–Severi lattices for the remaining four plus two families; Kloosterman [Kl17] has made some recent progress on this question and in particular has also shown (by a count of divisors) that the generic Néron–Severi rank is 16 for the C_2F_2 and C_2L_2 pencils.

We conclude with a discussion of some applications of Theorem 5.1.3 in the context of mirror symmetry. Let Y_ψ be the pencil of K3 surfaces mirror to quartics in P^3, obtained by taking the quotient of F_4 by (Z/4Z)^2 and resolving singularities. It can be viewed as the minimal resolution of the complete intersection [NS01, dAMS03]

Z(xyz(x + y + z − 4ψw) + w^4) ⊆ P^4.

This calculation, combined with Theorem 5.1.3, yields the following corollary.

Corollary 5.2.2: There exists r_0 ≥ 1 such that for all q = p^r with r_0 | r and p ≠ 2, 5, 7 and all ψ ∈ F_q with ψ^4 ≠ 1, we have #X_{♦,ψ}(F_q) = #Y_ψ(F_q) for all ♦ ∈ F.

In other words, for all ψ ∈ F_q with ψ^4 ≠ 1, not only do we have the strong mirror relationship #X_{♦,ψ}(F_{q^r}) ≡ #Y_ψ(F_{q^r}) (mod q^r) for all ♦ ∈ F and r ≥ 1 (see Wan [Wan06]), but in fact we have the equality #X_{♦,ψ}(F_{q^r}) = #Y_ψ(F_{q^r}) for all r divisible by r_0. Accordingly, we say that the zeta functions Z(X_{♦,ψ}/F_q, T) for all ♦ ∈ F and Z(Y_ψ/F_q, T) are potentially equal, that is, equal after a finite extension.
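The point-count consequences of these factorizations can be probed numerically. The sketch below (Python; a rough illustration, not part of the paper) brute-force counts the projective points of the two chain pencils of Table (5.1.6) over a small prime field, using the equations and the −12ψ normalization exactly as printed there, and checks that #X_{C2F2,ψ}(F_p) ≡ #X_{C2L2,ψ}(F_p) (mod p), which the factorization (5.1.7), together with the fact that the reciprocal roots of each Q_{♣,ψ}(T) are q times roots of unity, forces for smooth fibers. The choices p = 13 and ψ = 3 are arbitrary, and the corresponding fibers are assumed smooth.

```python
# Sketch: check #X_{C2F2,psi}(F_p) ≡ #X_{C2L2,psi}(F_p) (mod p) by brute force.
# Assumptions: equations and the -12*psi term are taken verbatim from Table
# (5.1.6); p = 13 and psi = 3 are arbitrary choices with gcd(p, 6) = 1, and
# the corresponding fibers are assumed to be smooth.

from itertools import product

def count_projective_points(f, p):
    """Count points of {f = 0} in P^3(F_p) via the affine cone."""
    affine_zeros = sum(
        1
        for x in product(range(p), repeat=4)
        if any(x) and f(x, p) == 0
    )
    # Each projective zero has exactly (p - 1) nonzero scalar multiples.
    assert affine_zeros % (p - 1) == 0
    return affine_zeros // (p - 1)

def make_C2F2(psi):
    def f(x, p):
        x0, x1, x2, x3 = x
        return (x0**3 * x1 + x1**4 + x2**4 + x3**4
                - 12 * psi * x0 * x1 * x2 * x3) % p
    return f

def make_C2L2(psi):
    def f(x, p):
        x0, x1, x2, x3 = x
        return (x0**3 * x1 + x1**4 + x2**3 * x3 + x3**3 * x2
                - 12 * psi * x0 * x1 * x2 * x3) % p
    return f

if __name__ == "__main__":
    p, psi = 13, 3
    n1 = count_projective_points(make_C2F2(psi), p)
    n2 = count_projective_points(make_C2L2(psi), p)
    print(f"#X_C2F2 = {n1}, #X_C2L2 = {n2}, difference mod {p} = {(n1 - n2) % p}")
```

Under the same assumptions one can loop ψ over F_p and over several primes; a smooth fiber violating the congruence would contradict the stated factorization, so the script serves as a crude consistency check rather than a proof.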
14,439.6
2016-12-29T00:00:00.000
[ "Mathematics" ]
Modeling Facilities for the Component-based Software Development Method

Component-based software development (CBSD) technology uses components as first-class objects and therefore requires a good understanding of the nature of components. Industrial approaches to CBSD based on interoperability standards (such as OMG CORBA) lack component semantics in their descriptive models. In this paper we present an overview of the SYNTHESIS method, which enhances the CBSD approach by introducing semantic information to enrich and complement the industrial modeling facilities. The paper contributes to the development of modeling facilities for CBSD, focusing on interoperable systems design. A proper balance of formal and semi-formal modeling facilities is demonstrated to cope with the CBSD requirements.

INTRODUCTION Component-based software development (CBSD) has become one of the hottest topics in the area of software engineering. CBSD is a promising solution intended to break up large monolithic software systems into interoperable components and thus to move us from producing handcrafted lines of code to system construction based on object-oriented software parts or components and automated processes. The latter use semantic knowledge to guide the assembly of those components into the desired target system.

In this paper we consider CBSD issues in the frame of the SYNTHESIS method, attempting CBSD with reuse of pre-existing heterogeneous components [13,14,15]. SYNTHESIS emphasizes the megaprogramming metaphor, capturing the idea of scaling up from non-distributed object-oriented systems to large systems of heterogeneous, distributed software components. We consider interoperability to be the universal paradigm for compositional software development in the range of systems mentioned. Basically, interoperability implies a composition of behaviors. Correct compositions of software components should be semantically interoperable in the context of a specific application.

The overall goal of the SYNTHESIS project is to provide a uniform collection of modeling facilities suitable for the different phases of the forward engineering activities as well as for the reusable information resource specification in the reverse engineering phases. The method considered in the paper focuses on the semantic interoperation reasoning process [15] that should lead to the concretization of the specification of requirements by views over the pre-existing information resources.

The paper is structured as follows. After a short summary of the state of the art and the related R&D directions, we identify basic CBSD issues that remain open. Presenting the SYNTHESIS method, we discuss in more detail its original features intended for the steps of the semantic interoperation reasoning process. Further, we concentrate on the SYNTHESIS modeling facilities, intentionally separated into a semi-formal part used for plausible reasoning in the course of CBSD and a formal part used for strict justification of the design and specification solutions. A comprehensive example showing how different modeling facilities interact is presented.
STATE-OF-THE-ART The software factory idea [19] as well as a research on software reuse has given important impact on CBSD.The importance of the reuse topic has led to a series of interna-tional reuse conferences, and to government funded research projects like REBOOOT [16].But besides basic research some companies like IBM or Hewlett Packard have set up programs introducing CBSD [11].Very important for CBSD are also industrial activities in the field of interoperability standards like OLE2 or CORBA [20,22]. When talking about CBSD today we cannot ignore the growing influence of the Internet technology, especially the WWW.First of all, the Web is an excellent resource of available components in the Componentware area, ie., in the Microsoft OLE/COM environment (e.g.OCX components) as well as in the OpenDoc/CORBA area (e.g., OpenDocparts).Second, the WWW and the OMG CORBA are complementing each other and we can watch their merge now.Sun's object-oriented Internet development language Java [6] has the potential to speed up this process and -maybeto revolutionize the Web.Before the coming of Java, Web technologies gave users a very crude, i.e. static, way to access the power of the Internet.Building component-based client/server, multi-user applications was almost impossible with pre-Java Web technology.Protocols such as HTTP focus on interaction with the user rather than on application interaction (http users can only really pull pages of text and graphics back and forth across the net) and so impose fundamental limitations on the nature of services accessible from a consumer application running on an Internet device or a home PC.Java increases the value of the Internet by bringing "live" applications into the picture and CORBA 2.0 ORB implementations like IONA's Orbix or Post Modern Computing's BlackWidow claim to bring this even one step further offering the ability to perform semantically rich client/server operations on the Web [7].SunSoft is also working on a Java/CORBA connection, so that Java programs will be able to invoke remote methods in server based objects over the net. The combination of ORBs and Java takes CORBA beyond the enterprise and into the global sphere.ORBs and Java together enable much more than simple Internet applications -they provide a truly portable platform for building and deploying large-scale, distributed client/server applications across both public and private networks. Besides being an object-oriented, multi-threaded, and secure language, Java offers two interesting features to CBSD: First, Java is a cross-plattform language because Java programs are "architectural neutral bytecodes".Second, Java allows small programs or applets (mini-applications) to be embedded within an HTML document.When the user clicks on the appropriate part of the HTML page, the applet is downloaded into the client workstation or PC environment, where it begins executing. 
CORBA distributed object technology empowers the Java applet with standards-based connectivity to the world of information and computing services.Introducing CORBA to the Java environment means that applets are no longer restricted to simple interaction with the user, but are instead capable of taking part in complex interactions with backend services.With CORBA, Java applets transcend the limitations of simple Web browser technology -CORBA-compliant Java objects become the basis for the provision of Internet and interactive Multimedia Services on a world-scale.The Internet or enterprise-wide intranets build a kind of standardized sockets into which application components can be plugged in.According to IONA's vision [7] distributed applications could then be viewed as collections of "worldobjects" -some may be downloadable to consumer devices, others may reside on backend corporate servers -all should be capable of sharing information with one another.Thus, a combination of the Java programming language with the CORBA standard for application integration offers an ideal solution for downloadable application components capable of accessing multiple, shared backend services located across the Internet.CORBA 2.0 provides the crucial missing link between the Java application (applet) running on a consumer device and the required backend service.Both CORBA and Java essentially seek to abstract the underlying hardware technologies and architectures.For component-based software development this factor brings about a reduction in learning curves and offers improvements in time-to-market as well as maintenance cost reduction. As far as the SYNTHESIS environment is concerned, we plan to include the Internet and WWW facilities into the general architecture.OMG's CORBA 2.0 encapsulates the underlying information resources (components) and the architectures like IRO-DB [8] make it possible to represent heterogeneous databases with resource specifications in frame of the ODMG'93 standard.In this context we define three different kinds of the Internet sites to support the SYNTHE-SIS design method: the information resource provider sites, the designer sites and the application domain provider sites.Furthermore we distinguish between different design scenarious (centralized at the design site, cooperative that involves resource providers into the design, and an active resource scenario when information resources actively participate in the design process offering their own reuse possibilities). OPEN ISSUES Up to now we have no methods, guidelines or design heuristics on how to develop good frameworks.Possible steps in this direction are the metapattern/hot spots approach by W. Pree [23].Framework adaptation, i.e. the customization to the user's needs, is also a field that needs more research.Active cookbooks [17] are an approach to support the user in this problem area.The technique of Design Patterns [10] could be helpful to document (part of) a framework. 
On the other hand, the ComponentWare approach provides no application skeleton but individual components that can build the application when assembled and interconnected by a software bus (like CORBA's ORB).But the problem of components is that they do not have sufficient clean semantic specifications to rely on for their reuse.In this context the following issues are considered to be still open: • Complete specifications (for machine and for human) of the available components and of the application requirements are necessary prerequisite for the method • Homogeneous ("canonical") equivalent specifications for pre-existing components should be provided • One and the same set of description facilities should be used for different layers of development (requirement specification, design and reverse engineering) • Sound foundations are necessary to support provable requirement concretization and coherent component composition • The design methodology should support design based on reuse and interoperable composition of components • Componentware and framework approaches integration is desirable. The SYNTHESIS method overviewed in the following sections addresses many of these open issues. RELATED WORK In the context of CBSD the use of formal methods and domain knowledge is quite new but of growing importance.One significant activity in this direction is the U.S. Advanced Technology Program (ATP) Component-Based Software, sponsored by the National Institute of Standards and Technology NIST [5]. Scalable Automated Semantic-Based Software Composition is another project funded by NIST, the state of California and a consortium of companies.The project focus is on semantic -based composition and component synthesis based on specifications. Composable Software Systems is a research project led by three members of the School of Computer Science, Carnegie Mellon University, Pittsburgh, PA [25].The project tries to develop a scientific and engineering sound foundation for designing, building, and analyzing composable systems, organized as collections of reusable components. SYNTHESIS MODELING FACILI-TIES A strategy for incorporation formal design method.SYN-THESIS modeling facilities should provide for semantic interoperation and reuse of the pre-existing resources to cope with the open CBSD issues identified above.SYNTHESIS modeling facilities roughly can be subdivided into semi-formal and formal ones.The former (the SYNTHESIS language [13]) is intended as a mediator between the informal natural language specifications and the formal ones.We focus on model-based specifications for the latter [1]. For incorporation of formal specification method we exploit a transitional computer-assisted strategy [9].These strategies have advantage of computer assistance available to move back and force between semi-formal and formal specifications. Semi-formal facilities of SYNTHESIS.Uniformity of the SYNTHESIS object model is based on the algebraic framework [18].The fundamental concept of the SYNTHESIS object model [13] is an abstract value.Abstract values are instances of abstract data types (ADT) that resemble algebraic systems [18].A SYNTHESIS object model is purely behavioral. 
Type in the language is treated as the first-class value.Type variables have types as their values.Basic operations used in type expressions (mostly while implementing object calculus formulae) are operations of type composition (type meet and join) and of type product.Type specifications are abstract and completely separated of their implementations. All operations over typed data in the SYNTHESIS language are represented by functions.Predicative specifications of functions are expressed by formulae of the SYN-THESIS object calculus. Incorporating a sound foundation we focus on modelbased specifications [4] chosen among other specification formalisms such as logic-based, functional and algebraic.The notion of execution of a model-based specification consists of the proof of the initial consistency of the model and the preservation of the invariants by the operations. The model-theoretic methods [3,24,1] are based on pure mathematical abstraction of the specification of requirements and on the application of the provable stepwise refinement (including data and algorithmic refinement) in process of their development.During the refinement process, "offthe-shelf" components can be taken into account for their reuse.In SYNTHESIS we focus on the Abstract Machine Notation (AMN) [1] applying transitional computer-assisted strategy. To succeed with the strategy, we are based on formal interpretation of the SYNTHESIS language features in AMN.We interprete each type of the SYNTHESIS specification by a separate abstract machine. SYNTHESIS CBSD METHOD OVERVIEW The method emphasizes integration, reuse, adaptation and reconstruction of the pre-existing components (the whole or the pieces of the existing components, legacy systems, databases, program packages, data files, multimedia data) for the new (or modified) system requirements.SYNTHE-SIS method is not considered as one rigid approach, but as top-down, bottom-up iterative processes of analysis, design and development.The interdependence of different phases of the SYNTHESIS method is shown on the Fig 1. The conventional technique of the OO analysis and design is used for the requirement planning and domain analysis phases.This technique is augmented with the ontological specifications needed to resolve contextual differences with the pre-existing resources, with the specification of the result in the common declarative OO and logic based model that is two dimensionally uniform and with a possibility to justify the result using formal specification and proof facilities. The information resource description technique is developed to complement the existing core interoperation technology (such as CORBA IDL) in order that the resources could be reused in the semantically interoperable environment.The specification of the resource should be complete for the semantic interoperation reasoning. The design technique is based on the interoperable reuse of the pre-existing resources.For that the coherence of the contexts of the problem domains and of the resources is negotiated, the search of the relevant resource specifications is supported, the discrepancy reconciliation approaches, concretization view construction technique is provided.The results of the design can be provably checked. 
The SYNTHESIS method is neutral to the possible methods providing the object-oriented requirement planning and analysis models as well as to the possible reverse engineering methods.The output of such methods should be transformed to the SYNTHESIS canonical model thus giving precise semantics to the diagrammatic notation. ROLE OF FORMAL MODELING FACILITIES Justification of the decisions taken during the Forward phase. Formal checking of the result of the analysis phase is provided as follows.The type / class definitions of SYNTHESIS model are presented as abstract machines in the B AMN [1] notation to verify consistency of the resulting specification (in particular, to check that the methods defined preserve the invariants given by the assertions, ontological rules and other constraints of the model; to check consistency of type / subtype specifications: these specifications should have a model). The design model is the refinement of the domain analysis model adapting it to the actual heterogeneous interoperable information resource environment.The formal counterpart of this concretization specification is given in AMN: the concretization of each application type is verified by its transformation into the B AMN and treating concretization as a refinement in B. Proof obligations corresponding to the refinement of an abstract machine are generated and proved. Type specification mapping technique for the reverse engineering phase.For the heterogeneous world of information resources we provide a technique of mapping of the preexisting resource type specifications into the canonical specifications uniformly defined in the SYNTHESIS language.The commutative type model mapping is developed through the following basic steps: • construct the mapping of a source type specifications into type specifications of the canonical type model (including state and behavior mapping); • provide an interpretation of source type model in the abstract machine notation; • provide an interpretation in abstract machine notation of the types resulted in mapping of the source type model into the canonical types; • justify the state-based and behavioral properties of the type mappings proving that a source type is a refinement of its mapping to the canonical type. In the reverse engineering phase the correctness of the SYNTHESIS-based specification of the resource is guaranteed by such procedure justified by the refinement relation of abstract machines.The commutative type model mapping is a specific technique providing for uniform representation in the canonical model of different type models determined by programming languages and DBMSs. 
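The commutativity requirement behind this mapping technique can be phrased operationally: mapping a source state to the canonical model and then applying the canonical operation must give the same result as applying the source operation first and mapping afterwards. The Python sketch below checks this exhaustively for an invented toy pair of type models (a record-like source type and a tuple-based canonical type); it illustrates only the shape of the commutativity check, not the SYNTHESIS language or AMN themselves.

```python
# Toy check of a commutative type-model mapping:
#   to_canonical(source_op(s)) == canonical_op(to_canonical(s))  for all states s.
# The "source" type mimics a DBMS-style record, the "canonical" type is an
# immutable abstract value.  Both types and the operation are invented
# purely for illustration.

from itertools import product

# Source model: a record represented as a dict with one integer field.
def source_deposit(record, amount):
    return {"balance": record["balance"] + amount}

# Canonical model: an immutable one-component tuple.
def canonical_deposit(state, amount):
    (balance,) = state
    return (balance + amount,)

# Mapping from source states to canonical states.
def to_canonical(record):
    return (record["balance"],)

def check_commutativity(max_balance=50, max_amount=20):
    for balance, amount in product(range(max_balance), range(max_amount)):
        record = {"balance": balance}
        via_source = to_canonical(source_deposit(record, amount))
        via_canonical = canonical_deposit(to_canonical(record), amount)
        if via_source != via_canonical:
            return (balance, amount)      # counterexample
    return None

if __name__ == "__main__":
    cex = check_commutativity()
    print("mapping commutes" if cex is None else f"counterexample: {cex}")
```

In the method itself, of course, the corresponding property is established by proving a refinement relation between the abstract machines interpreting the two type models rather than by enumeration.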
AN EXAMPLE DEMONSTRAT-ING THE ROLES OF FORMAL AND SEMI-FORMAL MODELING FACILITIES 10.1 Semi-formal specifications of requirements We imagine a centralized Agency managing funds dedicated for research and development projects.We assume that the following type specifications were produced as a result of the forward analysis and design based on some OA & D method and of the transformation of its resulting definitions into the SYNTHESIS specifications.Thus we get a definition of the type Proposal and class proposal.In the SYN-THESIS language type specifications are syntactically represented by frames, their attributes -by slots of the frames.Simple concept definition rule is incorporated into the type Proposal by association with the budget attribute of a metaslot containing a definition of a rule: 'if proposal area is computer science then budget is annual and currency is roubles'.Here budget sem is a SYNTHESIS metaclass introducing metattributes (budget kind, currency) that are used in the definition of rules characterizing budget contextual semantics required by the application.budget sem in the language is treated as a class of attributes (association metaclass). Semi-formal specifications of the pre-existing resources We assume that the specifications of the pre-existing resources equivalently represent their semantics.We assume that at the Industrial Labs an information system is used that contains the following specifications of types and classes.The class project is a subset of the class submission that includes only those submissions that have been accepted. The example shows how to design the proposal class reusing preexisting resources at the Industrial Labs. Concretization view for the proposal class An example of the specification of the concretization view for the proposal class follows: {virt proposal; in: class; metaslot {comment; In specifications of a view class a function computing the class as a set is defined in a metaslot associated with an attribute in: class.We do not care here whether we declare a virtual class that exists only during its evaluation or a materialized class.The specification of the function is given below by a formula of the object calculus of the SYN-THESIS language [13] Abstract Machine Notation [1] allows to get an exact mathematic definition (specification) of the modeled entity properties (in particular it may be a computer program).Then such definition may be analysed formally.So AMN program is used for proving of the correctness of initial entity definition.In paricular it may be used for correctess proof and symbolic transforming of initial program (for example, composition, decomposition, refinement, etc.)While specifying an abstract machine we should define the machine states and operations allowing to get such states.To specify state, two kinds of entities should be defined: variables defining the state components and invariants.Invariants are laws which must be satisfied by the static states of a system. A specification of an operation describes properties and relationships that must be satisfied during a change of a state. 
In common model-theoretic languages such as VDM or Z, such properties are specified using logical assertions that relate the values of the state variables before and after the execution of an operation. For these purposes AMN uses a so-called calculus of substitutions, which expresses the properties of operations in terms of predicate transformers binding to a postcondition of the operation its weakest precondition. Generalized substitutions (the operators of this calculus) may be considered as abstract machine commands.

After an operation has been specified, one should prove that it preserves the invariants. To do this, every invariant is considered as a postcondition to which the operation (i.e., a predicate transformer) is applied. This application results in a new predicate, which must in turn be proved.

So pre- and postconditions are integrated in a notation that looks like a simple programming language. The commands of the language are generalized substitutions, which generalize Dijkstra's guarded commands. Every generalized substitution S defines a predicate transformer binding to a postcondition R its weakest precondition [S]R, which guarantees that R holds after the operation executes. If this is so, one says that S establishes R. Kinds of generalized substitutions are listed below.

• Multiple substitution: [x, ..., y := E, ..., F] R ⇔ R′, where R′ is R in which the free occurrences of x, ..., y are simultaneously replaced by E, ..., F, respectively; for substitutions that introduce a local variable z, it is required that z is not a free variable in R.

So every generalized substitution defines a rewriting rule transforming the next predicate into a precondition. Preconditions describe the situations (states) in which the execution of the corresponding operation is admissible; under such conditions the operation can be completed. A guarded substitution establishes its postcondition only if the guarding predicate R1 is satisfied. Bounded choice corresponds to a restricted form of nondeterminism: any choice is admissible, and the final decision is made by the operation developer during the concretization process. Any of the chosen substitutions must preserve the postcondition R; this is why the conjunction sign is used in the predicate.

Elements of the Abstract Machine Notation. We start with elementary notions [1]. An abstract machine has a name and may have formal parameters (either natural numbers or non-empty finite sets). An abstract machine has a number of variables that should obey a certain number of predicates, which together form the invariant of the machine. The invariant allows each variable to be typed set-theoretically. The sets defined in the sets clause of a machine constitute the basis of its type system. An abstract machine also has an initialization, which is a substitution. Finally, an abstract machine has a number of operations defined with the following syntax:

Variable ←− Identifier [(Variable)] ≙ Substitution
Identifier [(Variable)] ≙ Substitution

Once an abstract machine has been written, one has to check that a certain number of conditions are met. Such conditions together form the proof obligations of the machine; they are shown below in a simplified form for the small machine skeleton that follows.

machine
   OperationName ≙ pre L then S end;
end

Proof obligation templates: the first proof obligation is just the existence proof obligation for the variables. The second one concerns the establishment of the invariant by the initialization. The last one concerns the preservation of the invariant by each operation.
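As a toy illustration of the last proof obligation (invariant preservation), the Python sketch below encodes a state, an invariant, and an operation given as a precondition plus a multiple substitution, and checks exhaustively over a small finite state space that pre ∧ invariant implies the weakest precondition [S]invariant. The bounded-counter machine, its bound, and the state space are invented for illustration and are not taken from the paper.

```python
# Toy check of the AMN proof obligation "pre ∧ invariant => [S] invariant"
# for an operation whose body is a multiple substitution  x, y := E, F.
# The weakest precondition of an assignment is the postcondition with the
# assigned variables replaced by the assigned expressions, which here is
# obtained simply by evaluating the postcondition in the updated state.
# The concrete machine (a bounded counter) is an invented example.

from itertools import product

BOUND = 5

def invariant(state):
    x, y = state
    return 0 <= x <= BOUND and 0 <= y <= BOUND and x + y <= BOUND

def pre(state):
    x, y = state
    return x + y < BOUND              # precondition of the operation "step"

def substitution(state):
    x, y = state
    return (x + 1, y)                 # multiple substitution  x, y := x + 1, y

def wp_of_assignment(post, subst, state):
    """[x, y := E, F] post  ==  post evaluated after the substitution."""
    return post(subst(state))

def check_preservation():
    for state in product(range(BOUND + 2), repeat=2):
        if invariant(state) and pre(state):
            if not wp_of_assignment(invariant, substitution, state):
                return state          # counterexample
    return None

if __name__ == "__main__":
    cex = check_preservation()
    print("invariant preserved" if cex is None else f"violated at {cex}")
```

The exhaustive loop is only feasible because the toy state space is finite; in B, the same obligation is discharged by a symbolic proof over the generalized substitution calculus.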
Basic modularization features of AMN includes and promotes clauses are introduced to make a machine that is based on the composition of other machines.includes gives names of included machines with renaming (dotted identification that required in the case of multiple inclusion) and with actual parameters.Natural number parameters are instantiated by expressions denoting natural numbers and the set parameters are instantiated by means of expressions denoting simple sets. A machine including other machines can have its own glueing invariants that should be preserved by promoted and new operations of the new machine.New machine may have its own variables with the corresponding initialization. The promotes clause contains the list of operation names of those operations of the included machines that are to become without any modifications genuine operations of the new machine. The uses clause is different from includes.Operations of used machines cannot be mentioned in the using machine.Several machines can thus use the same machine.Parameters of the used machines are not instantiated.Sets, constants and variables of the used machines can be read in the using machine. sees clause contrary to uses does not allow to mention elements of the seen machines in the invariants of seeing machine.Thus a seen machine can be refined independently of the machine that sees it. Refinement of Abstract Machines The ultimate goal of the B-technology is to have abstract machine implemented eventually as software modules by means of some programming notation.So, we have to transform abstract machines so that they could eventually be implemented by means of the programming notation.This will be done by a step by step restriction of the constructs that could be used further.This activity is called a refinement. Algorithmic refinement consists in removing of nondeterminism by being more and more precise about the way our operations are to be eventually made concrete.At the same time we should relax preconditions. Data refinement consists in removing completely all variables whose types are too complicated to be implemented as such and in replacing them by simpler variables whose types correspond to those found in programming notations: that is, essentially, natural numbers taken in certain intervals (scalar types) and functions from scalar types to themselves (array types). Defining data refinement we suppose that we have two substitutions S and T working within two different machines (within two distinct variable spaces represented say, by two variables x and y).We assume that these variables are members of the two respective sets s and t so that x ∈ s and y ∈ t are respective invariants of these machines.We suppose that these variables are related by a certain binary relation v from s to t such that ran(v) is equal to t.The relation v is called the abstraction relation. Now, the refinement of the abstract machines is defined as follows: a machine N is said to refine a machine M if a user can use N instead of M without noticing it. 
A syntactic construct, the refinement, is introduced; it resembles a machine. However, a refinement can refine either a machine or another refinement. The invariant clause of a refinement is just the abstraction relation defined above: it expresses the change of variables between the two constructs. The operations of the refinement only involve the variables of the refinement, not those of the construct being refined. At the other extreme, pure algorithmic refinement can take place: in this case the variables of the refinement and of the construct being refined are the same. In a refinement, new given sets may be introduced, as well as a more precise value for a given deferred set.

After the refinement is specified, it is necessary to prove that it indeed refines what it claims to refine. For this, a number of proof obligations are generated according to templates related to the following abstract machine, whose operation has the form

OpName ≙ pre Q then T end

and its refinement

machine Identifier refines AM_Identifier
   variables y
   invariant R
   initialization U
   operations
      z ←− OpName ≙ pre L then V end;
end

The proof obligation templates take the form ∃(x, y) …; here V′ stands for the substitution V within which the variable z has been replaced by z′. The first of the proof obligations is an existence proof obligation for the new variables of the refinement. The second deals with the correct refinement of the initialization, and the third deals with the correct refinement of the operations.

Abstract machine resulting from the Proposal type mapping. To save space we have omitted from the example the machines constituting the environment into which the specified machines are embedded. We also do not show all the operations of the machines. The machines above were proved by I. A. Chaban using the B-Toolkit environment [1]. The proof justifies the consistency of the semi-formal specifications and the correctness of the design with reuse of the pre-existing components.

CONCLUSION The CBSD method complementing industrial OAD methods and interoperable environments has been overviewed. The design technique proposed is based on the interoperable reuse of pre-existing components. For that, the coherence of the contexts of the problem domains and of the resources is negotiated, the search for the relevant resource specifications is supported, and discrepancy reconciliation approaches and a concretization view construction technique are provided.

The CBSD method is based on heuristic provisions for the search and composition of relevant components into views serving as refinements of the specification of requirements. Formal specification languages are far from being an ideal tool for exploring and discovering the problem structure during refinement. On the other hand, the object models widely used in OAD CASE tools are not sufficiently semantically rich to cope with the CBSD open issues identified.

To provide adequate semantic facilities suitable for heuristic methods augmented with formal proof and refinement facilities, the SYNTHESIS modeling tools are separated into semi-formal and formal ones. The uniform, purely behavioral object model was developed for the former; model-theoretic facilities were used for the latter.

The paper contributes to the understanding of the interaction of semi-formal and formal modeling facilities in the forward and backward phases of the design of semantically interoperable information systems.

Fig. 1. SYNTHESIS method structure.

Based on the analysis of the resource classes corresponding to the application class, concretization axioms stating how to refine the application class states by the states of the relevant resource classes should be constructed. From the analysis of the submission/project specification, the designer should deduce concretization axioms showing how to refine the proposal class state by the submission/project class states; axioms is a predicate constraining the admissible states of the instances of the virt proposal class.
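The refinement and concretization obligations discussed above can also be explored mechanically on small finite examples. The sketch below (Python) performs an exhaustive data-refinement check for an invented toy pair of machines: an abstract machine whose state is a set of accepted items and a concrete machine whose state is a duplicate-free list, glued by the relation "the set of list elements equals the abstract set". Neither machine comes from the Proposal/submission example; the sketch only illustrates the shape of the check that the B-Toolkit proof obligations discharge formally.

```python
# Toy data-refinement check: for every glued pair of states and every
# operation step enabled in the concrete machine, the abstract machine must
# be able to take the same step and end in a state glued to the concrete
# result, so a user cannot observe the difference.  Both machines and the
# item universe are invented for illustration.

from itertools import permutations, combinations

ITEMS = ("a", "b", "c")
CAPACITY = 2

# ---- abstract machine: state is a frozenset of accepted items ------------
def abs_states():
    return [frozenset(c) for r in range(CAPACITY + 1)
            for c in combinations(ITEMS, r)]

def abs_add(state, item):
    if item not in state and len(state) < CAPACITY:    # precondition
        return state | {item}
    return None                                         # step not enabled

# ---- concrete machine: state is a duplicate-free tuple -------------------
def conc_states():
    return [p for r in range(CAPACITY + 1)
            for c in combinations(ITEMS, r)
            for p in permutations(c)]

def conc_add(state, item):
    if item not in state and len(state) < CAPACITY:    # precondition
        return state + (item,)
    return None

def glued(conc, abst):
    """Abstraction (gluing) relation v between concrete and abstract states."""
    return frozenset(conc) == abst

def check_refinement():
    for conc in conc_states():
        for abst in abs_states():
            if not glued(conc, abst):
                continue
            for item in ITEMS:
                conc2 = conc_add(conc, item)
                if conc2 is None:
                    continue                            # concrete step disabled
                abst2 = abs_add(abst, item)
                if abst2 is None or not glued(conc2, abst2):
                    return (conc, abst, item)           # counterexample
    return None

if __name__ == "__main__":
    cex = check_refinement()
    print("refinement holds" if cex is None else f"refinement violated: {cex}")
```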
6,501.6
0001-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Evaluation of a field-deployable reverse transcription-insulated isothermal PCR for rapid and sensitive on-site detection of Zika virus Background The recent emergence of Zika virus (ZIKV) in Brazil and its precipitous expansion throughout the Americas has highlighted the urgent need for a rapid and reliable on-site diagnostic assay suitable for viral detection. Such point-of-need (PON), low-cost diagnostics are essential for ZIKV control in vulnerable areas with limited resources. Methods We developed and evaluated a ZIKV-specific field-deployable RT-iiPCR reagent set targeting the E gene for rapid detection of ZIKV in ZIKV-spiked human and mosquito specimens, and compared its performance to the Center for Disease Control and Prevention (CDC) and Pan American Health Organization (PAHO) RT-qPCR assays targeting the E and NS2B genes, respectively. Results These assays demonstrated exclusive specificity for ZIKV (African and Asian lineages), had limits of detection ranging from 10 to 100 in vitro transcribed RNA copies/μl and detection endpoints at 10 plaque forming units/ml of infectious tissue culture fluid. Analysis of human whole blood, plasma, serum, semen, urine, and mosquito pool samples spiked with ZIKV showed an agreement of 90% (k = 0.80), 92% (k = 0.82), 95% (k = 0.86), 92% (k = 0.81), 90% (k = 0.79), and 100% (k = 1), respectively, between the RT-iiPCR assay and composite results from the reference RT-qPCR assays. Overall, the concurrence between the ZIKV RT-iiPCR and the reference RT-qPCR assays was 92% (k = 0.83). Conclusions The ZIKV RT-iiPCR has a performance comparable to the reference CDC and PAHO RT-qPCR assays but provides much faster results (~1.5 h) with a field-deployable system that can be utilized as a PON diagnostic with the potential to significantly improve the quality of the health care system in vulnerable areas. Electronic supplementary material The online version of this article (10.1186/s12879-017-2852-4) contains supplementary material, which is available to authorized users. Background Zika virus (ZIKV) is a mosquito-borne flavivirus first isolated in 1947 from a febrile rhesus macaque monkey in the Zika Forest of Uganda and subsequently identified in infected Aedes africanus mosquitoes [1,2]. Human infection was first reported in Nigeria in 1954 [3], however ZIKV remained in relative obscurity for nearly 60 years until a change in its infection pattern was observed with the occurrence of the first major outbreak in Yap (Federated States of Micronesia) where approximately 74% of the population were infected and 18% of the infected people developed symptomatic disease [4], typically characterized by an acute, mild febrile illness of short duration. Since then, ZIKV has spread throughout the Pacific, and serosurveillance studies suggest that ZIKV infection is widespread throughout Africa, Asia, and Oceania [4][5][6][7]. In March 2015, ZIKV was first identified in the Americas associated with an extensive outbreak of exanthematous illness in Bahia, Brazil with an estimate of 1.3 million suspected cases by December 2015 [8][9][10][11]. The virus precipitously spread throughout the Americas and has now been reported in at least 33 countries including Puerto Rico, US Virgin Islands, and the continental US [5,6,12,13]. Although the vast majority of infected individuals (approximately 80%) remain asymptomatic, ZIKV can cause a wide range of clinical manifestations ranging from a mild, acute febrile illness to severe neurologic disease (i.e. 
Guillain-Barré syndrome), and devastating congenital anomalies including microcephaly, ocular malformations, and other neurologic defects [5,6,[37][38][39][40][41][42][43]. However, in adult individuals where clinical manifestations do occur they are usually mild, selflimiting, and non-specific associated with an acute febrile illness characterized by low-grade (~38°C) and short-term (2-7 days) fever, fatigue, rash, arthralgia, myalgia, headache, and conjunctivitis. These clinical signs are indistinguishable from those induced by many other flaviviral or alphaviral infections. Hence, laboratory diagnosis of ZIKV is mandatory to confirm the clinical diagnosis [5,43,44]. Therefore, the availability of rapid, reliable, and relatively low cost diagnostic tools is of utmost importance for ZIKV control and management. Currently, clinical diagnosis of ZIKV infection relies on serological assays for the detection of antibodies (including rapid lateral-flow immunochromatographic assays, IgM capture enzyme-linked immunosorbent assay [MAC-ELISA], and plaque reduction neutralization test [PRNT]) and molecular-based assays for the detection of viral nucleic acids (conventional or quantitative, real-time reverse transcription polymerase chain reaction [RT-qPCR]) [43][44][45]. Serological assays do not offer a suitable specificity due to the extensive antibody cross-reactivity with other flaviviruses [43][44][45]. In contrast, molecular-based assays for detection of ZIKV RNA (e.g. RT-qPCR) are high throughput, sensitive, and highly specific. Several conventional and RT-qPCR assays have been described [43][44][45][46][47][48][49][50][51]. To date, there are two ZIKV RT-qPCR assays validated by the Center for Disease Control and Prevention (CDC; Atlanta, GA, USA) which target the prM and E genes [17], and an NS2B-specific RT-qPCR assay recently developed by the Pan American Health Organization (PAHO) in response to the ZIKV outbreak in South America which intends to replace the CDC-validated ZIKV prM RT-qPCR assay of lower sensitivity [43]. However, the use of RT-qPCR assays as diagnostic tests requires centralized laboratory facilities, trained personnel, expensive equipment, and extended turnaround times associated with sample transportation over large distances. Consequently, RT-qPCR assays are not suitable for use within clinical settings in rural areas or may not be available in areas with poor resources including developing countries where ZIKV is spreading at an accelerated rate. Therefore, the socio-economic gap implies that a significant number of suspected cases do not have access to appropriate testing. For these reasons, point-of-need (PON) molecular detection tools for easy, rapid, reliable, inexpensive, and on-site ZIKV testing can not only significantly improve the quality of the health care system in vulnerable areas, but also ensure rapid testing in blood banks and provide enhanced field surveillance of ZIKV transmission with an overall impact of major significance on public health. To date, only three potential PON, molecular-based assays to detect ZIKV RNA have been developed, although not extensively evaluated on target diagnostic specimens [52][53][54]. Recently, a fluorescent probe hydrolysis-based insulated isothermal PCR (iiPCR) for amplification and detection of nucleic acids has been described [55] for a number of important pathogens including DENV, Middle East respiratory syndrome coronavirus (MERS-CoV), and Plasmodium spp. in human specimens [56][57][58]. 
The iiPCR is highly sensitive and specific for the detection of both DNA and RNA not only from human but also from various animal pathogens [59–70]. The PCR reaction (denaturation, annealing, and extension) is accomplished in a capillary vessel (R-tube™; GeneReach USA, Lexington, MA, USA) heated through the bottom end of the tube where, based on the Rayleigh-Bénard convection principle, the fluids cycle through temperature gradients. The results are ready in a short time (~1.5 h) within a field-deployable device (POCKIT™ Nucleic Acid Analyzer, GeneReach USA). Integration of the hydrolysis probe technology and an optical detection module allows automatic detection and interpretation of iiPCR results in the form of "positive" or "negative" readouts in a relatively low-cost device [55] (Fig. 1). (Fig. 1 legend: This system includes a compact automatic nucleic acid extraction device (taco™ mini) and a portable PCR device (POCKIT™). After sample collection, nucleic acids are extracted using a preloaded extraction plate in approximately 30 min and, subsequently, the lyophilized RT-iiPCR reaction is reconstituted and nucleic acids are added and tested. TaqMan® probe hydrolysis-based amplification signals are detected and automatically processed, providing qualitative results on the display screen after 60 min.)

In this study, we developed and evaluated a PON one-step RT-iiPCR reagent set targeting the E gene for the detection of ZIKV RNA from spiked specimens in a field-deployable system (POCKIT™). The analytical sensitivity and specificity were extensively analyzed and compared to the reference CDC (prM and E genes) and PAHO (NS2B gene) singleplex RT-qPCR assays. Subsequently, the performance of the three assays was compared using ZIKV-spiked specimens (including whole blood, plasma, serum, semen, and urine) and homogenized mosquito pools.

Tissue culture fluid (TCF) derived from Vero cells infected with the ZIKV PRVABC59 (ATCC® VR-1843™), FLR (ATCC® VR-1844™), and MR766 (ATCC® VR-1838™) strains was used for the analytical sensitivity and specificity evaluation of the ZIKV-specific RT-qPCR and RT-iiPCR assays. Briefly, confluent monolayers of Vero cells were inoculated with a 1/10 dilution of the ZIKV PRVABC59, FLR, and MR766 strains in a minimal volume of maintenance media without fetal bovine serum. After 1 h adsorption at 37°C, monolayers were overlaid with complete EMEM and incubated at 37°C and 5% CO2 until 100% cytopathic effect was observed (72 h post infection). Infected flasks were frozen/thawed, clarified by centrifugation at 1500 × g for 15 min at 4°C, aliquoted, and stored at −80°C. Mosquito cell lines, C6/36 and AP-61, were infected in a similar fashion. Viral stocks were subsequently titrated in confluent 6-well plates of Vero cells. Briefly, serial ten-fold dilutions (10^−1 to 10^−12) of virus stocks were prepared in 1X MEM (Mediatech, Inc.) and 200 μl of each dilution were added to duplicate wells. After 1 h adsorption at 37°C and 5% CO2, infected monolayers were overlaid with complete EMEM supplemented with 0.75% carboxymethylcellulose (Sigma-Aldrich, St. Louis, MO) and incubated for 96 h. Monolayers were stained with a 1% crystal violet solution, and viral titers were expressed as plaque forming units per ml (PFU/ml) of TCF.
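The titration arithmetic in this passage is simple enough to script. The sketch below (Python) converts plaque counts into PFU/ml given the dilution factor and the 200 μl inoculum volume used above; the plaque counts themselves are made-up numbers, shown only to illustrate the calculation.

```python
# Convert plaque counts to a titer in PFU/ml:
#   titer = mean plaque count / (inoculum volume in ml * dilution factor)
# The inoculum volume (0.2 ml) matches the protocol above; the plaque counts
# and the chosen dilution are invented example values.

INOCULUM_ML = 0.2          # 200 μl per well

def titer_pfu_per_ml(plaque_counts, dilution_exponent, inoculum_ml=INOCULUM_ML):
    """plaque_counts: counts from replicate wells at dilution 10**dilution_exponent."""
    mean_count = sum(plaque_counts) / len(plaque_counts)
    dilution_factor = 10.0 ** dilution_exponent        # e.g. -6 for a 10^-6 dilution
    return mean_count / (inoculum_ml * dilution_factor)

if __name__ == "__main__":
    # Example: 18 and 22 plaques in duplicate wells of the 10^-6 dilution.
    print(f"{titer_pfu_per_ml([18, 22], -6):.2e} PFU/ml")   # -> 1.00e+08 PFU/ml
```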
Human blood, urine and semen samples and mosquitoes Unused whole blood (6 ml tubes containing EDTA) and serum (6 ml clot tubes) from 20 healthy donors were obtained through the Kentucky Blood Center, Beaumont Centre Circle, Lexington, KY to be used in this study. Archived urine samples from healthy volunteers were obtained from the BioBank, Center for Clinical and Translational Science, Chandler Medical Center, College of Medicine, University of Kentucky, Lexington, Kentucky. All the donors have provided informed consent at the time of sample submission and the specimens were coded and individual identifiers were permanently removed from specimens. Human semen samples were obtained from a commercial source (Lee Biosolutions, Inc., Maryland Heights, MO, USA). Dead A. albopictus mosquitoes were obtained from the Department of Entomology, College of Agriculture, Food and Environment, University of Kentucky, Lexignton, Kentucky. ZIKV-spiked human specimens (whole blood, plasma, serum, semen, and urine) Whole blood and serum specimens from each donor were separated into six aliquots and spiked with ZIKV PRVABC59 strain (1 × 10 7 PFU/ml of TCF) to yield different viral titers (10 6 , 10 3 , 10 2 , 10, and 1 PFU/ml) of whole blood or serum. One aliquot of both whole blood and serum from each donor was inoculated with an equivalent volume of uninfected EMEM as mock-spiked control. Overall, a total of 20 specimens for each viral concentration were generated (20 donors x [five different viral concentrations plus one mock-spiked control] = 120 whole blood/serum specimens). An aliquot from each spiked whole blood specimen (n = 120) was stored at −80°C until nucleic acid extraction, while the remaining was centrifuged at 1000 X g for 10 min at 4°C for plasma separation (n = 120) and stored at −80°C until nucleic acid extraction. Spiked serum samples (n = 120) were stored at −80°C until nucleic acid extraction. Similarly, a total of 20 archived urine samples (stored at −80°C) were obtained from volunteer donors (males/ females). Urine samples (n = 120) were separated and spiked with different concentrations of ZIKV PRVABC59 strain as described above and stored at −80°C until nucleic acid extraction. In addition, a total of 4 pooled whole semen samples (1 ml each, 3 human donors per pool, 12 total human donors) from healthy, certified infectious disease-free male donors were purchased from Lee Biosolutions, Inc. Each pool was separated into 6 aliquots and spiked with different concentrations of ZIKV PRVABC59 strain to reach viral titers of 10 6 , 10 3 , 10 2 , 10, and 1 PFU/ml of whole semen as explained above. One aliquot was inoculated with an equivalent volume of uninfected EMEM as mock-spiked control. Spiked aliquots were stored at −80°C until nucleic acid extraction. Nucleic acid extraction Nucleic acids from TCF, spiked human specimens (whole blood, plasma, serum, semen, and urine), and spiked mosquito pools were extracted using an automated magnetic bead-based extraction system (taco™ mini, GeneReach USA) as previously described [56,66]. Briefly, 200 μl of TCF, spiked whole blood, plasma, serum, urine, or supernatant derived from mosquito pools were added into the first well of a taco™ Preloaded DNA/RNA Extraction plate (GeneReach USA) containing lysis buffer and subjected to the extraction steps as described in the manufacturer's user manual. Elution was performed with 200 μl of Elution buffer. 
Due to sample limitations, 100 μl of spiked semen samples were used and nucleic acids were eluted with 100 μl of Elution buffer. All nucleic acids were stored at −80°C for future use. Synthesis of target genes and in vitro transcribed RNA preparation ZIKV-specific in vitro transcribed (IVT) RNA was synthesized in order to determine the analytical sensitivity of the ZIKV-specific RT-iiPCR and compared with the ZIKV-specific CDC (prM and E) and PAHO (NS2B) RT-qPCR assays. For this purpose, a 614 nt insert con- Scientific, Regensburg, Germany). Subsequently, E. coli K12 DH10B™ T1R were transformed with the construct. Transformed bacteria were cultured overnight at 37°C with shaking (270 rpm). Plasmid DNA was purified using QIAprep Spin Miniprep kit (Qiagen, Valencia, CA) following the manufacturer's instructions and screened by restriction digestion using the unique EcoRI, BamHI, and HindIII restriction sites within and flanking the insert. Sequence authenticity was confirmed by Sanger sequencing using T7 and SP6 promoter-specific primers. Plasmid DNA (1 μg) was linearized using HindIII, purified using the High Pure PCR Product Purification kit (Roche, Indianapolis, IN) as instructed, and 0.5 μg of plasmid DNA was used for in vitro transcription of the ZIKV MENS2B5 insert using the Megascript® T7 Transcription kit (ThermoFisher Scientific, Waltham, MA) following the manufacturer's recommendations. Residual plasmid DNA was removed by digestion with TURBO™ DNase (ThermoFisher Scientific) for 15 min at 37°C. The IVT RNA product was analyzed by agarose gel electrophoresis, subjected to a clean-up procedure using the MEGAclear™ Transcription Clean-Up kit (ThermoFisher Scientific), and quantified using a NanoDrop 2000 spectrophotometer (ThermoFisher Scientific). The ZIKV MENS2B5 IVT RNA was stored at −80°C until used. The number of ZIKV IVT RNA molecules per microliter (copies/μl) was calculated according to the following formula: The concentration of ZIKV IVT RNA was adjusted to 10 7 copies/μl using nuclease-free water containing 40 ng/ μl of Ambion® Yeast tRNA (ThermoFisher Scientific), and serially ten-fold diluted (10 7 − 0.1 IVT RNA copies/μl) using nuclease-free water containing Ambion® Yeast tRNA. ZIKV-specific TaqMan® real-time RT-PCR assays The CDC-validated ZIKV-specific TaqMan® RT-qPCR assays targeting prM and E genes along with the PAHO ZIKV-specific TaqMan® RT-qPCR assay targeting NS2B gene were utilized as previously described. Primer and probe sequences as well as fluorescent dyes and quenchers used are shown in Table 2. The reaction was set up using the QuantiTect Probe RT-PCR kit (Qiagen) following the manufacturer's recommendations. Briefly, the 25 μl reaction contained 12.5 μl of 2X QuantiTect Probe RT-PCR Master Mix with ROX, 0.25 μl QuantiTect RT Mix, 200 nM TaqMan® fluorogenic probe, 500 nM each primer, and 5 μl of template RNA. Reverse transcription and amplification were carried out in an ABI 7500 Fast Real-time PCR System (Applied Biosystems®, Life Technologies, Grand Island, NY). The program included 30 min at 50°C (reverse transcription step), 15 min at 95°C (PCR initial activation step), followed by 45 cycles at 94°C for 15 s (denaturation) and 60°C for 1 min (combined annealing/ extension). 
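The copy-number formula for the IVT RNA stock is presumably the standard single-stranded RNA conversion based on Avogadro's number and an average ribonucleotide mass of about 340 g/mol; the Python sketch below implements that standard calculation. The example concentration and transcript length are placeholders (the synthesized insert is 614 nt, but the full in vitro transcript may include additional vector-derived sequence), so treat the numbers as illustrative only.

```python
# Standard ssRNA copy-number calculation (assumed, not quoted from the study):
#   copies/μl = (concentration in ng/μl * 6.022e23) / (length in nt * 340 * 1e9)
# where 340 g/mol is the approximate average mass of one ribonucleotide and
# 1e9 converts grams to nanograms.  The example concentration and transcript
# length below are placeholder values, not measurements from the study.

AVOGADRO = 6.022e23          # molecules per mole
NG_PER_G = 1e9
AVG_NT_MASS = 340.0          # g/mol per ribonucleotide (approximate)

def ssrna_copies_per_ul(conc_ng_per_ul, length_nt):
    molar_mass = length_nt * AVG_NT_MASS                # g/mol of the transcript
    return conc_ng_per_ul * AVOGADRO / (molar_mass * NG_PER_G)

if __name__ == "__main__":
    # Placeholder: a 614 nt transcript measured at 120 ng/μl.
    copies = ssrna_copies_per_ul(120.0, 614)
    print(f"{copies:.2e} copies/μl")
    # Dilution factor needed to reach the 1e7 copies/μl working stock used above:
    print(f"dilution factor ≈ {copies / 1e7:.1f}")
```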
Even though the analytical sensitivity and specificity of all ZIKV-specific RT-qPCR assays (targeting prM, E, and NS2B) were evaluated, only the ZIKV RT-qPCR assays targeting E and NS2B were used to assess performance in ZIKV-spiked specimens and to compare with the performance of the ZIKV RT-iiPCR reagent set. Amplification with one of the two ZIKV RT-qPCR assays (E and NS2B) determined a sample as positive, with a cutoff Ct value of ≤38.5 as described by Lanciotti et al. [17]. Samples with 38.5 < Ct value ≤ 45 were considered inconclusive.

ZIKV-specific reverse-transcription insulated isothermal PCR. The ZIKV-specific RT-iiPCR (POCKIT™ Zika Virus Reagent Set) assay was designed to target the E gene of ZIKV (proprietary). The RT-iiPCR reaction conditions, such as the concentrations of primers and probe, Taq DNA polymerase, and reverse transcriptase, were tested systematically to obtain the highest sensitivity and specificity. Following optimization of the RT-iiPCR assay conditions, the reagents including primers and probe were lyophilized (proprietary) and used in this study. Briefly, after reconstituting the lyophilized pellet with 50 μl of Premix Buffer B (GeneReach USA), 5 μl of the sample nucleic acid was added to the reaction. Subsequently, 50 μl of the final mixture was transferred into an R-tube™ (GeneReach USA), sealed with a cap, spun for 10 s in a cubee™ centrifuge (GeneReach USA), and placed into a POCKIT™ device (GeneReach USA). The default program, which included an RT step at 50°C for 10 min and an iiPCR step at 95°C for 30 min, completed in less than one hour. Signal-to-noise (S/N) ratios, i.e. light signals collected after iiPCR divided by fluorescent signals collected before iiPCR [65], were converted automatically to "+", "-", or "?" according to the default S/N thresholds by the built-in algorithm. The results were shown on the display screen at the end of the program. A "?" indicated that the results were ambiguous and the sample should be tested again (Fig. 1).

Statistical analysis. Standard curves were generated using nucleic acids prepared from serial dilution series of both a ZIKV-infected TCF stock (1 × 10^7 PFU/ml) and IVT RNA (10^7 to 0.1 IVT RNA copies/μl). Pearson correlation coefficients (R^2) were used to assess curve fit. PCR amplification efficiencies (%) were calculated after regression analysis using the formula E = (10^(−1/slope) − 1) × 100. The limit of detection with 95% confidence (LOD 95%) was determined by statistical probit analysis (a non-linear regression model) using the commercial software SPSS 14.0 (SPSS Inc., Chicago, IL, USA) for all assays (ZIKV prM, E, and NS2B RT-qPCR, and ZIKV E RT-iiPCR). The performance of the ZIKV RT-iiPCR in spiked specimens was compared to the combined use of the E and NS2B RT-qPCR assays; the overall degree of agreement between the assays (combined CDC E and PAHO NS2B RT-qPCR vs. RT-iiPCR) was evaluated for the total number of specimens, and also by sample type category independently. Contingency tables (2 × 2) for ZIKV- and mock-spiked samples were generated to estimate the relative sensitivity and specificity of each assay per sample category, and compared using McNemar's test for paired data. The level of significance was set at 0.01.

Results. Comparison of the analytical sensitivity and specificity of the ZIKV RT-iiPCR and reference CDC and PAHO ZIKV RT-qPCR assays. (i) Analytical sensitivity.
The analytical sensitivity of the PON ZIKV RT-iiPCR was determined using (a) a ten-fold dilution series (six replicates per dilution) of ZIKV IVT RNA (10^7 to 0.1 IVT RNA copies/μl) containing the target sequence, and (b) ten-fold serial dilutions (10^0 to 10^−13) of nucleic acid extracted from TCF derived from ZIKV PRVABC59-infected Vero cells containing a viral titer of 10^7 PFU/ml. These samples were also used to determine the analytical sensitivities of the CDC-validated prM and E, and PAHO-validated NS2B ZIKV RT-qPCR assays (Tables 3 and 4). Standard curves generated for the three RT-qPCR assays using both a serial dilution of infectious TCF and IVT RNA demonstrated excellent linearity (R^2 > 0.99) and optimal amplification efficiencies ranging between 97% and 105% (data not shown). For the ZIKV IVT RNA serial dilution, the RT-iiPCR showed 100%, 83%, 83%, 17%, and 0% detection rates for reaction mixtures containing 1000, 100, 10, 1, and 0.1 IVT RNA copies/μl, respectively (Table 3), and a 100% detection endpoint at 10 PFU/ml of infectious TCF (PRVABC59 strain, Table 4). Probit analysis determined that the limit of detection with 95% confidence (LOD95%) of the ZIKV RT-iiPCR was 130 copies/μl of ZIKV IVT RNA. Regarding the CDC and PAHO RT-qPCR assays, the 100% detection endpoints were found at 10,000 IVT RNA copies/μl and 100 PFU/ml of infectious TCF for the prM RT-qPCR assay, 100 IVT RNA copies/μl and 10 PFU/ml of infectious TCF for the E RT-qPCR assay, and 10 IVT RNA copies/μl and 10 PFU/ml of infectious TCF for the NS2B RT-qPCR assay, respectively (Tables 3 and 4). LOD95% was estimated at 4102, 21, and 6 IVT RNA copies/μl for the prM, E, and NS2B RT-qPCR assays, respectively. Therefore, the overall analytical sensitivity of the ZIKV RT-iiPCR was comparable to that of the CDC E and PAHO NS2B RT-qPCR assays in detecting viral RNA, while having a higher performance when compared to that of the CDC prM RT-qPCR assay. (ii) Analytical specificity. The specificity and pan-reactivity of the ZIKV CDC and PAHO RT-qPCR and RT-iiPCR assays were evaluated using a panel of reference viral RNA from different ZIKV strains (African and Asian lineages) as well as other flaviviruses and alphaviruses that frequently cause similar clinical symptoms including DENV serotypes 1-4, WNV, YFV, and CHIKV (Table 1). The CDC prM and PAHO NS2B RT-qPCR assays were able to detect all ZIKV strains from the Asian lineage (Table 5). The CDC E RT-qPCR and the RT-iiPCR assays successfully detected all ZIKV strains from both lineages (Table 5). Moreover, the RT-iiPCR detected ZIKV RNA (PRVABC59 and FLR [Asian lineage], and MR766 [African lineage] strains) derived from both infected mammalian (Vero) and mosquito (C6/36 and AP-61) cell lines. All assays were highly specific and did not detect any other related flaviviruses or CHIKV (Table 5). Performance evaluation of the RT-iiPCR using ZIKV-spiked human samples As a result of the lower analytical sensitivity of the CDC prM RT-qPCR assay, the performance of the ZIKV RT-iiPCR was evaluated and compared to the CDC E and PAHO NS2B RT-qPCR assays as recently recommended [43] using specimens (n = 481, including whole blood, plasma, serum, semen, urine, and mosquitoes) spiked with different concentrations of ZIKV PRVABC59 strain. Negative controls were generated by the addition of non-infected TCF to aliquots of the same clinical samples (mock-spiked). Detection rates per sample type and viral titer are shown in Additional file 1: Table S1. All ZIKV-spiked samples that yielded false negative results using the CDC-PAHO RT-qPCR assays (40/100) contained ≤100 PFU/ml.
Among these, eight samples yielded inconclusive results with Ct > 38.5 for at least one of the RT-qPCR assays, with titers of 10 (n = 6) or 1 (n = 2) PFU/ml of whole blood. Detection rates per viral titer are shown in Additional file 1: Table S1. All samples that yielded false negative results using the CDC-PAHO RT-qPCR assays (22/100) were spiked with ≤10 PFU/ml of ZIKV PRVABC59 strain (Table 6), which demonstrated a lower detection limit compared to other blood-derived specimens (i.e. whole blood and plasma). Among these, nine samples yielded inconclusive results with Ct > 38.5 for at least one of the RT-qPCR assays, with 7/9 having a titer of 1 PFU/ml and 2/9 having a titer of 10 PFU/ml. In contrast, the RT-iiPCR showed a higher detection rate and identified 90/100 ZIKV-spiked samples while none of the mock-spiked samples yielded positive results (0/20). Those samples that yielded false negative results (10/100) had viral titers of 1 PFU/ml (Table 6). The highest level of agreement between assays was observed for this sample type among other blood-derived specimens (95%; k = 0.86 [CI 95%: 0.76-0.97]) (Table 7). Even though the RT-iiPCR had a higher detection rate than the RT-qPCR assays, both assays consistently detected viral RNA in samples containing as low as 100 PFU/ml of virus. (iv) Semen. Since it has been recently demonstrated that ZIKV can be sexually transmitted from infected individuals, we assessed the performance of the ZIKV RT-iiPCR in spiked semen samples. Each of a total of four pooled semen samples (semen from three individuals per pool [total of 12 semen samples]) was spiked with 10^6, 10^3, 10^2, 10, 1, and 0 (mock) PFU/ml of ZIKV PRVABC59 strain (n = 24). The CDC-PAHO RT-qPCR detected 15/20 ZIKV-spiked samples while none of the negative samples yielded positive results (0/4) (Additional file 1: Table S1). One out of the five false negative results obtained using the CDC-PAHO RT-qPCR assays contained a viral titer of 10 PFU/ml, while the other samples that yielded negative results had viral titers of 1 PFU/ml of semen (Table 6). The RT-iiPCR detected 17/20 positive samples and 0/4 negative samples. The three ZIKV-spiked samples that were undetectable had viral titers of 1 PFU/ml (Table 6). In summary, the agreement between the two assays was 92% (k = 0.81 [CI 95%: 0.57-1]) (Table 7). Detection rates per viral titer are shown in Additional file 1: Table S1. All samples that yielded false negative results (43/100) were spiked with ≤100 PFU/ml of ZIKV PRVABC59 strain (Table 6). Among the eight samples that yielded inconclusive results (Ct > 38.5 for at least one of the RT-qPCR assays), 5/8 and 3/8 had a titer of 10 PFU/ml and 100 PFU/ml, respectively. In contrast, the RT-iiPCR showed a higher detection rate and identified 73/100 ZIKV-spiked samples while none of the mock-spiked samples yielded positive results (0/20). Those samples that yielded false negative results (27/100) had viral titers ≤10 PFU/ml (Table 6). The level of agreement between assays for this sample type was 90% (k = 0.79 [CI 95%: 0.67-0.89]) (Table 7). Performance evaluation of the RT-iiPCR using ZIKV-spiked mosquito pools The performance of the PON ZIKV RT-iiPCR was also evaluated in ZIKV-spiked mosquito pool specimens to assess its suitability as a rapid surveillance test in the vector population. Six mosquito pools (A. albopictus, n = 15 per pool) spiked with ZIKV PRVABC59 strain at concentrations ranging from 10^6 to 1 PFU/ml of mosquito pool homogenate (equivalent to 6 × 10^4 to 0.06 PFU/mosquito) and a mock-spiked A. albopictus pool (n = 15) were evaluated.
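The assay-agreement values quoted above are Cohen's kappa statistics with 95% confidence intervals; a hedged sketch of how such a value can be computed from paired calls is given below, using invented call vectors and a bootstrap CI rather than the study data.

```python
# Sketch of an agreement statistic: Cohen's kappa between paired RT-qPCR and
# RT-iiPCR calls, with a bootstrap 95% CI. The call vectors are placeholders.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
rt_qpcr = np.array([1] * 80 + [0] * 40)          # hypothetical calls (1 = positive)
rt_iipcr = rt_qpcr.copy()
rt_iipcr[75:82] = 1 - rt_iipcr[75:82]            # a few discordant calls

kappa = cohen_kappa_score(rt_qpcr, rt_iipcr)
boot = []
for _ in range(10000):
    idx = rng.integers(0, len(rt_qpcr), len(rt_qpcr))   # resample pairs with replacement
    boot.append(cohen_kappa_score(rt_qpcr[idx], rt_iipcr[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {kappa:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```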
Both the combined CDC-PAHO RT-qPCR and RT-iiPCR correctly identified 5/7 ZIKV-(10 6 − 10 PFU/ ml) and mock-spiked mosquito pools, with the exception of that containing 1 PFU/ml (Table 6), indicating 100% agreement between assays (Table 7). Overall performance comparison between ZIKV RT-iiPCR and the reference CDC and PAHO ZIKV RT-qPCR assays Analysis of a total of 481 spiked and mock-spiked whole blood, plasma, serum, semen, urine, and mosquito pool specimens (excluding samples that yielded inconclusive RT-qPCR results) determined an overall agreement of In contrast, no statistical differences in sensitivity were observed for serum, semen, and mosquito pool specimens between assays. Discussion ZIKV has caused a major pandemic in the Americas during 2015-2016, with serious repercussions to the healthcare system in Brazil as well as other Caribbean countries [1, 4-8, 10, 11, 38, 42, 71]. In addition to its vector-mediated transmission, it has been demonstrated that ZIKV can be shed in the semen of infected male patients and be effectively transmitted during sexual intercourse [27][28][29][30][31][72][73][74][75][76]. Furthermore, it also poses a significant threat to the blood bank network [33][34][35][36]77]. Even though there are two CDC-validated and one PAHO-validated RT-qPCR assays for molecular diagnosis of ZIKV infection [17,43], these are not suitable for use within clinical settings in rural areas or may not be available in areas with limited resources including developing countries where ZIKV is spreading at an accelerated rate. This disease, among other mosquito-borne infections, adds impetus to the development of accurate, rapid, inexpensive, and on-site detection methodologies (i.e. PON) that can aid in the clinical management of affected patients, disease surveillance, and control of epidemics in vulnerable areas and also ensure rapid testing of blood and blood products in blood banks. Here, we report the development and evaluation of a PON molecular detection test (RT-iiPCR assay) for the detection of ZIKV RNA in diverse human specimens that are likely to be encountered under field conditions. Furthermore, we determined that this assay is appropriate for detection of ZIKV RNA in homogenized mosquito pools, demonstrating its potential utility for monitoring viral prevalence in vector populations. This assay is based on the iiPCR technology [55], and it is designed for use in conjunction with a fully field-deployable device (POCKIT™ Nucleic Acid Analyzer, GeneReach USA) that allows rapid amplification and detection of viral nucleic acids (~1.5 h from sample to result, including nucleic acid extraction time [ Fig. 1]). A number of iiPCR-based assays have been developed for detection of human and animal pathogens [56,57,[59][60][61][62][63][64][65][66][67][68][69][70] with two of the most recent additions being directed against all serotypes of DENV and MERS-CoV [56,58]. The sensitivity and specificity of all iiPCR-based assays have demonstrated to be comparable with other diagnostic methods currently in use (e.g. RT-qPCR, nested PCR, virus isolation). However, RT-iiPCR offers several advantages over conventional molecular-based assays (e.g. RT-qPCR assays) including lyophilized reagents that can be transported at ambient temperature, ease of reaction setup, automated detection and simple result interpretation in the form of "+" (positive result) or "-" (negative result), and rapid results (Fig. 1). 
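The paired-sample sensitivity comparisons summarized above rely on McNemar's test at a significance level of 0.01; a minimal sketch follows, with a placeholder 2 × 2 table of concordant/discordant calls rather than the study counts.

```python
# Sketch of McNemar's test for paired assay calls, as used for the per-sample-type
# sensitivity comparisons. The contingency table below is a placeholder.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

#                 RT-iiPCR +   RT-iiPCR -
table = np.array([[70,          3],    # RT-qPCR +
                  [17,         30]])   # RT-qPCR -

result = mcnemar(table, exact=True)    # exact binomial test on the discordant pairs
print(f"statistic = {result.statistic}, p-value = {result.pvalue:.4f}")
print("significant at alpha = 0.01" if result.pvalue < 0.01 else "not significant at alpha = 0.01")
```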
The POCKIT™ system can be combined with fielddeployable manual or automatic nucleic acid extraction systems (PetNAD™ Nucleic Acid Co-prep Kit or taco™ mini Nucleic Acid automated extraction system [taco™ mini], GeneReach USA) or other column-based extraction systems of choice. Accordingly, a taco™ mini (30 × 26.5 × 26 cm, W x D x H, 5 kg) and a POCKIT™ device (31 × 26 × 15 cm, W x D x H, 2.1 kg) have been combined for field applications (POCKIT™ Combo), and can be powered by a car or rechargeable battery. The POCKIT™ Combo has been accepted as a mobile PCR tool in the management of animal health. Also, a hand-held model, POCKIT™ Micro Plus (6.3 × 15.2 × 5.0 cm, W x D x H; 0.3 kg; GeneReach USA) has been developed for field applications. Recently, feasibility of the combination of POCKIT™ Micro Plus and the automatic taco™ mini was demonstrated in a field test carried out in Vietnam for monitoring avian influenza A viruses in poultry markets. Test results using a influenza A RT-iiPCR reagent set were comparable to those of an RT-qPCR in a central laboratory (unpublished data). Furthermore, we have clearly demonstrated that this platform can be used for dengue and MERS-CoV diagnosis in human clinical samples [56,58]. Thus, the POCKIT™ system plus the automatic bead-based taco™ mini is potentially suitable for use as a PON tool for ZIKV detection in clinical specimens. To date, three other PON assays based on the use of either biomolecular sensors/CRISPR-based technology, reverse transcription loop-mediated isothermal amplification (RT-LAMP), or reverse transcription strand invasion based amplification (RT-SIBA) technologies have been described for detection of ZIKV RNA [52][53][54]. While these methods provide rapid, on-site results, they offer a limited sensitivity [52], limited specificity [54], or have not been compared to the CDC or PAHO-validated RT-qPCR of routine use in diagnostic laboratories [53]. In addition, their performance has not been evaluated in a large set of specimens. Instead, the ZIKV RT-iiPCR assay involves the use of a technology specifically developed for field application and which has already been validated for detection of several major pathogens in clinical samples, and is based on the TaqMan® chemistry which is less likely to yield false positive results. Detection of ZIKV RNA can be achieved in several sample types derived from infected individuals including blood-derived samples (whole blood, plasma, serum), other body fluids (semen, urine, saliva, vaginal secretions), and cytological specimens [43,[78][79][80]. The period of time during which viral RNA is detectable varies depending on the sample type as well as individual variation, ranging from a short (transient viremia) to a prolonged time post-infection in the case of other body fluids such as urine, semen, and saliva. Even though detection of ZIKV RNA during the viremic period is usually possible within the first week after disease onset [6,81], a recent study has estimated that ZIKV RNA loss occurs at a median of 14 days in serum (95th percentile up to 54 days), 8 days in urine (95th percentile up to 39 days), and 34 days in semen (95th percentile up to 81 dpi) in infected humans [79]. However, ZIKV has been detected for as long as 6 months in semen of some individuals [76]. Viral titers are also variable depending on the clinical specimen tested, days post-infection, and other factors. 
Viremia titers can range from 2 to 10 6 PFU/ml (~9 × 10 2 -7.3 × 10 5 viral RNA copies/ml) of blood [17,46] while urine titers seem to be frequently within the 10 to 10 3 PFU/ml range (~4.3 × 10 2 -2.5 × 10 5 viral RNA copies/ml) [82]. Interestingly, seminal shedding occurs at very high viral loads (2.9 × 10 8 -1.2 × 10 3 viral RNA copies/ml) [75]. In this study, specimens were spiked over a range of viral concentrations according to the estimated viral titers observed in ZIKV naturally infected individuals. Since the RT-iiPCR, CDC E, and PAHO NS2B assays showed an equal 100% detection rate (10 PFU/ml) and strong agreement between each other (k = 0.83), it is expected that the RT-iiPCR would have a similar clinical performance as the reference RT-qPCR assays and be suitable for detecting clinical specimens with at least a viral titer of 10 PFU/ml, while lower viral titers as those observed during late viremia may offer challenges and, consequently, the use of other tests may be more suitable at that stage of infection (i.e. serological tests). Even though we have extensively evaluated this assay using spiked human specimens and mosquitoes, testing of clinical specimens derived from infected individuals and mosquitoes is required to further confirm the performance of this new PON assay under field conditions. In this study, the ZIKV-specific RT-iiPCR assay demonstrated a comparable analytical sensitivity and specificity to reference RT-qPCR assays that have been validated by CDC and PAHO for diagnosis of this flaviviral infection in humans. The ZIKV RT-iiPCR targets a conserved region within the E gene and while it is capable of detecting ZIKV strains from both Asian and African lineages, it showed no reactivity with genomic RNA from other flaviviruses or CHIKV. Regarding the assay's performance in spiked specimens, the RT-iiPCR demonstrated a substantial level of agreement with the reference RT-qPCR assays (92%, k = 0.83). The best performance for both the RT-iiPCR and the reference RT-qPCR assays was observed for plasma, serum, semen, and mosquito pools, with levels of agreement higher than 90%. In the case of ZIKVspiked whole blood, plasma, and urine, false negative results were frequently observed for both the RT-iiPCR and reference RT-qPCR assays in those samples containing ≤100 PFU/ml of ZIKV. Such limitations in the detection of viral RNA in these samples were consistent with results from previous studies [46,48,83]; and may be associated with sample volume, the presence of PCR inhibitors [84][85][86], or extremely low concentrations of target RNA. Even though limited sample volumes may have an impact on the assay's performance, the use of a reduced volume of semen samples in this study (100 μl) did not appear to have detrimental effects on the results. Although this study suggests that serum may be a more suitable sample for PCR-based testing of ZIKV than whole blood or plasma, this needs to be further evaluated using clinical samples from naturally infected patients. Conclusions In conclusion, the ZIKV RT-iiPCR reagent set provides comparable performance to the reference CDC and PAHO RT-qPCR assays currently in use for diagnosis of ZIKV in a variety of spiked specimen types including mosquitoes. Nonetheless, further evaluation of its performance in clinical samples derived from infected patients is warranted. 
In contrast to the RT-qPCR assays, the RT-iiPCR assay is fully deployable under field conditions and, thus, can be used as a PON assay in remote, resource-deprived areas to provide rapid results (~1.5 h turnaround time from sample to result) at relatively low costs (< 10 USD per RT-iiPCR test vs. ≥ 20 USD per RT-qPCR test) and with the use of reagents that are stable at room temperature for two years without compromising the assay's performance. Therefore, the ZIKV RT-iiPCR could provide a highly effective PON assay that would enhance disease management, screening of blood bank supplies, and viral surveillance in human or insect populations with an improvement of the quality of the health care system of major significance particularly in remote or low-infrastructure areas within developing countries. Availability of data and materials The datasets from this study are available from the corresponding author upon request. Authors' contributions MC designed experiments, prepared the spiked clinical samples, performed RT-iiPCR and RT-qPCR assays and drafted the manuscript; YL propagated and titrated various ZIKV strains used in this study and helped to prepare spiked clinical samples and performed virus nucleic acid extractions, PAL performed data analysis and drafted the manuscript with MC; CFT and PHC designed and developed the ZIKV RT-iiPCR reagents; DW provided the human blood samples; AS helped MC to establish the CDC and PAHO RT-qPCR assays in the laboratory; RFC is the Co-PI and helped to write the manuscript; GB provided the mosquitoes; HGC and HTW are project leaders at GeneReach USA, and they coordinated and supervised the ZIKV RT-iiPCR assay development; and UBRB (principal investigator) conceived the idea to develop a PON assay for the detection of ZIKV RNA in clinical specimens and collaborated with GeneReach USA, secured funding for the project from the College of Agriculture, Food and Environment and the Maxwell H. Gluck Equine Research Center at the University of Kentucky, designed the experiments, directed and supervised the project, and edited the manuscript. All authors read and approved the final manuscript. Ethics approval and consent to participate Kentucky Blood Center provided the routine donor blood samples collected from allogeneic blood donors for the purpose of donor testing. The donor consent for blood donation includes that the blood sample(s) collected could be used for research purposes. Furthermore, the Institutional Review Board (IRB) designee determined that this project does not require IRB review because the samples utilized in this study were either commercial or de-identified from the Center for Clinical and Translational Science Biorepository (BioBank). Consent for publication All the authors approved the final version of the manuscript. Competing interests MC, YL, DW, AS, RFC, GB, and UBRB declare no competing interests. CT, PAL, PC, HGC, and HTW are affiliated with GeneReach USA, Lexington, MA. However, this does not alter our adherence to BMC Infectious Diseases policies on sharing data and materials.
8,846.6
2017-12-19T00:00:00.000
[ "Medicine", "Biology" ]
Towards a characterization of X-ray galaxy clusters for cosmology In the framework of the hierarchical model the intra-cluster medium properties of galaxy clusters are tightly linked to structure formation, which makes X-ray surveys well suited for cosmological studies. To constrain cosmological parameters accurately by use of galaxy clusters in X-ray surveys, a better understanding of selection effects related to the detection method is needed. We aim at a better understanding of galaxy cluster morphologies to include corrections between the different core types and covariances with X-ray luminosities in selection functions. We stress the morphological deviations between a newly described surface brightness (SB) profile characterization and a commonly used single $\beta$-model. We investigate a novel approach to describe SB profiles, where the excess cool-core emission in the galaxy cluster centres is modelled using wavelet decomposition. Morphological parameters and the residuals are compared to classical single $\beta$-models. Using single $\beta$-models to describe the ensemble of overall SB profiles leads on average to a non-zero bias ($0.032 \pm 0.003$) in the outer part of the clusters, i.e. a $\sim 3\%$ systematic difference in the SB at large radii. In addition $\beta$-models show a general trend towards underestimating the flux in the outskirts for smaller core radii. Fixing the $\beta$ parameter to $2/3$ doubles the bias and increases the residuals from a single $\beta$-model up to more than $40\%$. Modelling the core region in the fitting procedure reduces the impact of these two effects significantly. We find a positive scaling between shape parameters and temperature, as well as a negative correlation ($\sim-0.4$) between extent and luminosity. Our non-parametric analysis of the self-similar scaled emission measure profiles indicates no systematic core-type differences of median profiles in the galaxy clusters outskirts. Introduction Clusters of galaxies are formed from the collapse of initial density fluctuations in the early Universe and grow hierarchically to the densest regions of the large-scale structure. This makes them the most massive (M tot ∼ 10 14 −10 15 M ) gravitationally bound structures in our universe and their virialization timescales are less than the Hubble time. The gas between the galaxies, the intra-cluster medium (ICM), has been heated to temperatures 1 of several 10 7 K by gravitational collapse. The primary emission mechanism of this hot, fully-ionized thermal plasma is thermal bremsstrahlung and line emission of heavy elements, such as iron. The majority (approximately 85%) of the baryonic component is in the form of the hot ICM. Therefore, the most massive visible component can be traced by X-ray emission, which makes X-ray astronomy a great and important tool to study galaxy clusters. However, flux-limited galaxy cluster samples compiled from X-ray surveys suffer from selection effects like Malmquist bias, that is the preferential detection of intrinsically brighter sources (a more detailed discussion of different selection effect biases is compiled in e.g., Hudson et al. 2010;Giodini et al. 2013). Another form of selection effect arises from the different core-types of galaxy clusters. In the central regions of galaxy clusters, gas is able to cool more efficiently compared to the outskirts. 
Several diagnostics were proposed to identify and categorize galaxy clusters according to their different core-types, for example a central temperature decrease (Sanderson et al. 2006), mass-deposition rates (Chen et al. 2007), cuspiness (Vikhlinin et al. 2007), or surface brightness concentration (Santos et al. 2008). Galaxy clusters exhibiting cool-cores show centrally-peaked surface brightness profiles, whereas non-cool-core clusters have flat profiles. In surveys, differently shaped profiles are detected with different efficiencies. Even for the same brightness, cool-core clusters may be more easily detected since their surface brightness profiles are more peaked. Therefore, the central emission sticks out more above the background. The preferential detection of cool-core objects close to the detection threshold of flux-limited samples leads to the so-called cool-core bias (e.g., Eckert et al. 2011; Rossetti et al. 2017). It is crucial to take such selection effects into account in cosmological studies to obtain unbiased results. One possibility to quantify these biases is running the source detection chain on well-defined simulations. The quantification of the completeness, that is the fraction of detected clusters as a function of mass and redshift, requires an accurate galaxy cluster model as input for such simulations. Outside the core regions, scaled radial profiles (e.g., temperature, pressure, or entropy profiles) of galaxy clusters show a so-called "self-similar" behavior (e.g., Zhang et al. 2007; Ghirardini et al. 2019). It is believed that this is the result of a similar formation process of galaxy clusters, namely that tiny density perturbations in the early universe are amplified by gravitational instabilities and grow hierarchically, yielding the large-scale structure observed today. Galaxy clusters are then believed to correspond to the densest regions of the large-scale structure. This formation history motivated the theoretical consideration of the self-similar model (e.g., Kaiser 1986), where all galaxy clusters share the same average density and evolve with redshift and mass according to prescriptions given by spherical gravitational collapse. Therefore, galaxy cluster observables such as X-ray luminosity, spectral temperature, or gas mass are correlated with the total cluster mass. Assuming gravity is the dominant process, the self-similar model predicts simple power-law relations between those cluster observables and the total mass, so-called scaling relations (e.g., Maughan 2007; Pratt et al. 2009; Mantz et al. 2010, 2016; Maughan et al. 2012). In this work, we aim toward a proper characterization of galaxy cluster shapes using different surface brightness parameterizations. We investigate scaling relations between surface brightness parameters and temperature by use of the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS), a statistically complete, X-ray-selected, and X-ray flux-limited sample of 64 galaxy clusters compiled from the ROSAT all-sky survey (RASS, Voges et al. 1999). In addition we study the covariances between shape and other galaxy cluster parameters. The impact of the different core-types on the obtained scaling relations and covariances is quantified. The goal of this paper is to improve our understanding of galaxy cluster shapes.
This serves as a basis for simulations quantifying selection effects, amongst others, for the future X-ray all-sky survey performed by the extended Roentgen Survey with an Imaging Telescope Array (eROSITA, Merloni et al. 2012; Predehl et al. 2018). In addition, the obtained covariance matrices can be implemented in current cosmological studies using the COnstrain Dark Energy with X-ray galaxy clusters (CODEX) sample, for example. Throughout this paper a flat ΛCDM cosmology is assumed. The matter density, vacuum energy density, and Hubble constant are assumed to be Ω_m = 0.3, Ω_Λ = 0.7, and H_0 = 70 km s^−1 Mpc^−1 with h_70 := H_0/70 km s^−1 Mpc^−1 = 1, respectively. The natural logarithm is referred to as "ln" and "log" is the logarithm to base ten. All errors are 1σ unless otherwise stated. The sample The HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS, Reiprich & Böhringer 2002; Hudson et al. 2010) comprises 64 galaxy clusters, constructed from highly complete cluster catalogs based on the ROSAT all-sky survey (RASS). The final flux limit of f_X(0.1−2.4 keV) = 20 × 10^−12 erg s^−1 cm^−2 defines the X-ray-selected and X-ray flux-limited sample of the brightest galaxy clusters away from the Galactic plane. Although statistically complete, HIFLUGCS is not necessarily representative or unbiased with respect to the cluster morphology (Hudson et al. 2010; Mittal et al. 2011). Eckert et al. (2011) calculated a significant bias in the selection of X-ray clusters of about 29% in favor of centrally peaked cool-core objects compared to non-cool-core (NCC) clusters. We minimize this kind of selection effect by restricting our study to objects above a temperature of 3 keV. This temperature threshold excludes the low-mass galaxy groups, which are closer to the HIFLUGCS flux threshold and therefore have a high cool-core fraction as discussed in Sect. 1. Originally, the cluster RX J1504.1-0248 was not included in HIFLUGCS. This object is a strong cool-core (SCC) cluster that appears only marginally extended in the RASS, meaning that its extent is comparable to the ROSAT survey point-spread function (PSF). To avoid biasing our results because of the small extent of this system compared to the ROSAT PSF, this cluster is excluded from our full analysis. In total, we consider 49 galaxy clusters above our selected temperature threshold of 3 keV. Data analysis Pointed ROSAT observations were used whenever available. The pointed data are reduced with the ROSAT Extended Source Analysis Software package (Snowden et al. 1994) as described in Eckert et al. (2012). Otherwise RASS data from the public archive are used. There are four objects without pointed observations above our representative temperature threshold of 3 keV, and excluding those from the analysis does not change our results significantly. Therefore, we do not expect our results to be affected by the use of heterogeneous data. All images for count rate measurements are restricted to the ROSAT hard energy band (channels 42−201 ≈ 0.4−2.0 keV) due to higher background levels in the soft band. Luminosities are taken from Reiprich & Böhringer (2002), which means that they are based on ROSAT data and are not corrected for cooling flows. The central temperature drop of the ICM in cool-core clusters biases the estimation of the cluster virial temperature, that is the temperature of the hot gas which is in hydrostatic equilibrium with the potential well of the cluster.
This bias is a source of scatter in scaling relations related to temperature and can be minimized by excluding the central region for the temperature fitting. Since we are interested in how the galaxy cluster shape parameters scale with temperature, we adopt core-excised HIFLUGCS temperatures measured by Hudson et al. (2010) using Chandra's Advanced CCD Imaging Spectrometer (ACIS) data. We re-scale all temperatures greater than 2 keV due to the Chandra calibration package updates according to the Mittal et al. (2011) best-fit relation which links temperature measurements (in keV) between the Calibration Database (CALDB) 3.2.1 and CALDB 4.1.1. The reader is referred to the aforementioned papers in this subsection for a detailed description of individual data analysis steps. Masses We adopt M_500 values of the "Union catalog" (Planck Collaboration XI 2016) calculated by Planck Sunyaev-Zel'dovich (SZ) observations, which contains detections with a minimum signal-to-noise of 4.5. We note that SZ mass uncertainties are small due to being purely statistical and are not propagated when rescaling radii. The advantage of using masses calculated by the use of the Y_SZ−M relation in this study is that they are statistically not covariant with X-ray parameters and less affected by galaxy cluster core states (Lin et al. 2015). Above our selected temperature threshold of 3 keV there are four galaxy clusters without counterpart in the Planck catalog (Hydra-A, A1060, ZwCl1215, A2052). These clusters are rejected for studies that require characteristic masses or radii. Assuming spherical symmetry, the galaxy cluster masses can be transformed into r_500 values according to r_500 = [3 M_500 / (4π · 500 ρ_crit,z)]^(1/3). (2) Surface brightness profiles Using King's (1962) analytical approximation of an isothermal sphere, measured X-ray surface brightness profiles of galaxy clusters are well described by a β-model (Cavaliere & Fusco-Femiano 1976), S(R) = Σ_{j=1}^{N} s_0,j [1 + (R/r_c,j)^2]^(1/2 − 3β_j). (3) For each component j, s_0,j is the central surface brightness, that is, at projected radial distance R = 0; r_c,j is the core radius of the gas distribution; and the slope β_j is motivated by the ratio of the specific energy in galaxies to the specific energy in the hot gas. For galaxy clusters exhibiting a central excess emission due to the presence of cool cores, a double (N = 2) β-model can improve the agreement between model and data as one component accounts for the central excess emission while the other accounts for the overall cluster emission. However, the two components are highly degenerate and, except for very nearby galaxy clusters, the ROSAT point-spread function is insufficient to resolve the core regions since their apparent size is smaller than the PSF. Therefore, a single (N = 1) β-model is used to describe the galaxy cluster emission and the central excess emission is included in the background model. Simulations (Navarro et al. 1995; Bartelmann & Steinmetz 1996) indicate that the measured β values are biased systematically low if the range of radii used for fitting is less than the virial radius of the cluster. The advantage of using ROSAT PSPC data to determine the surface brightness profiles is in the large field-of-view and the low background, allowing the galaxy cluster emission to be traced to relatively large radii. Wavelet decomposition We use a wavelet decomposition technique as described in Vikhlinin et al. (1998). The technique is implemented as the wvdecomp task of the publicly available ZHTOOLS package.
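The mass-to-radius conversion in Eq. (2) above can be evaluated directly with astropy; a minimal sketch follows, using the cosmology adopted in this work and a placeholder mass and redshift (both are assumptions for illustration only).

```python
# Sketch of Eq. (2): converting an SZ mass M_500 into r_500 for flat LCDM with
# H0 = 70 km/s/Mpc and Om = 0.3. The example mass and redshift are placeholders.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def r500_from_m500(m500, z):
    """r_500 = [3 M_500 / (4 pi * 500 rho_crit(z))]^(1/3)."""
    rho_crit = cosmo.critical_density(z)
    r = (3 * m500 / (4 * np.pi * 500 * rho_crit)) ** (1 / 3)
    return r.to(u.Mpc)

# A 5e14 solar-mass cluster at z = 0.05 gives roughly 1.2 Mpc
print(r500_from_m500(5e14 * u.Msun, 0.05))
```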
The basic idea is to convolve the input image with a kernel which allows the isolation of structures of a given angular size. Particular angular sizes are isolated by varying the scale of the kernel. The wavelet kernel on scale i used in wvdecomp is approximately the difference of two Gaussians, isolating structures in the convolved image of a characteristic scale of approximately 2^(i−1). The input image is convolved with a series of kernels with varying scales, starting with the smallest scale. In each step, significantly detected features of the particular scale are subtracted from the input image before going to the next scale. This allows, among other things, the decomposition of structures of different sizes into their components, for instance in the case of point-like sources in the vicinity of an extended object. Wavelet kernels have the advantage of a simple linear back transformation, meaning that the original image is the sum of the different scales. We define a scale around 0.2 r_500 up to which all emission from smaller scales is classified as contamination and is included in our background modeling for the core-modeled single β-model approach. The galaxy clusters' 0.2 r_500 wavelet scales are around 3−5 (2−3) for pointed (survey) observations. This corresponds to 4−16 pixels (2−4 pixels), with a pixel size of 15″ (45″). The detection threshold of a wavelet kernel convolved image is the level above which all maxima are statistically significant. Vikhlinin et al. (1998) performed Monte-Carlo simulations of a flat Poisson background to define detection thresholds such that one expects on average 1/3 false detections per scale in a 512 × 512 pixel image. We adopt a slightly more stringent threshold of 5σ. Likelihood function Under the assumption that the observed counts are Poisson distributed, the maximum-likelihood estimation statistic to estimate the surface brightness profile parameters is chosen to be the Poisson likelihood. The so-called Cash statistic (Cash 1979) is derived by taking the logarithm of the Poisson likelihood function and neglecting the constant factorial term of the observed counts, C = 2 Σ_i (M_i − O_i ln M_i), (4) where M_i and O_i are the model and observed counts in bin i, respectively. The model counts of the background sources using wavelet decomposition, B_wv,i, are not Poissonian. We assume this background component to be without error, meaning that only the total amount of counts shows dispersion. Thus, we can add this background component to the model counts (Greiner et al. 2016). In the same way, we add an additional particle background component, B_p,i, to Eq. (4) for pointed observations. Then, the likelihood function becomes C = 2 Σ_i [M_i + B_wv,i + B_p,i − O_i ln(M_i + B_wv,i + B_p,i)]. (5) A single β-model plus constant background model is used to describe the surface brightness of each cluster (see Eq. (3), using N = 1 and dropping the index j), s(R) = s_0 [1 + (R/r_c)^2]^(1/2 − 3β) + b, (6) where b is the constant background level. The projected radii, R_i, are placed at the center of the bins. By use of the exposure map, we calculate the proper area, α_i, and the vignetting-corrected mean exposure time. The model counts in each bin, M_i, are then calculated by multiplying Eq. (6) by the corresponding area and mean exposure time. Point-spread function The ability of an X-ray telescope to focus photons, in other words its response to a point source, is characterized by its point-spread function. More peaked cool-core objects are affected more by PSF effects compared to non-cool-core objects. The ROSAT PSF depends, among other things, on photon energy, off-axis angle, and observation mode. A detailed description of the ROSAT PSF functions is presented in Boese (2000).
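A minimal sketch of the fit statistic described in the likelihood section above is given below: a single β-model plus constant background converted to predicted counts per annulus and compared to the observed counts with the Cash statistic. All numerical values are synthetic placeholders, and the optimizer choice is an assumption rather than the method used in the paper (which uses MCMC sampling).

```python
# Sketch: fit a single beta-model plus constant background to a binned profile
# by minimising the Cash statistic C = 2 * sum(M_i - O_i ln M_i). Wavelet-detected
# contaminating counts B_wv,i would simply be added to the model counts M_i.
import numpy as np
from scipy.optimize import minimize

def beta_model(R, s0, rc, beta, bkg):
    """Surface brightness of a single beta-model plus constant background."""
    return s0 * (1.0 + (R / rc) ** 2) ** (0.5 - 3.0 * beta) + bkg

def cash(params, R, counts, area, exposure, b_wv=0.0):
    s0, rc, beta, bkg = params
    model_counts = beta_model(R, s0, rc, beta, bkg) * area * exposure + b_wv
    return 2.0 * np.sum(model_counts - counts * np.log(model_counts))

# Hypothetical binned profile (radii in arcmin, counts per annulus)
R = np.linspace(0.5, 30, 60)
area, exposure = 2 * np.pi * R * 0.5, 400.0          # annulus area, mean exposure (s)
rng = np.random.default_rng(1)
counts = rng.poisson(beta_model(R, 0.05, 3.0, 0.65, 1e-3) * area * exposure)

res = minimize(cash, x0=[0.03, 2.0, 0.6, 5e-4], args=(R, counts, area, exposure),
               method="L-BFGS-B",
               bounds=[(1e-6, 1.0), (0.1, 20.0), (0.3, 1.5), (1e-8, 1e-2)])
print("best-fit (s0, rc, beta, bkg):", res.x)
```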
We use the Python package pyproffit (https://github.com/domeckert/pyproffit) to calculate PSF mixing matrices based on Eqs. (7) and (30) of Boese (2000) for pointed and survey observations, respectively. These matrices are folded into our surface brightness profile fitting method to obtain PSF-unconvolved parameters. Emission measure profiles This subsection describes our approach to obtain background-subtracted self-similar scaled emission measure profiles. First, the outer significance radius and background level of each galaxy cluster are iteratively determined using the growth curve analysis method (Böhringer et al. 2000; Reiprich & Böhringer 2002). The outer significance radius determines the maximum radius out to which galaxy cluster emission is detected and thus to which radius each profile is extracted. Background-subtracted and logarithmically binned surface-brightness profiles are converted into emission measure profiles using the normalisation of a partially absorbed Astrophysical Plasma Emission Code (APEC) model, norm = 10^−14 / (4π [D_A(1 + z)]^2) ∫ n_e n_H dV. The total weighted hydrogen column density (calculated with the method of Willingale et al. 2013; http://www.swift.ac.uk/analysis/nhtot/index.php) is used to describe the absorption by the atomic and molecular Galactic column density of hydrogen. Metallicities are fixed to 0.35 Z_⊙ and the abundance table compiled by Anders & Grevesse (1989) is used. The emission measure along the line of sight, EM = ∫ n_e n_H dl, is self-similar scaled according to Arnaud et al. (2002), using the self-similar relation T ∝ (E(z)M_500)^(2/3) (10) and the density contrast ∆_z, which is calculated using the critical density contrast, ∆_c, and the matter density parameter at redshift z, Ω_z = Ω_m(1+z)^3/E(z)^2. Under the assumption that the cluster has just virialized, Bryan & Norman (1998) derived an analytical approximation of ∆_c for a flat universe from the solution to the collapse of a spherical top-hat perturbation, ∆_c = 18π^2 + 82w − 39w^2, with w = Ω_z − 1. Scaling relations In this subsection we describe the basic principle of our linear regression routine to obtain scaling relations. A set of two variates, x and y, is fitted by a power-law relation according to log(y/n_y) = m · log(x/n_x) + b. (14) The pivot elements, n_x and n_y, are set to the median along a given axis, such that the results for the slope and normalisation are approximately uncorrelated. Likelihood function Linear regression of the scaling relations is performed using a Markov chain Monte Carlo (MCMC) posterior sampling technique. We adopt an N-dimensional Gaussian likelihood function, extended compared to Kelly (2007) to account for intrinsic scatter correlation. The intrinsic scatter tensor, Λ, is described in more detail in Sect. 3.4.2. The uncertainty tensors Σ_n account for measurement errors in the independent and the dependent variables, and r_n denote the residual vectors. For illustration purposes, in a bivariate example the uncertainty tensor is a 2 × 2 matrix containing the squared measurement errors of x and y, and the residual vector contains the offsets of the measured values from the true coordinates. In this study, the correlation between different measurement errors in the uncertainty tensor is set to zero. The "true" coordinate x̂_n is normal-scattered according to the intrinsic scatter tensor. We integrate out, which means that we marginalize over, x̂_n and ŷ_n. The scatter along the independent axis, λ_x, of the intrinsic scatter tensor is fixed to avoid degeneracies. This means that for this study the intrinsic scatter in temperature is fixed to 20%, that is λ_T = 0.11 (Kravtsov et al. 2006).
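The cosmological factors entering the self-similar scaling above (E(z), Ω_z, and the Bryan & Norman 1998 density contrast) are standard; a short sketch, assuming the flat ΛCDM parameters adopted in this work:

```python
# Sketch of the self-similar scaling ingredients: E(z), Omega_z, and the
# Bryan & Norman (1998) approximation for the virial density contrast Delta_c.
import numpy as np

OM, OL = 0.3, 0.7

def E(z):
    """Dimensionless Hubble parameter for flat LCDM."""
    return np.sqrt(OM * (1 + z) ** 3 + OL)

def omega_z(z):
    return OM * (1 + z) ** 3 / E(z) ** 2

def delta_c(z):
    """Bryan & Norman (1998): Delta_c = 18 pi^2 + 82 w - 39 w^2, with w = Omega_z - 1."""
    w = omega_z(z) - 1.0
    return 18 * np.pi ** 2 + 82 * w - 39 * w ** 2

for z in (0.0, 0.05, 0.1):
    print(f"z = {z:.2f}: E(z) = {E(z):.3f}, Omega_z = {omega_z(z):.3f}, Delta_c = {delta_c(z):.1f}")
```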
The correlation between the intrinsic scatter values of the two variates x and y, λ_xy, is of particular interest for this study and will be described in more detail in Sect. 3.4.2. We use the emcee algorithm and implementation (Foreman-Mackey et al. 2013) for optimization. A chain is considered as converged when the integrated autocorrelation time is less than one-hundredth of the chain length. Covariance The linear relationship and thus the joint variability between two or more sets of random variables can be quantified by the covariance between those variates. In the simple case of two variables x and y, each with a sample size of N and expected values x̄ and ȳ, the covariance is given by cov(x, y) = (1/N) Σ_{i=1}^{N} (x_i − x̄)(y_i − ȳ). (19) The degree of correlation can be calculated by normalizing the covariance to the maximum possible dispersion of the single standard deviations λ_x and λ_y, the so-called Pearson correlation coefficient: ρ_xy = cov(x, y)/(λ_x λ_y). (20) The Pearson correlation coefficient can take values between −1 and +1, where 0 means no linear correlation and +1 (−1) means total positive (negative) linear correlation. In the general case of n sets of variables {X_1}, . . . , {X_n}, the covariances can be displayed in a matrix, where the first-order covariance matrix is defined by C_ij = cov(X_i, X_j). In the previous example of two variables x and y, the covariance matrix reads C = [[cov(x, x), cov(x, y)], [cov(y, x), cov(y, y)]] = [[λ_x^2, ρ_xy λ_x λ_y], [ρ_xy λ_x λ_y, λ_y^2]]. The latter equality makes use of Eq. (19), which implies that the covariance of a variate with itself, that is cov(x, x), reduces to the variance of x or the square of the standard deviation of x. The off-diagonal elements are rewritten by solving Eq. (20) for the covariance and using the symmetry λ_xy = λ_yx. Calculating the Pearson correlation coefficient between the ranked variables is a non-parametric measure of a monotonic relationship between the variables and is called the Spearman rank-order correlation coefficient. Selection effects As already discussed in Sect. 1, centrally peaked galaxy clusters are more likely to enter an X-ray selected sample due to their enhanced central emission. Mittal et al. (2011) investigated this effect by applying the HIFLUGCS flux limit to Monte Carlo simulated samples. Assuming that HIFLUGCS is complete, one can vary the input fractions of different core-types in the simulations to match the observed ones. The intrinsic scatter increases the normalization of the luminosity-temperature relation because up-scattered clusters have a higher chance of lying above the flux threshold. In this study, we are not trying to determine the true luminosity-temperature relation but are interested in the residuals of the sample with respect to the mean to study the intrinsic scatter covariances. Therefore, we are neglecting Malmquist bias in the parameter optimization, although it is present in HIFLUGCS. To investigate the effect of Malmquist bias on the best-fit shape-temperature relation parameters and the intrinsic scatter correlation coefficients, we artificially decrease the luminosity-temperature relation normalization and find that the differences are insignificant. Cool-core classification Hudson et al. (2010) used HIFLUGCS to compare 16 different techniques to differentiate cool-core and non-cool-core clusters. The central cooling time, t_cool, was found to be suited best and used to classify clusters into three categories. Clusters with central cooling times shorter than 1 Gyr are classified as strong-cool-core (SCC) clusters. They usually show characteristic temperature drops toward the center and low central entropies.
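Returning to the regression scheme of Sect. 3.4: the sketch below illustrates a power-law fit in log space with a free intrinsic scatter term sampled with emcee. It is a simplified version under stated assumptions (scatter and errors only along the dependent axis, synthetic data), not the full multi-dimensional likelihood with the intrinsic scatter tensor used in the paper.

```python
# Sketch: fit log(y/ny) = m*log(x/nx) + b with emcee, with measurement errors on y
# and a free intrinsic scatter lambda_y. All data points are synthetic placeholders.
import numpy as np
import emcee

rng = np.random.default_rng(2)
logx = rng.uniform(0.5, 1.1, 40)                        # e.g. log10(T/keV)
logy = 1.0 * logx + 0.2 + rng.normal(0, 0.1, 40)        # true relation + intrinsic scatter
yerr = np.full(40, 0.05)
logy += rng.normal(0, yerr)                             # add measurement noise

nx, ny = np.median(logx), np.median(logy)               # pivot elements

def log_prob(theta):
    m, b, lam = theta
    if not (0 < lam < 1 and -5 < m < 5 and -5 < b < 5):
        return -np.inf
    mu = m * (logx - nx) + b + ny
    var = yerr ** 2 + lam ** 2
    return -0.5 * np.sum((logy - mu) ** 2 / var + np.log(2 * np.pi * var))

ndim, nwalkers = 3, 32
p0 = np.array([1.0, 0.0, 0.1]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
m, b, lam = np.median(sampler.get_chain(discard=1000, flat=True), axis=0)
print(f"slope = {m:.2f}, normalisation = {b:.2f}, intrinsic scatter = {lam:.2f}")
```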
Clusters exhibiting high central entropies and cooling times greater than 7.7 Gyr are classified as non-cool-core (NCC) clusters. Clusters with intermediate cooling times, in between those of SCC and NCC clusters, are classified as weak-cool-core (WCC) clusters. We adopt the Hudson et al. (2010) classification scheme and categorization of HIFLUGCS clusters for this study. There are 45 galaxy clusters above our selected temperature threshold of 3 keV with mass estimates in the Planck "Union catalog". The number of clusters in each core-type category is 15, 16, and 14 for SCC, WCC, and NCC, respectively. For one of the SCC and three of the WCC objects, no ROSAT pointed observations are available and RASS data are used. Emission measure profiles This section describes a model-independent way to test the surface brightness (SB) behavior between different core-type populations by comparing their background-subtracted self-similar scaled emission measure (EM) profiles. We show the SB and self-similar scaled EM profiles in Fig. 1. Outside the cluster core the galaxy clusters show self-similar behavior, as already discussed in several previous studies (e.g., Vikhlinin et al. 1999). The self-similar scaled EM profiles show a smaller intrinsic scatter compared to the surface brightness profiles in the 0.2−0.5 r_500 range. It is reduced by 28% (33%) with respect to the median (weighted mean). To investigate possible differences between the core-type populations, the EM profiles are stacked according to their core types by calculating the weighted mean and the median, as shown in the top panel of Fig. 2. The bottom panel of Fig. 2 shows the ratio of the average EM profiles sorted according to their core types. The statistical errors on the weighted means or medians are very small. The error bars indicated on the plot correspond to uncertainties calculated by bootstrapping the data, that is, by resampling the profiles with repetition, repeating the operation 10,000 times and computing the median and percentiles of the output distribution. The bootstrap errors thus include information on the sample variance and non-Gaussianity of the underlying distribution. The weighted mean profiles reveal in a model-independent way the existence of subtle differences between the galaxy cluster populations. The amplitude of this effect between SCC and NCC clusters in the 0.2−0.5 r_500 radial range is up to 30% and confirms the finding of Eckert et al. (2012). Compared to the heterogeneous sample of Eckert et al. (2012), HIFLUGCS is statistically complete, which confirms the result in a more robust way. If true, this finding implies that the outskirts are affected by the core-type and a detection algorithm tailored to the galaxy cluster outskirts will be more sensitive to the more abundant NCC objects, which needs to be taken into account in selection functions. In addition one could determine the statistical likelihood for the core-state of a particular galaxy cluster. However, the asymmetric bootstrap errors indicate an underlying non-Gaussian distribution or that the sample is affected by outliers. The median profiles, which are more robust against outliers, do not reveal the same trend of the emission measure ratios. As a test, we exclude the strong cool-core profile that deviates most with respect to the median within 0.2−0.5 r_500 (Abell 3526). This cluster also shows the smallest statistical errors in this radial range.
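The bootstrap errors described above can be reproduced with a few lines of code; a minimal sketch, using a synthetic array of scaled EM profiles in place of the real data:

```python
# Sketch of the bootstrap procedure: resample the scaled EM profiles with
# repetition 10,000 times and take percentiles of the stacked median per radial bin.
import numpy as np

rng = np.random.default_rng(3)
n_clusters, n_bins = 15, 20
profiles = rng.lognormal(mean=0.0, sigma=0.3, size=(n_clusters, n_bins))  # placeholder profiles

n_boot = 10000
boot_medians = np.empty((n_boot, n_bins))
for i in range(n_boot):
    idx = rng.integers(0, n_clusters, n_clusters)        # resample clusters with repetition
    boot_medians[i] = np.median(profiles[idx], axis=0)

median_profile = np.median(profiles, axis=0)
lo, hi = np.percentile(boot_medians, [16, 84], axis=0)   # 1 sigma-equivalent interval
print(median_profile[:3], lo[:3], hi[:3])
```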
The weighted mean ratios resemble the trend of the median well when this single cluster is excluded from the analysis. This reveals that the weighted mean is driven by an outlier with small statistical errors. Therefore, we conclude that there is no indication of systematic core-type differences in the galaxy clusters' 0.2−0.5 r_500 radial range. Nevertheless, this comparison is useful because outliers like Abell 3526 will affect the selection function. The investigation of a possible redshift evolution of this analysis is left to a future study. Beyond 0.5 r_500 the difference between SCC and NCC clusters becomes larger. Eckert et al. (2012) discussed gas redistribution between the core region and the outskirts as a possible explanation. In this scenario, the injected energy due to a merging event flattens the density profile of interacting objects. Assuming that NCC clusters are more likely to have experienced a recent merging event, their self-similar scaled EM profiles would be different compared to CC objects. Another explanation could be the current accretion of large-scale blobs. Clusters with higher mass accretion rates show a larger fraction of non-thermal pressure in simulations (Nelson et al. 2014). Again assuming that NCC objects are merging clusters, the discrepancies can be explained by the different non-thermal energy content. However, this scenario seems unlikely since we expect to detect such structures in our wavelet images. An additional explanation can be that the dark matter halos of NCC and CC objects have different shapes and thus a different concentration at a given radius. Large-scale center and ellipticity The wavelet decomposition allows us to study galaxy cluster parameters on large scales. The scale around 0.2 r_500 is used to determine the center and ellipticities using the SExtractor program (Bertin & Arnouts 1996). This minimizes the impact of the different core states on these parameters. The chosen scale contains most of the cluster counts outside the core region and is therefore a good tracer for large-scale properties. Including scales up to around 0.5 r_500 (an additional 1−3 scales) shows that the median and mean differences in ellipticity are just about 10%. The ellipticity, e, of an ellipse with major axis, e_1, and minor axis, e_2, is defined as e = 1 − e_2/e_1 and is shown as a function of the cooling time in Fig. 3. The two parameters do not show a significant correlation (Spearman rank-order correlation coefficient of 0.17). The medians and sample standard deviations in the three bins are (0.24, 0.28, 0.25) and (0.08, 0.12, 0.14), respectively. There are no highly elliptical clusters with short cooling times. Therefore, selecting clusters above an ellipticity of approximately 0.3 creates a sample without SCC objects. The universality of this result needs to be confirmed with galaxy cluster samples of larger sizes. We perform a similar stacking analysis as in Sect. 4, where the sample is divided into three sub-classes according to ellipticity, rather than core-state. The weighted mean and median profiles are shown in Fig. 4. The median profiles do not show a difference between the sub-classes except in the core region and the very outskirts. We quantify the covariance between ellipticity and core radius in kpc. For a core-modeled single β-model and fixed β-parameter, the Spearman rank-order correlation coefficient is 0.20, which is not a strong indication of a correlation. Allowing the β-parameter to vary introduces a small positive correlation coefficient of 0.33.
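The ellipticity measurement above follows a SExtractor-like moments approach; the sketch below shows the basic idea (semi-axes from second-order intensity moments, then e = 1 − e_2/e_1) on a synthetic elliptical blob, as an illustration rather than the exact pipeline used in the paper.

```python
# Sketch of an ellipticity measurement from image second-order moments, in the
# spirit of SExtractor's A/B parameters. The input image is a synthetic blob.
import numpy as np

def ellipticity(img):
    y, x = np.indices(img.shape)
    w = img.clip(min=0)
    xb, yb = np.average(x, weights=w), np.average(y, weights=w)     # barycentre
    x2 = np.average((x - xb) ** 2, weights=w)
    y2 = np.average((y - yb) ** 2, weights=w)
    xy = np.average((x - xb) * (y - yb), weights=w)
    # Eigenvalues of the moment matrix give the squared semi-axes (A^2, B^2)
    t, d = x2 + y2, np.sqrt(((x2 - y2) / 2) ** 2 + xy ** 2)
    a2, b2 = t / 2 + d, t / 2 - d
    return 1.0 - np.sqrt(b2 / a2)

yy, xx = np.indices((200, 200))
blob = np.exp(-(((xx - 100) / 30.0) ** 2 + ((yy - 100) / 20.0) ** 2))  # 3:2 axis ratio
print(f"e = {ellipticity(blob):.2f}")    # expect roughly 1 - 20/30 = 0.33
```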
We see a similar behavior for best-fit core radii using a single β-model. Clusters with large cooling times show a large range of 2D ellipticities. Assuming that they originate from the same population in 3D, the large scatter of ellipticity might be explained by projection effects, meaning that triaxial halos with random orientations will yield a wide range of observed 2D ellipticities. The verification of this effect using simulations is left to a future study. Considering ellipticity adds supplementary information since it is not significantly correlated with the core radius. The ellipticity in the galaxy cluster outskirts traces the amount of baryon dissipation (Lau et al. 2012). Thus, ellipticity is linked and may help to constrain cosmological parameters like the amplitude of the matter power spectrum, σ 8 . Halos form later for lower σ 8 values and are therefore on average more elliptical (Allgood et al. 2006;Macciò et al. 2008). In addition, measuring ellipticity on several scales allows to indirectly study largescale gas rotation in galaxy clusters (Bianconi et al. 2013). This allows us to test if the ICM is in hydrostatic equilibrium, an often made assumption in X-ray mass measurements. This makes the ellipticity an interesting survey measure for the future eROSITA X-ray all-sky survey, where we expect a range of ellipticities. The ratio of the weighted mean profiles between objects with low and high ellipticity shows a slight offset. Since this offset is not visible in the median profile, it cannot solely be explained by the larger non-Gaussianity per bin of elliptical surface brightness distributions extracted in circular annuli. Hashimoto et al. (2007) studied the relationships between the X-ray morphology and several other cluster properties using a heterogeneous sample of 101 clusters taken from the Chandra archive, out of which 18 objects are represented in HIFLUGCS. The ellipticity measurements are in agreement with our study within a factor of approximately two, except for A0399. For this cluster, the ellipticity calculated by Hashimoto et al. (2007) is a factor of 5.6 larger (0.284 compared to 0.051). We note that the semi-major and semi-minor axis are calculated in the same way, that is by the 2nd-order moments but we subtracted the central excess emission of the cluster before. Including the central excess emission brings our results in better agreement with Hashimoto et al. (2007), especially in the case of A0399. For a single β-model description of the surface brightness profile, they find a slightly larger Spearman rank-order correlation coefficient of 0.37, compared to 0.26, between ellipticity and core radius in kpc. Analysis of the residuals In this section we quantify how well the different model parameterizations represent the underlying HIFLUGCS surface brightness profiles. All profiles are fitted over the full extracted radial range (see Fig. D.1) using an MCMC posterior sampling technique, taking uncertainties in the measured surface brightness into account. In case of the core-modeled single β-model all contaminating sources, including the excess emission in the cluster core, are modeled according to Eq. (5). For the single β-model fitting procedure we perform a classical approach of masking contaminating sources that are detected on the wavelet images. This includes point-like sources or extended emission like substructure but not emission from the cluster core. 
Thus the likelihood functions for the two cases are different because for a single β-model, B wv,i in Eq. (5) is equal to zero for all i. The choice of priors is discussed in Appendix B. The background is modeled with an additive constant. We note that not subtracting the background before fitting introduces a positive degeneracy between the best-fit slope of the β-model and the background level. In this framework, residuals are defined as (data-model)/model, meaning that positive (negative) residuals indicate that the model under-(over-) predicts the observed surface brightnesses. We focus particularly on the 0.2−0.5 r 500 radial range because most of the galaxy cluster counts outside the non-scalable core regions are expected there. Discussion: Single β-models A single β-model is a widely used description to fit surface brightness profiles, especially in the low statistics regime. Therefore, it is also commonly used to detect extended sources in X-ray surveys. In Fig. 5 (top left panel), we show the fractional median of the residuals from a single β-model in the 0.2−0.5 r 500 radial range as function of core radius. There is a clear trend of positive residuals toward smaller core radii. This means that the flux in the outskirts is systematically under-predicted, especially for SCC objects (see Sect. 6.3). In addition, the core radii of objects that exhibit cool-cores are systematically biased low since a single β-model lacks degrees of freedom to model the central excess emission. Due to higher photon statistics in the core region, a single β-model fit tends to be driven by the inner radial bins. As an additional test we reduce the weight of the core region in the fitting procedure by assuming that the variance in the Gaussian likelihood function is underestimated by a given fractional amount f = 0.1. The qualitative behavior of the residuals remains the same. In cases where the cluster outskirts are poorly fitted, the background level is not determined properly as well because its level compensates for the poor fit in the outskirts. This influences the β value determination as already discussed. The median of the β parameters is 0.59 and thus smaller than the often assumed generic value of 2/3. The best-fit values of the surface brightness parameters are in good agreement with Reiprich & Böhringer (2002). The single β-model residuals for individual clusters have a wavy form. We investigate different functional forms to describe the common deviations from a single β-model, but no significant common form of the residuals could be found. The scatter around zero and bias in the 0.2−0.5 r 500 radial bin are 0.092 and 0.032 ± 0.003, respectively. The non-zero bias reflects the same finding as discussed above. The residuals in the 0.2−0.5 r 500 radial range from a single β-model with fixed β parameter (β = 2/3) is shown in the bottom left panel of Fig. 5. The amplitude of the residuals for SCC objects increases up to over 40%. Fixing the β parameter increases the scatter and bias to 0.161 and 0.066 ± 0.003, respectively. Discussion: Core-modeled single β-models The negative effects of a single β-model as described in Sect. 6.1 are reduced when modeling the excess core emission by adding the counts on the small scales of the wavelet decomposition to the model counts in the Poisson likelihood function (see Sect. 3.1.2). The residuals in the 0.2−0.5 r 500 radial range are shown in Fig. 5 (top right panel). 
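The residual diagnostic used throughout this section, (data − model)/model summarized over the 0.2−0.5 r_500 range, is simple to compute; a minimal sketch with placeholder inputs:

```python
# Sketch of the fractional residual summary in the 0.2-0.5 r500 radial range.
# The profile, model, and r500 below are placeholders, not measured values.
import numpy as np

def median_fractional_residual(R, sb_data, sb_model, r500):
    resid = (sb_data - sb_model) / sb_model
    mask = (R >= 0.2 * r500) & (R <= 0.5 * r500)
    return np.median(resid[mask])

R = np.linspace(0.05, 1.5, 100)                                # Mpc
r500 = 1.2                                                     # Mpc (placeholder)
sb_model = (1 + (R / 0.25) ** 2) ** (0.5 - 3 * 0.65)           # single beta-model shape
rng = np.random.default_rng(4)
sb_data = sb_model * (1 + rng.normal(0, 0.05, R.size) + 0.03)  # 3% positive offset injected

print(f"median residual (0.2-0.5 r500): "
      f"{median_fractional_residual(R, sb_data, sb_model, r500):+.3f}")
```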
The measured residuals scatter around zero and there is no bias between individual core types visible. The scatter around zero in the 0.2−0.5 r 500 radial bin is slightly reduced compared to the single β-model (0.074). Most importantly, the bias gets more consistent with zero (−0.004 ± 0.003). In case of a fixed β parameter (bottom right panel of Fig. 5), the bias of a core-modeled single β-model is consistent with zero (−0.001 ± 0.003) and the scatter is slightly increased (0.087). The median of the β parameters is larger compared to the single β-model case and with 0.696 close to the generic value of 2/3. To verify our core-modeling method and to confirm its results we excise the core region (<0.2 r 500 ) in the single β-model fitting procedure. This is an independent way to avoid the single β-model fit to be driven by the non-scalable core emission and to reduce residuals between model and data in the cluster outskirts. In more than 90% of the cases, we find that this approach delivers comparable best-fit parameters for β and core-radii as when modeling the core emission. Due to excluding data, the constraining power is reduced and the degeneracy of the shape parameters with the β-model normalization is larger. In several cases, this results in a larger mismatch of the β-model fluxes compared to the real flux of the objects. corresponding radial range. For each model, we calculate the cluster count rate by integrating a single β-model with the corresponding best-fit parameters in a given radial range, that is 0 − r x (the outer significance radius) for the overall model flux and 0.2−0.5 r 500 for the flux in the outskirts. We are interested in flux ratios, in which the count rate to flux conversion factors of individual clusters cancel each other out. We calculate the total flux of the single β-model including the central excess emission and subtract the wavelet-detected central excess emission in the coremodeled case. The median of the measured fluxes in the outskirts and the overall fluxes of the NCC and WCC objects agree very well with each other. The measured overall single β-model fluxes of the SCC objects are on average approximately 23% larger compared to the measured overall core-modeled fluxes and the flux ratio has an intrinsic scatter of approximately 14% around the median. The median accuracy between model and measured total flux is within approximately 4% (2%) for the (core-modeled) single β-model, regardless of the core-type or if the β value is free to vary or fixed to 2/3. The flux in the outskirts of the core-modeled single β-model has the same median accuracy of 2%. For the single β-model, the accuracy stays at the 4% level for WCC and NCC objects. For SCC objects, for the single beta-model case, the flux in the outskirts is biased low by 6% and increases to 10% when β is fixed to 2/3; meaning that the bias is 3−5 times larger than for the core-modeled case. In all cases, the intrinsic scatter values of the ratios are below 6%. The Spearman rank-order correlation coefficients do not reveal a significant correlation between the flux underestimation and the cluster temperature. Scaling relations This section describes how the shape parameters scale with temperature and how they correlate with luminosity. In Fig. 6, we show the core radius as a function of temperature, where the β parameter is fixed to 2/3 to avoid degeneracies between the surface brightness parameters. 
We note that the overall picture does not change when fixing β to the median of the full population or the medians of the individual core-type populations. To account for a possible temperature dependence, the core radii are divided by the square-root of the corresponding cluster temperatures (see below for more details). The figure emphasizes the systematic differences between the individual core types in the modeling with a single β-model. The discrepancies get less prominent when modeling the core region using the wavelet decomposition. In addition, the intrinsic scatter is reduced by 8%, 11%, and 35% for NCC, WCC, and SCC objects, respectively. In both modeling cases, the Spearman rank-order correlation coefficients indicate a positive correlation between core radius and temperature. We determine scaling relations as outlined in Sect. 3.4. The best-fit relations between shape parameters and temperature (Fig. 7), as well as luminosity and temperature, are determined simultaneously. This allows studying the covariances between shape and luminosity from the joint fit. The best-fit values are shown in Tables 1 and 2. In Fig. 8, we show the correlation between the shape parameters and luminosity. The Spearman rank-order correlation coefficients between luminosity and β, as well as luminosity and core radius, are 0.37 and 0.12 (0.32 and 0.22) for the (core-modeled) single β-model, respectively. When the β value is fixed to 2/3, the luminosity-core radius correlation coefficients become mildly negative with −0.23 (−0.22). The best-fit parameters of the single β-model and core-modeled single β-model scaling relations agree approximately on the 1σ level, except that in the latter case the intrinsic scatter value of the core radius is reduced by almost a factor of two. In both modeling approaches the shape parameters show a positive correlation with temperature. The shape parameters of galaxy clusters are often fixed to generic values (e.g., β = 2/3) or scaling relations (e.g., r c ∝ r 500 , Pacaud et al. 2018). The assumption that the core radius is proportional to r 500 , together with Eq. (10), results in a self-similar scaling of r c ∝ T 1/2 . We find a core radius-temperature relation with a marginally steeper slope of 1.04 ± 0.37 (0.75 ± 0.20) for the (core-modeled) single β-model compared to this expectation. Fixing the value of β to 2/3 results in 0.77 ± 0.14 (0.50 ± 0.16), consistent with the self-similar value when modeling the excess core emission. We study the correlation coefficients (Table 2) between the different galaxy cluster parameters by simultaneously fitting for the scaling relations and the intrinsic scatter tensor. We expect some degeneracy between the single β-model parameters in the fitting, amongst others that a larger core radius is compensated by a steeper slope. This is reflected in the strong positive correlation between core radius and β. We do not find a significant correlation between β and luminosity. There is a strong negative correlation between core radius and luminosity, meaning that at a given temperature, more luminous objects tend to be more compact. For all modeling cases, the best-fit correlation coefficients are consistent on a 1σ level. This implies that the measured correlation between core radius and luminosity is not significantly affected by modeling the central excess emission.
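As a concrete illustration of the rank correlations and scaling-relation slopes quoted here, the following minimal Python sketch fits a power-law core radius-temperature relation and computes a Spearman coefficient on synthetic data; the numbers, sample size and the simple least-squares fit are placeholders and do not reproduce the paper's simultaneous fit with an intrinsic-scatter tensor.

```python
# Sketch: power-law scaling relation and Spearman rank correlation on synthetic data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
kT = rng.uniform(3.0, 12.0, 50)                                # keV (synthetic)
rc = 80.0 * (kT / 5.0) ** 0.5 * rng.lognormal(0.0, 0.3, 50)    # kpc, r_c ~ T^0.5 plus scatter

# Spearman rank-order correlation between core radius and temperature
rho, pval = spearmanr(rc, kT)

# Power-law fit r_c = A * (T / 5 keV)^B, done linearly in log-log space
x, y = np.log(kT / 5.0), np.log(rc)
B, lnA = np.polyfit(x, y, 1)
scatter = np.std(y - (B * x + lnA))                            # scatter of the log-residuals

print(f"Spearman rho = {rho:.2f} (p = {pval:.1e})")
print(f"best-fit slope B = {B:.2f}, normalization A = {np.exp(lnA):.1f} kpc, log-scatter = {scatter:.2f}")
```

In the paper itself these quantities come from the joint fit described above, which also yields the core radius-luminosity covariance.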
Neither this covariance nor the shape-temperature relations are taken into account in existing simulations for eROSITA but may play a crucial role in understanding selection effects related to the detection of clusters in X-ray surveys, which is a key ingredient for using X-ray galaxy clusters as a precision cosmological probe. The findings presented here can be used to perform more realistic simulations and a comparison between different sets of simulations allows to study the impact of these covariances on obtained cosmological parameters. Summary X-ray morphologies of galaxy clusters play a crucial role in the determination of the survey selection function. We compare selfsimilar scaled emission measure profiles of a well defined galaxy cluster sample (HIFLUGCS) above a representative temperature threshold of 3 keV. One outlier (Abell 3526) with small statistical errors drives the weighted mean profiles of sub-populations according to different core properties toward a different behavior of strong cool-core and non-cool-core objects in the 0.2−0.5 r 500 radial range. Excluding this object from the analysis or calculating the median profiles reveals no systematic difference in the aforementioned radial range. We conclude that there is no indication for a correlation between the behavior in the 0.2−0.5 r 500 radial range and the core state, although the overall shapes of the SCC and NCC populations are different. The median SCC profile shows a larger normalization toward the center and is steeper compared to the median NCC profile. This leads to a turnover of the profile ratio at approximately 0.3 r 500 . The difference in the center can be explained by the core state but the difference in the outskirts is still under debate. As discussed in Sect. 4 possible explanations are gas redistribution between the core region and the outskirts, the current accretion of large-scale blobs, or different shapes of the dark matter halos leading to different concentrations at a given radius. Characterizing galaxy cluster surface brightness profiles with single β-models is still state-of-the-art in the determination of selection functions. We investigate the residuals of a single β-model fit to the overall cluster profile, revealing that this description tends to underestimate the flux in the galaxy cluster outskirts for less extended clusters. Fixing the β parameter to 2/3 increases this effect dramatically, that is up to over 40%. In both cases, the core-radius measurement for SCC objects are biased low. In addition, the intrinsic scatter values with respect to the medians of the self-similar scaled extent parameters show a more than 1σ tension between strong and non-cool-core objects. These three effects can be minimized by adapting a wavelet decomposition based surface brightness modeling that is sensitive to the galaxy cluster outskirts and models the excess emission in the core region. Then, the fit is not driven by the local processes in the core. Compared to a single β-model approach the residuals in the 0.2−0.5 r 500 radial range are much smaller and the core radii depend much more mildly on the core state. Our method to model the excess core emission has very interesting applications for future galaxy cluster surveys, for example with eROSITA. The performance study for high redshift objects with small angular extent and establishing the most robust method for clusters where r 500 is unknown is left to a future work. 
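To make the residual diagnostics discussed in this summary concrete, here is a minimal sketch of a single β-model surface brightness profile and of fractional residuals, (data − model)/model, evaluated in the 0.2−0.5 r 500 range; the profile values, r 500 and best-fit parameters are hypothetical placeholders, and the MCMC treatment of the Poisson likelihood used in the paper is not reproduced.

```python
# Sketch: single beta-model surface brightness and fractional residuals in 0.2-0.5 r500.
import numpy as np

def beta_model(r, S0, r_c, beta, bkg=0.0):
    """S(r) = S0 * (1 + (r/r_c)^2)^(0.5 - 3*beta) + bkg."""
    return S0 * (1.0 + (r / r_c) ** 2) ** (0.5 - 3.0 * beta) + bkg

r500 = 1200.0                                   # kpc, placeholder
r = np.linspace(10.0, 1.5 * r500, 300)          # radial grid, kpc
data = beta_model(r, 1.0, 150.0, 0.66) * (1 + 0.05 * np.sin(r / 200.0))  # mock "observed" profile
model = beta_model(r, 1.0, 120.0, 0.62)         # mock best-fit model

# Fractional residuals: positive values mean the model under-predicts the data.
resid = (data - model) / model

# Median residual and scatter restricted to the 0.2-0.5 r500 region used in the paper.
mask = (r >= 0.2 * r500) & (r <= 0.5 * r500)
print(f"median residual (0.2-0.5 r500): {np.median(resid[mask]):+.3f}")
print(f"scatter around zero:            {np.std(resid[mask]):.3f}")
```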
Using wavelet decomposition allows us to determine large-scale ellipticities of the clusters. The ellipticity is an interesting new survey measure for eROSITA since its determination does not require many photon counts and it adds additional information to the β-model shape parameters and core-excised luminosities. A detailed study of measuring galaxy cluster ellipticities with eROSITA and its implications is left to a future work. We study how shape parameters and luminosity scale with temperature. There is no significant difference of the best-fit values between a single β-model and core-modeled single β-model, except that the intrinsic scatter of the core radius is almost twice as large for the single β-model case. The slope of the core radius-temperature relation is steeper than the self-similar prediction of 1/2 but comes into agreement when fixing the β parameter to 2/3 in the surface brightness profile modeling. More interestingly, the shape parameters are covariant with luminosity, meaning that at a given temperature, more compact objects are more luminous. These covariances are usually neglected in simulations to determine the survey selection function (Pacaud et al. 2007; Clerc et al. 2018). In addition, these previous studies assumed a fixed β value, while we find that β is a function of temperature. Taking shape-temperature scaling relations and shape-luminosity covariances into account will lead to a more realistic set of simulated galaxy clusters and will provide a better understanding of the survey completeness. In this section we compare our previous emission measure ratio results of Sect. 4 to profiles that are re-scaled by characteristic radii according to hydrostatic mass estimates by Schellenberger & Reiprich (2017). We adopt their preferred "NFW Freeze" model, where an NFW profile (Navarro et al. 1996, 1997) is fit to the outermost measured mass profiles of Chandra observations and a concentration-mass relation is used to reduce the degrees of freedom. We are interested in the difference between individual core types and not in the bias between the Planck and hydrostatic masses. Therefore, we assume that the bias is constant for all clusters. To probe the masses at the same radii, we recalculate the hydrostatic masses at the Planck r 500 values using the NFW mass profile, M(<r) ∝ Y(c 500 r/r 500 ), where c 500 denotes the NFW concentration parameter and Y(u) = ln(1 + u) − u/(1 + u). These recalculated masses are on average lower than the corresponding masses in the Planck catalog. The median of the NCC cluster masses is approximately 13% larger than that of the SCC objects (see Fig. A.1), increasing the SCC to NCC weighted mean ratio effect in the 0.2−0.5 r 500 range to 40%. This increase due to the dynamical states is also seen in simulations, for example in different fractions of non-thermal pressure for different mass accretion rates (Nelson et al. 2014). The differences can thus be explained by assuming that NCC clusters are merging objects and that they contain more non-thermal energy. An alternative explanation is that the differences come from the assumed concentration-mass relation. We expect different core types to have different shapes of the dark matter halo and therefore different concentrations at a given radius. However, fitting for the concentration in Schellenberger & Reiprich (2017) results in some cases in unrealistically high or low masses, potentially because of limited radial coverage due to the relatively small Chandra field-of-view.
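The mass rescaling just described can be written compactly with the quoted Y(u); the sketch below assumes only that the enclosed NFW mass scales as Y(r/r s ) with r s = r 500 /c 500 , and all numerical values are illustrative placeholders rather than catalog entries.

```python
# Sketch: rescaling an NFW hydrostatic mass to a different characteristic radius
# using Y(u) = ln(1+u) - u/(1+u). For an NFW halo, M(<r) is proportional to Y(r/r_s).
import numpy as np

def Y(u):
    return np.log(1.0 + u) - u / (1.0 + u)

def rescale_nfw_mass(M_at_r1, r1, r2, r500, c500):
    """Return M(<r2) given M(<r1) for an NFW profile with concentration c500 = r500 / r_s."""
    r_s = r500 / c500
    return M_at_r1 * Y(r2 / r_s) / Y(r1 / r_s)

# Example: hydrostatic mass measured at the X-ray r500, re-evaluated at the Planck r500.
M500_xray = 5.0e14      # Msun, placeholder
r500_xray = 1300.0      # kpc, placeholder
r500_planck = 1230.0    # kpc, placeholder
c500 = 3.2              # NFW concentration, placeholder

M_at_planck_r500 = rescale_nfw_mass(M500_xray, r500_xray, r500_planck, r500_xray, c500)
print(f"M(<r500,Planck) = {M_at_planck_r500:.2e} Msun "
      f"({100 * M_at_planck_r500 / M500_xray - 100:+.1f}% vs M500,X)")
```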
Appendix B: Priors

Table B.1. Priors used in this analysis.
Scaling relation slope: uniform in sin(Θ) (see Notes)
Scaling relation intercept: non-informative uniform
λ y (intrinsic scatter in y): pos-normal(0, 10)
λ xy (correlation coefficient between x and y): uniform[−1, 1]

Notes. The prior of the scaling relation slope is assumed to be uniform in sin(Θ) (VanderPlas 2016), with Θ being the angle between the best-fit line and the x-axis. The term "pos-normal" refers to a probability distribution that follows an ordinary normal distribution but is set to zero for negative parameter values, meaning that the parameter is restricted to be positive. We list the priors used for this analysis in Table B.1. They are chosen to be weakly or non-informative, and varying them does not influence the results of this study significantly.

Notes on the cluster properties table: Columns (2) and (3) list the equatorial coordinates of the cluster center in decimal degrees based on the large-scale wavelet image. Column (4) gives the offset to an iteratively determined two-dimensional "center of mass" using an aperture radius of 3 (Reiprich & Böhringer 2002). Columns (5) and (6) list the cluster redshift (Reiprich & Böhringer 2002) and core-excised temperature (Hudson et al. 2010), respectively. Columns (7) and (8) give the β-model slope and core radius for a core-modelled fit. Column (9) lists the luminosity in the 0.1−2.4 keV energy range. The cool-core classification according to Hudson et al. (2010) is given in column (10). Column (11) lists the characteristic radius where the density corresponds to 500 times the critical density. Galaxy clusters whose r 500 value is marked with a † do not have SZ mass estimates (Planck Collaboration XI 2016), and the Schellenberger & Reiprich (2017) mass estimate is used to determine the wavelet small scales. These clusters are excluded from further analysis steps which comprise characteristic radii. Column (12) gives the measured large-scale ellipticity. The physical to angular scale conversion at the cluster redshift is given in column (13).
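For readers implementing a similar fit, a minimal sketch of log-priors matching Table B.1 is given below; the parameter names and the half-normal width of 10 follow the table, while the exact parameterization of the slope prior (uniform in sin Θ, equivalently p(slope) ∝ (1 + slope^2)^(−3/2)) and its use inside an MCMC sampler are assumptions of this sketch.

```python
# Sketch: log-priors corresponding to Table B.1, usable with a generic MCMC sampler.
import numpy as np

def log_prior(slope, intercept, lam_y, lam_xy):
    lp = 0.0
    # Slope: uniform in sin(theta), theta = angle of the best-fit line,
    # equivalent to p(slope) proportional to (1 + slope^2)^(-3/2).
    lp += -1.5 * np.log1p(slope ** 2)
    # Intercept: non-informative uniform (improper; contributes only a constant).
    # Intrinsic scatter lam_y: positive half-normal with width 10 ("pos-normal(0,10)").
    if lam_y < 0:
        return -np.inf
    lp += -0.5 * (lam_y / 10.0) ** 2
    # Correlation coefficient lam_xy: uniform on [-1, 1].
    if not -1.0 <= lam_xy <= 1.0:
        return -np.inf
    return lp

print(log_prior(0.8, 1.2, 0.15, -0.4))   # finite value
print(log_prior(0.8, 1.2, -0.1, 0.0))    # -inf: negative scatter rejected
```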
11,929.4
2019-07-08T00:00:00.000
[ "Physics" ]
Topological Symmetry Groups for Small Complete Graphs For each $n\leq 6$, we characterize all the groups which can occur as either the orientation preserving topological symmetry group or the topological symmetry group of some embedding of $K_n$ in $S^3$. pieces, this molecule is achiral though it cannot be rigidly superimposed on its mirror form. A detailed discussion of the achirality of this molecule can be found in [3]. study the symmetries of arbitrary graphs embedded in 3-dimensional space, whether or not such graphs 30 represent molecules. In fact, the study of symmetries of embedded graphs can be seen as a natural 31 extension of the study of symmetries of knots which has a long history. 32 Though it may seem strange from the point of view of a chemist, the study of symmetries of embedded 33 graphs as well as knots is more convenient to carry out in the 3-dimensional sphere S 3 = R 3 ∪ {∞} 34 rather than in Euclidean 3-space, R 3 . In particular, in R 3 every rigid motion is a rotation, reflection, S 3 = R 3 ∪ {∞} and extend g to a homeomorphism of S 3 simply by fixing the point at ∞. It follows 48 that the topological symmetry group of Γ in S 3 is the same as the topological symmetry group of Γ in 49 R 3 . Thus we lose no information by working with graphs in S 3 rather than graphs in R 3 . 50 It was shown in [9] that the set of orientation preserving topological symmetry groups of 3-connected 51 graphs embedded in S 3 is the same up to isomorphism as the set of finite subgroups of the group of 52 orientation preserving diffeomorphisms of S 3 , Diff + (S 3 ). However, even for a 3-connected embedded graph Γ, the automorphisms in TSG(Γ) are not necessarily induced by finite order homeomorphisms of 54 (S 3 , Γ). 55 For example, consider the embedded 3-connected graph Γ illustrated in Figure On the other hand, Flapan proved the following theorem which we will make use of later in the paper. [4] Let ϕ be a non-trivial automorphism of a 3-connected graph γ which is 63 induced by a homeomorphism h of (S 3 , Γ) for some embedding Γ of γ in S 3 . Then for some embedding 64 Γ ′ of γ in S 3 , the automorphism ϕ is induced by a finite order homeomorphism, f of (S 3 , Γ ′ ), and f is 65 orientation reversing if and only if h is orientation reversing. 66 In the definition of the topological symmetry group, we start with a particular embedding Γ of a 67 graph γ in S 3 and then determine the subgroup of the automorphism group of γ which is induced 68 by homeomorphisms of (S 3 , Γ). However, sometimes it is more convenient to consider all possible 69 subgroups of the automorphism group of an abstract graph, and ask which of these subgroups can be the 70 topological symmetry group or orientation preserving topological symmetry group of some embedding 71 of the graph in S 3 . The following definition gives us the terminology to talk about topological symmetry 72 groups from this point of view. Topological symmetry groups of complete graphs For the special class of complete graphs K n embedded in S 3 , Flapan, Naimi, and Tamvakis topological symmetry groups are possible for embeddings of a particular complete graph K n in S 3 . For 89 each n > 6, this question was answered for orientation preserving topological symmetry groups in the 90 series of papers [2,[6][7][8]. 91 In the current paper, we determine both the topological symmetry groups and the orientation 92 preserving topological symmetry groups for all embeddings of K n in S 3 with n ≤ 6. 
Another way 93 to state this is that we determine which groups are realizable and which groups are positively realizable 94 for each K n with n ≤ 6. This is the first family of graphs for which both the realizable and the positively 95 realizable groups have been determined. 96 For n ≤ 3, this question is easy to answer. In particular, since K 1 is a single vertex, the only 97 realizable or positively realizable group is the trivial group. Since K 2 is a single edge, the only realizable 98 or positively realizable group is Z 2 . 99 For n = 3, we know that Aut(K 3 ) ∼ = S 3 ∼ = D 3 , and hence every realizable or positively 100 realizable group for K 3 must be a subgroup of D 3 . Note that for any embedding of K 3 in S 3 , 101 the graph can be "slithered" along itself to obtain an automorphism of order 3 which is induced 102 by an orientation preserving homeomorphism. Thus the topological symmetry group and orientation 103 preserving topological symmetry group of any embedding of K 3 will contain an element of order 3. Thus 104 neither the trivial group nor Z 2 is realizable or positively realizable for K 3 . If Γ is a planar embedding 105 of K 3 in S 3 , then TSG(Γ) = TSG + (Γ) ∼ = D 3 . Recall that the trefoil knot 3 1 is chiral while the knot 106 8 17 is negative achiral and non-invertible. Thus if Γ is the knot 8 17 , then no orientation preserving 107 homeomorphism of (S 3 , Γ) inverts Γ, but there is an orientation reversing homeomorphism of (S 3 , Γ) 108 which inverts Γ. Whereas, if Γ is the knot 8 17 #3 1 , then there is no homeomorphism of (S 3 , Γ) which 109 inverts Γ. Determining which groups are realizable and positively realizable for K 4 , K 5 , and K 6 is the main 111 point of this paper. In each case, we will first determine the positively realizable groups and then use the In addition to the Complete Graph Theorem given above, we will make use of the following results 115 in our analysis of positively realizable groups for K n with n ≥ 4. It was shown in [7] that adding a local knot to an edge of a 3-connected graph is well-defined and that 126 any homeomorphism of S 3 taking the graph to itself must take an edge with a given knot to an edge with 127 the same knot. Furthermore, any orientation preserving homeomorphism of S 3 taking the graph to itself 128 must take an edge with a given non-invertible knot to an edge with the same knot oriented in the same 129 way. Thus for n > 3, adding a distinct knot to each edge of an embedding of K n in S 3 will create an 130 embedding ∆ where TSG(∆) and TSG + (∆) are both trivial. Hence we do not include the trivial group 131 in our list of realizable and positively realizable groups for K n when n > 3. 132 Finally, observe that for n > 3, for a given embedding Γ of K n we can add identical chiral knots 133 (whose mirror image do not occur in Γ) to every edge of Γ to get an embedding Γ ′ such that TSG(Γ ′ ) = 134 TSG + (Γ). Thus every group which is positively realizable for K n is also realizable for K n . We will use 135 this observation in the rest of our analysis. Topological Symmetry Groups of K 4 137 The following is a complete list of all the non-trivial subgroups of Aut(K 4 ) ∼ = S 4 up to isomorphism: We will show that all of these groups are positively realizable, and hence all of the groups will also 140 be realizable. First consider the embedding Γ of K 4 illustrated in Figure 3. Table 2. order, then ϕ fixes at most 2 vertices. 165 We now prove the following lemma. 
We summarize our results on positive realizability for K 5 in Table 3. In order to prove D 4 is realizable for K 5 consider the embedding Γ illustrated in Figure 9. Every We obtain a new embedding Γ ′ by replacing the invertible 4 1 knots in Figure 9 with the knot 12 427 , 216 which is positive achiral but non-invertible [14]. Since 12 427 is neither negative achiral nor invertible, no 217 homeomorphism of (S 3 , Γ ′ ) can invert 1234. Thus TSG(Γ ′ ) ∼ = Z 4 . 218 Next let Γ denote the embedding of K 5 illustrated in Figure 10. Every homeomorphism of (S 3 , Γ) 226 It is more difficult to show that Z 5 ⋊ Z 4 is realizable for K 5 , so we define our embedding in two steps. 227 First we create an embedding Γ of K 5 on the surface of a torus T that is standardly embedded in S 3 . 228 In Figure 11, we illustrate Γ on a flat torus. Let f denote a glide rotation of S 3 which rotates the torus 229 longitudinally by 4π/5 and while rotating it meridinally by 8π/5. Thus f takes Γ to itself inducing the 230 automorphism (12345). 231 Figure 11. The embedding Γ of K 5 in a torus. S 3 about a (1, 1) curve on the torus T , followed 232 by a reflection through a sphere meeting T in two longitudes, and then a meridional rotation of T by 6π/5. In Figure 12, we illustrate the step-by-step action of g on T , showing that g takes Γ to itself 234 inducing (2431). The homeomorphisms f and g induce the automorphisms φ = (12345) and ψ = (2431) respectively. 238 In order to obtain the group Z 5 ⋊ Z 4 , we now consider the embedding Γ ′ of K 5 whose projection on a 239 torus is illustrated in Figure 13. Observe that the projection of Γ ′ in every square of the grid given by Γ Recall that g was the homeomorphism of (S 3 , Γ) obtained by rotating S 3 about a (1, 1) curve on the 243 torus T , followed by a reflection through a sphere meeting T in two longitudes, and then a meridional 244 rotation of T by 6π/5. In order to see what g does to Γ ′ , we focus on the square 1534 of Γ ′ . Figure 14 245 illustrates a rotation of the square 1534 about a diagonal, then a reflection of the square across a longitude. 246 The result of these two actions takes the projection of the knot 1534 to an identical projection. Thus after 247 rotating the torus meridionally by 6π/5, we see that g takes Γ ′ to itself inducing the automorphism 248 ψ = (2431). It now follows that Z 5 ⋊ Z 4 ≤ TSG(Γ ′ ) ≤ S 5 . 249 In order to prove that TSG(Γ ′ ) ∼ = Z 5 ⋊ Z 4 , we need to show TSG(Γ ′ ) ∼ = S 5 . We prove this by 250 showing that the automorphism (15) cannot be induced by a homeomorphism of (S 3 , Γ ′ ). Thus every subgroup of Aut(K 5 ) is realizable for K 5 . Table 4 summarizes our results for TSG(K 5 ). The following is a complete list of all the subgroups of Aut(K 6 ) ∼ = S 6 : and independently verified using the program GAP). Observe that every homeomorphism of (S 3 , Γ) takes the pair of triangles 123 ∪ 456 to itself, since 294 The subgroup f, g, ψ is isomorphic to D 3 × Z 3 because ψ commutes with f and gψ = ψg −1 . We 295 add the non-invertible knot 8 17 to every edge of the triangles 123 and 456 to obtain an embedding Γ 1 . 296 Now the automorphism φ = (45)(12) cannot be induced by an orientation preserving homeomorphism 297 of (S 3 , Γ 1 ). However, f , g, and ψ are still induced by orientation preserving homeomorphisms. Thus with Γ in Figure 19, we place 5 2 knots on the edges of the triangle 123 so that ψ is no longer induced. 301 Thus creating and embedding Finally f, g is isomorphic to Z 3 × Z 3 . 
If we place equivalent non-invertible knots on each edge of the triangle 123 and another set (distinct from the first set) of equivalent non-invertible knots on each edge of 456, we obtain an embedding Γ 3 with TSG + (Γ 3 ) ∼ = Z 3 × Z 3 , since Z 3 × Z 3 is a maximal subgroup of (Z 3 × Z 3 ) ⋊ Z 2 . For the next few groups we will use the following lemma. 4-Cycle Theorem. [5] For any embedding Γ of K 6 in S 3 , and any labelling of the vertices of K 6 by the numbers 1 through 6, there is no homeomorphism of (S 3 , Γ) which induces the automorphism (1234). Consider the subgroup Z 5 ⋊ Z 4 ≤ Aut(K 6 ). The presentation of Z 5 ⋊ Z 4 as a subgroup of S 6 gives the relation x −1 yx = y 2 for some elements x, y ∈ Z 5 ⋊ Z 4 of orders 4 and 5, respectively. Suppose that for some embedding Γ of K 6 , we have TSG(Γ) ∼ = Z 5 ⋊ Z 4 . Without loss of generality, we can assume ... This rules out all of the groups S 4 × Z 2 , D 4 × Z 2 and Z 4 × Z 2 as possible topological symmetry groups for embeddings of K 6 in S 3 . For the group Z 2 × Z 2 × Z 2 we will use the following result. Since h, f , and g are homeomorphisms of (S 3 , Γ), the links in a given orbit all have the same (mod 2) linking number. Since each of these orbits has an even number of pairs of triangles, this contradicts the Conway-Gordon theorem. Thus TSG(Γ) is not isomorphic to Z 2 × Z 2 × Z 2 . Hence Z 2 × Z 2 × Z 2 is not realizable for K 6 . Table 6 summarizes our realizability results for K 6 . Recall that for n = 4 and n = 5 every subgroup of S n is realizable for K n . However, as we see from Table 6, this is not true for n = 6.
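The semidirect-product relation quoted above is easy to check with explicit permutations. The sketch below uses the automorphisms φ = (12345) and ψ = (2431) from the K 5 construction; the composition convention and the choice of conjugating element (the relation x −1 yx = y 2 holds with x = ψ −1 ) are choices made only for this illustration.

```python
# Sketch: verifying the Z5 ⋊ Z4 relation and the group order with plain Python dicts.
def compose(f, g):                      # (f ∘ g)(i) = f(g(i)), i.e. apply g first
    return {i: f[g[i]] for i in f}

def inverse(f):
    return {v: k for k, v in f.items()}

def cycle(c, n=5):                      # permutation of {1..n} given as one cycle
    p = {i: i for i in range(1, n + 1)}
    for a, b in zip(c, c[1:] + c[:1]):
        p[a] = b
    return p

phi = cycle([1, 2, 3, 4, 5])            # order 5
psi = cycle([2, 4, 3, 1])               # order 4, fixes the vertex 5

# Conjugating the 5-cycle by the 4-cycle squares it (the defining relation).
lhs = compose(psi, compose(phi, inverse(psi)))
rhs = compose(phi, phi)
print("psi * phi * psi^-1 == phi^2:", lhs == rhs)

# Closing {phi, psi} under composition gives the generated subgroup; its order should be 20.
group = {tuple(sorted(p.items())) for p in (phi, psi)}
while True:
    new = {tuple(sorted(compose(dict(a), dict(b)).items())) for a in group for b in group}
    if new <= group:
        break
    group |= new
print("order of <phi, psi>:", len(group))
```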
3,508.4
2012-12-24T00:00:00.000
[ "Mathematics" ]
Spatially restricted subcellular Ca2+ signaling downstream of store-operated calcium entry encoded by a cortical tunneling mechanism Agonist-dependent Ca2+ mobilization results in Ca2+ store depletion and Store-Operated Calcium Entry (SOCE), which is spatially restricted to microdomains defined by cortical ER – plasma membrane contact sites (MCS). However, some Ca2+-dependent effectors that localize away from SOCE microdomains, are activated downstream of SOCE by mechanisms that remain obscure. One mechanism proposed initially in acinar cells and termed Ca2+ tunneling, mediates the uptake of Ca2+ flowing through SOCE into the ER followed by release at distal sites through IP3 receptors. Here we show that Ca2+ tunneling encodes exquisite specificity downstream of SOCE signal by dissecting the sensitivity and dependence of multiple effectors in HeLa cells. While mitochondria readily perceive Ca2+ release when stores are full, SOCE shows little effect in raising mitochondrial Ca2+, and Ca2+-tunneling is completely inefficient. In contrast, gKCa displays a similar sensitivity to Ca2+ release and tunneling, while the activation of NFAT1 is selectively responsive to SOCE and not to Ca2+ release. These results show that in contrast to the previously described long-range Ca2+ tunneling, in non-specialized HeLa cells this mechanism mediates spatially restricted Ca2+ rise within the cortical region of the cell to activate a specific subset of effectors. Results Simultaneous real time Ca 2+ imaging in the cytosol, ER and mitochondria. We initially sought to determine the response of mitochondria to Ca 2+ tunneling downstream of SOCE. Conceptually mitochondria represent an ideal target for Ca 2+ tunneling, as they form close membrane contact sites with the ER (Mitochondria Associated Membranes or MAMs) where IP 3 receptors are enriched 17 . Furthermore, the mitochondria are a major target of Ca 2+ signaling, and play a critical role in buffering cytosolic Ca 2+ transients. The relationship between SOCE and the mitochondria is complex and exhibits some cell specific nuances. In immune cells the mitochondria have been shown to act as a Ca 2+ sink for Ca 2+ flowing through SOCE, which in turn modulates its function [18][19][20] . The mitochondria is further well established as a low pass filter for cytosolic Ca 2+ signals and have been shown to functionally interact with SOCE and modulates its signaling 21 . Furthermore, the subcellular localization of mitochondria affects their response differentially to the source of Ca 2+ . In both HeLa cells and pancreatic acinar cells the source of Ca 2+ was shown to differentially stimulate different populations of mitochondria based on subcellular localization 22,23 . In HeLa cells mitochondria localize away from the plasma membrane 24 , making them an interesting distal target for Ca 2+ tunneling. To test whether the mitochondria act as an effective downstream effector of Ca 2+ tunneling, we imaged Ca 2+ dynamics simultaneously in the mitochondria, ER lumen and cytosol. A recently developed family of Ca 2+ sensors termed CEPIAs (Ca 2+ -measuring organelle-Entrapped Protein Indicators), can be targeted to the ER or to the mitochondria, and allow simultaneous imaging of the two compartments 25 . We thus combined R-CEPIAer (ER lumen Ca 2+ sensor) with G-CEPIA2mt (mitochondrial Ca 2+ sensor) to image Ca 2+ changes in the ER and mitochondria respectively, because their spectral properties allow for imaging cytosolic Ca 2+ as well using Fura-red (Fig. 1). 
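Raw traces from this kind of triple imaging are typically normalized to their baseline before comparison; a minimal sketch is shown below. The arrays are synthetic stand-ins for per-cell ROI intensities, and the sign inversion applied to Fura-Red reflects the fact that the 488 nm line excites its Ca 2+ -free form, so that fluorescence falls as cytosolic Ca 2+ rises.

```python
# Sketch: baseline (F/F0) normalization of simultaneously acquired fluorescence traces.
import numpy as np

def f_over_f0(trace, baseline_frames=10):
    """Normalize a fluorescence trace to the mean of its first baseline frames."""
    f0 = np.mean(trace[:baseline_frames])
    return trace / f0

t = np.arange(0, 600, 10.0)                                            # s, 0.1 Hz sampling (placeholder)
fura_red   = 1000 - 300 * np.exp(-(t - 200) ** 2 / 2e3) * (t > 150)    # mock cytosolic trace (falls with Ca2+)
g_cepia_mt = 500 + 150 * np.exp(-(t - 210) ** 2 / 2e3) * (t > 150)     # mock mitochondrial trace
r_cepia_er = 800 - 250 * (t > 200) * (1 - np.exp(-(t - 200) / 60.0))   # mock ER depletion

cyt  = 2.0 - f_over_f0(fura_red)        # inverted so that an increase reports a Ca2+ rise
mito = f_over_f0(g_cepia_mt)
er   = f_over_f0(r_cepia_er)
print(f"peak cytosolic signal: {cyt.max():.2f} (a.u.), peak mito: {mito.max():.2f}, ER minimum: {er.min():.2f}")
```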
Confocal images reveal a reticular pattern for R-CEPIAer and a more perinuclear and filamentous expression for G-CEPIA2mt (Fig. 1A), consistent with the distribution of the ER and mitochondria in HeLa cells [25][26][27] . Mobilizing Ca 2+ stores with histamine (100 µM, 30 sec) results in a simultaneous rise in Ca c 2+ and Ca m 2+ , showing that mitochondria readily detect Ca 2+ released from stores when the stores are full. Ca 2+ release was coupled to a transient depletion of ER stores, which refill due to recycling of released Ca 2+ (Fig. 1B). However, ER refilling was incomplete due to the absence of extracellular Ca 2+ and thus SOCE, resulting in a sustained low-level depletion state of the ER (Fig. 1B), which could be reversed by adding Ca 2+ to the media (not shown). The relative efficiency of the SERCA pump in capturing released Ca 2+ in the absence of Ca 2+ influx was variable from cell to cell. As indicated in Fig. 1B, the ER could partly recover from depletion without the contribution of SOCE. The recaptured amount of Ca 2+ released in the absence of extracellular Ca 2+ varied between 7% and 65%, with a mean value of 30.1 ± 5.1% (given as a percentage of the reduction in ER signal and measured before extracellular Ca 2+ re-addition) (Fig. 1B). We then used the classical SOCE protocol of irreversible inhibition of SERCA with thapsigargin (TG, 1 µM) in Ca 2+ -free solution followed by Ca 2+ addition to maximally activate SOCE and temporally isolate it. TG results in Ca 2+ release coupled to emptying of ER Ca 2+ stores but little to no increase in mitochondrial Ca 2+ (Ca m 2+ ) (Fig. 1B). Replenishing extracellular Ca 2+ results in cytosolic Ca 2+ influx through SOCE that was not associated with store refilling since SERCA is blocked (Fig. 1B). The rise in Ca 2+ recorded in the mitochondria was still significant in both cases (Fig. 1C). This confirms previous studies in HeLa cells showing limited coupling between the SOCE microdomain and the mitochondria 23,24 . The peak amplitude of the global cytosolic Ca 2+ signal was similar in response to Ca 2+ release induced by histamine or thapsigargin, or following maximal SOCE induction (Fig. 1C). The mitochondria, though, respond in a dramatically distinct way to these similar cytosolic Ca 2+ transients. Whereas mitochondria responded readily to the histamine-induced Ca 2+ rise, they responded poorly if at all to thapsigargin-induced Ca 2+ release or SOCE (Fig. 1C). This argues for the existence of spatially defined Ca 2+ microdomains, not detectable by current Ca 2+ imaging approaches, but readily sensed by different effectors. Mitochondria could be taking up Ca 2+ released from the ER directly through MAMs when the stores are full. To assess whether this is the case, we recorded at a higher scanning speed (1 Hz) to determine the temporal relationship between Ca c 2+ and Ca m 2+ . [Figure 1 caption, continued: Variations in Ca 2+ levels in the cytosol (cyt), mitochondria (Mito), and ER during store depletion induced by histamine (His, grey shading) and thapsigargin in a Ca 2+ -free media (Tg, pink shading), and when SOCE is allowed by the re-addition of Ca 2+ (2 mM, blue shading). Cytosolic Ca 2+ was monitored using Fura-Red, the mitochondria using the fluorescence signal of G-CEPIA2mt, and the ER using R-CEPIAer. Left and right panels come from two different sets of experiments. (C) Bar chart summarizing the variations in Ca 2+ levels in the cytosol and mitochondria during the protocols illustrated in A. (D) Ca 2+ dynamics in response to the reversible SERCA blocker, cyclopiazonic acid (CPA). Cells were exposed to CPA (50 µM) for 10 min in Ca 2+ -free media, and then CPA was washed away for 10 min to allow the SERCA pumps to recover their function.]
Ca m 2+ consistently followed the Ca c 2+ transient with a significant delay (3.4 ± 0.2 s, n = 54) (Supplemental Fig. 1), similar to what was reported previously 28 . The SOCE signal recorded with thapsigargin is maximal and non-physiological, since thapsigargin irreversibly blocks SERCA and prevents store refilling. To obtain a better measure of physiological SOCE, we used the reversible SERCA inhibitor cyclopiazonic acid (CPA). CPA induced a store depletion and a Ca c 2+ rise similar to thapsigargin (61 ± 2% for Tg, n = 85, and 56 ± 3% for CPA, n = 29), without any significant increase in Ca m 2+ (Fig. 1D,E). The peak cytosolic Ca 2+ release induced by CPA was slightly smaller than that induced by thapsigargin (0.64 ± 0.04 vs 0.80 ± 0.03, p < 0.05). Exposing the cells to histamine after CPA treatment in Ca 2+ -free solution resulted in a significantly reduced Ca 2+ release (Fig. 1D,E, pink bar), showing that CPA efficiently depletes Ca 2+ stores. When extracellular Ca 2+ was reintroduced, it allowed quick refilling of the stores (Fig. 1D, blue bar), and a detectable but very small SOCE as a rise in Ca c 2+ (Fig. 1D,E, blue bar). The active refilling of the stores shows that CPA was washed out effectively and that SERCA is functional. The SOCE-dependent cytosolic Ca 2+ transient detected under this experimental paradigm is orders of magnitude smaller than that detected with TG, yet it efficiently refills Ca 2+ stores (Fig. 1D,E), arguing that SERCA is effective at limiting Ca 2+ diffusion outside the SOCE microdomain into the bulk cytosol by sequestering Ca 2+ flowing through SOCE channels into the ER lumen. Consistent with this interpretation, SOCE under these conditions does not induce any mitochondrial Ca 2+ rise (Fig. 1D,E). When histamine was applied after reloading of the ER stores, it elicited a large rise in Ca c 2+ and Ca m 2+ , indicating normal function of the Ca 2+ signaling machinery and no deleterious effects of the experimental protocol (Fig. 1D,E, grey bar). Mitochondria do not respond to Ca 2+ tunneling. The above data show that the mitochondria in HeLa cells do not respond to Ca 2+ entry through SOCE. However, the experimental conditions above using CPA followed by a wash do not result in the opening of IP 3 receptors and as such would not allow for Ca 2+ tunneling as would be expected in response to agonist stimulation. To directly test whether the mitochondria act as a downstream effector of Ca 2+ tunneling, we devised a protocol to temporally isolate Ca 2+ tunneling from the Ca 2+ release phase. We depleted Ca 2+ stores with CPA (10 min) followed by a wash to release SERCA from inhibition in Ca 2+ -free conditions (Fig. 2A). This was followed by the addition of histamine (100 µM, 30 s) concurrently with extracellular Ca 2+ re-addition, thus opening IP 3 receptors and allowing for Ca 2+ tunneling (Fig. 2A). Surprisingly, histamine and Ca 2+ application extracellularly under these conditions results in a large cytosolic Ca 2+ rise of similar amplitude to that induced by histamine when Ca 2+ stores are full (Fig. 2A,B). Under these conditions, histamine cannot induce Ca 2+ release since the stores are empty due to the CPA treatment (see Fig. 1D, pink bar).
Therefore, the large Ca 2+ transient observed under these conditions requires that Ca 2+ entering the cell through SOCE be first pumped into the ER by SERCA and then released through IP 3 R, i.e. the classical tunneling pathway. To confirm that our experimental approach induces Ca 2+ tunneling, we analyzed the time course of the Ca c 2+ rise due to tunneling as compared to ER refilling. As shown in Fig. 2C, the Ca c 2+ signal due to Ca 2+ tunneling precedes the initiation of ER refilling by tens of seconds, a phase during which Ca 2+ flowing through SOCE is preferentially taken up into ER stores and release again through IP 3 R to expand the spatial spread of SOCE. Presumably, the large conductance of IP 3 Rs prevents store refilling during this phase. As IP 3 is metabolized IP 3 Rs close allowing SERCA to refill the stores (Fig. 2C). To confirm our interpretation that during Ca 2+ tunneling Ca 2+ leak through IP 3 receptors delays store refilling, we superimposed the time course of ER refilling during Ca 2+ tunneling as in Fig. 2A (blue bar) with that during Ca 2+ refilling in the absence of histamine as in Fig. 1D (blue bar). There is a clear statistically significant (p < 0.001) delay in store refilling during tunneling where the stores require 106.4 + 7.0 sec to reach half-maximal filling compared to 62.1 + 5.3 sec in the absence of histamine (Fig. 2D,E). Surprisingly though and despite its large amplitude, the Ca c 2+ rise due to tunneling did not produce a Ca 2+ rise in the mitochondria ( Fig. 2A,B; blue bar). However, when the stores where replenished, a second identical application of histamine resulted in a large rise in Ca m 2+ ( Fig. 2A,B, grey bar). This shows that Ca 2+ signals generated by Ca 2+ tunneling downstream of SOCE or by agonist-dependent mobilization when Ca 2+ stores are full are not equivalent in terms of inducing a Ca 2+ response in the mitochondria despite the fact that they result in an equivalent Ca c 2+ rise and are both mediated through IP 3 receptors. Therefore, Ca 2+ tunneling and Ca 2+ release are distinct in their ability to activate downstream effectors. These results argue that the coupling between Ca 2+ influx through SOCE, uptake by SERCA and release into the cytosol when Ca 2+ tunneling is operational is highly efficient at raising cytosolic Ca 2+ in a spatially localized fashion as it is not detected by mitochondria. We further confirmed that Ca 2+ flowing through SOCE is not limiting in terms of producing a Ca m 2+ response in these experiments by raising extracellular Ca 2+ to 10 mM during the tunneling protocol to increase Ca 2+ flow through SOCE. This results in a similar cytoplasmic signal and did not restore the mitochondrial signal (Supplemental Fig. 2). Another prediction from the Ca 2+ tunneling pathway is that it would be slower in mediating the Ca c 2+ rise as compared to Ca 2+ release on full stores. The tunnel mechanism requires two more steps as compared with Ca 2+ release: first Ca 2+ influx through SOCE channels and second uptake in the ER lumen by SERCA, independently of a potential delay required for the diffusion of Ca 2+ into the ER cisternae. It was not possible to resolve this potential delay in the imaging experiments of the three compartments simultaneously because of the slow sampling speed (0.1 Hz). To allow a higher speed of recording (1 Hz), we loaded the cells solely with the Ca 2+ indicator Fluo4-AM and performed the same protocol as in Fig. 2. 
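The refilling comparison above relies on extracting a half-maximal filling time from each ER trace; a minimal sketch of such a threshold-crossing measurement on a synthetic trace follows, and the same interpolation logic applies to the faster Fluo4 recordings analyzed next.

```python
# Sketch: time to half-maximal ER refilling from a (synthetic) R-CEPIAer trace,
# with t = 0 taken at Ca2+ re-addition and the final plateau defining 100%.
import numpy as np

def time_to_half_max(t, trace):
    """Linear-interpolated time at which a rising trace crosses 50% of its span."""
    lo, hi = trace[0], trace[-1]
    half = lo + 0.5 * (hi - lo)
    idx = np.argmax(trace >= half)                 # first sample at/above half-max
    # interpolate between the bracketing samples for sub-frame precision
    t0, t1 = t[idx - 1], t[idx]
    y0, y1 = trace[idx - 1], trace[idx]
    return t0 + (half - y0) * (t1 - t0) / (y1 - y0)

t = np.arange(0, 300, 10.0)                        # s after Ca2+ re-addition
er_refill = 1.0 - np.exp(-t / 90.0)                # mock mono-exponential refilling
print(f"t(half-max) = {time_to_half_max(t, er_refill):.1f} s")
```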
The analysis of the time course using either a "virtual" line scan or a global measurement of the Ca c 2+ over time shows a significantly slower rise in the Ca c 2+ mediated by Ca 2+ tunneling as compared to Ca 2+ release on full stores, although both signals ultimately reached similar amplitudes (Supplemental Fig. 3). Another formal possibility in mediating the mitochondrial Ca 2+ respond to Ca 2+ release as compared to Ca 2+ tunneling is the spatial localization of the mitochondria in relationship to the Ca 2+ source. Depending on the cell type (and physiological conditions) the mitochondria can be positioned close to the plasma membrane as in immune cells where they regulate SOCE 19,20 or deeper in the cell where they preferentially interact with the ER 23 . We therefore tested whether store depletion affects the relative position of mitochondria in HeLa cells. However, we could not detect changes in mitochondrial localization using either confocal or TIRF imaging (Supplemental Fig. 4). We also evaluated the changes in the morphology of the mitochondria after store depletion using 3D imaging, although we detected some reduction (9%) in the length of mitochondria branches the effect was small and irreversible after Ca 2+ readdition, and therefore unlikely to explain the differential effect of Ca 2+ and Ca 2+ tunneling on mitochondria (Supplemental Fig. 5). Taken together our results reveal that given the spatio-temporal dynamics of Ca 2+ release versus Ca 2+ tunneling, and despite the fact that both rely on IP 3 Rs and are associated with an equivalent rise in global Ca c 2+ , mitochondria respond readily to Ca 2+ release but not Ca 2+ tunneling. We hypothesize that the slower speed of Ca c rise during tunneling, combined with the distance between the point source of Ca 2+ entry (the Orai channel) and the target (i.e. the mitochondria) impair the formation of a "Hot Spot" or high Ca 2+ domain between the ER and the mitochondria that would allow the MCU to import Ca 2+ during tunneling. Ca 2+ -activated potassium channels. The global Ca 2+ rise detected following tunneling argues for a spatially localized cytosolic Ca 2+ transient. The mitochondria, which tend to localized deeper within HeLa cells, are unable to detect this transient. In Xenopus oocytes Ca 2+ tunneling is particularly effective at stimulating Ca 2+ -activated Cl channels at the PM 7 . Therefore, to test whether Ca 2+ -dependent PM localized effectors respond to Ca 2+ tunneling, we turned to Ca 2+ -activated K + channels (K Ca ), which are expressed in HeLa cells 29 . Histamine-dependent Ca 2+ release when stores are full induces a transient K Ca current ( Fig. 3A and Supplemental Fig. 6). Similarly, Ca 2+ tunneling was effective at activating K Ca (Fig. 3A,B, Tun), but with a smaller peak amplitude than the current induced by histamine from full stores (Fig. 3A,B, Rel). The tunneling-induce K Ca was longer lasting though, leading to a larger charge transfer for an identical histamine stimulation (Fig. 3B). Comparatively, a much smaller maximal gK Ca current was observed in response to SOCE, using the CPA-wash protocol (Fig. 3B, SOCE). This indicates that Ca 2+ tunneling activates gK Ca at the PM, significantly more efficiently than SOCE (Fig. 3B), in a similar fashion to what is observed with Ca 2+ -activated Cl channels 7 . This is consistent with data from human submandibular gland cells where gK Ca activation by Ca 2+ influx required the uptake of Ca 2+ in the ER stores 30 . 
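The distinction drawn here between peak current and total charge transfer amounts to comparing a maximum with a time integral; the sketch below applies a trapezoidal integration to two synthetic current traces whose shapes are chosen only to mimic the qualitative pattern reported (smaller peak but larger charge for the tunneling-evoked current).

```python
# Sketch: peak amplitude versus charge transfer for a whole-cell K(Ca) current.
import numpy as np

t = np.arange(0, 120, 0.1)                                     # s
i_release = 80.0 * np.exp(-t / 8.0) * (1 - np.exp(-t / 1.0))   # pA: large but brief (mock)
i_tunnel  = 30.0 * np.exp(-t / 40.0) * (1 - np.exp(-t / 5.0))  # pA: smaller but sustained (mock)

def charge_pC(t, i):
    """Trapezoidal integral of a current trace in pA over time in s (result in pC)."""
    return float(np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t)))

for name, i in [("Release", i_release), ("Tunnel", i_tunnel)]:
    print(f"{name:7s} peak = {i.max():5.1f} pA   charge = {charge_pC(t, i):7.1f} pC")
```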
The differential response of gK Ca and the mitochondria argues that Ca 2+ tunneling is effective at raising cytosolic Ca 2+ levels in the cell cortex close to the PM but not deep within the cell. To record gK Ca we used the whole-cell patch-clamp technique, which modifies the intracellular environment after breaking into the cell and could have deleterious effects on Ca 2+ buffering and spatial dynamics. We therefore sought a non-invasive approach to visualize cortical Ca 2+ transients and confirm the results obtained with gK Ca . We used a membrane-targeted Ca 2+ sensor, Lck-GCamp5G 31 , coupled to TIRF microscopy to measure Ca 2+ changes specifically at the plasma membrane level. Bath application of histamine induced a fast transient elevation of Ca 2+ in the sub-PM layer (Fig. 4A,B). Comparatively, Ca 2+ influx through 'physiological' SOCE activated using the CPA-washout protocol results in a smaller, slower and longer-lasting cortical Ca 2+ increase (Fig. 4B). In contrast, when SOCE was induced maximally with thapsigargin it resulted in a larger cortical Ca 2+ influx (Fig. 4B,C), thus confirming that irreversible inhibition of SERCA enhances cortical Ca 2+ transients. In agreement with the gK Ca data, Ca 2+ tunneling results in a slow and long-lasting increase in sub-PM Ca 2+ of significantly higher amplitude and duration than SOCE (Fig. 4B,C). In a pattern similar to what we observed with gK Ca , the amplitude of the tunneling signal was smaller than the release induced by histamine on full stores, but significantly longer lasting, creating a larger transfer of total Ca 2+ (evaluated using the area under the Ca 2+ traces) (Fig. 4D). Interestingly, during tunneling the total amount of Ca 2+ ions in the cell cortex is equivalent to that observed with thapsigargin-induced SOCE (Fig. 4D). This highlights the efficiency of the pump-leak pathway at the ER membrane during tunneling, and the conversion of the slow conductance of SOCE channels into a sustained signal at the sub-PM level using the high-conductance leak of IP 3 receptors. NFAT1 translocation. A well-characterized effector that responds exquisitely to Ca 2+ in the SOCE microdomain and not Ca 2+ release from stores is the calcineurin-NFAT signaling pathway 14,32 . NFAT1 is a transcription factor, phosphorylated at rest and dephosphorylated following the activation of calcineurin by SOCE, which leads to its translocation to the nucleus (Fig. 5A). The effect of Ca 2+ tunneling on gK Ca argues that it extends the SOCE Ca 2+ microdomain into the cortical region of the cell and activates effectors accordingly. Therefore, Ca 2+ tunneling should not affect the activation of calcineurin-NFAT as it is not expected to alter the SOCE microdomain. We therefore tested NFAT nuclear translocation in response to Ca 2+ release, SOCE and Ca 2+ tunneling (Fig. 5). Consistent with previous reports, Ca 2+ release induced by thapsigargin, CPA or histamine did not induce NFAT1 translocation (Fig. 5B). In contrast, when SOCE was activated with either thapsigargin or CPA, it effectively induced NFAT1 translocation to the nucleus, although translocation occurred with a faster time course in response to TG (Fig. 5A-C).
Ca 2+ tunneling results in higher levels of NFAT1 translocation, with similar kinetics as those observed in response to CPA (Fig. 5C). This is likely due to the longer duration of the Ca 2+ signal in the SOCE microdomain when Ca 2+ tunneling is active, due to the continuous pump-leak of Ca 2+ at the ER membrane. Alternatively, it may indicate some additional activation of calcineurin outside the SOCE microdomain in the sub-PM domain. Therefore, as expected, Ca 2+ tunneling does not dramatically alter the activation of the calcineurin-NFAT axis.

[Figure 4 caption (partial): Ca 2+ signals recorded under the plasma membrane following: Ca 2+ release from full Ca 2+ stores with histamine (Release); Ca 2+ tunneling with SOCE re-fueling the ER and releasing Ca 2+ through IP 3 Rs stimulated by histamine (Tunnel); and SOCE induced by CPA after washout or after store depletion by thapsigargin (Tg). (C) Bar chart summarizing the peak amplitude of the SOCE signals induced after store depletion with either thapsigargin or CPA, and that induced following Ca 2+ release and in response to Ca 2+ tunneling. (D) To account for the difference in the signal kinetics, the area under the trace was integrated over a 5 min period and summarized in a bar chart. The number of cells is indicated above the bars; statistics are according to ANOVA followed by Tukey's multiple comparison test.]

Discussion

Agonist stimulation through GPCRs or receptor tyrosine kinases often couples to PLCs, resulting in the generation of Ca 2+ transients that in non-excitable cells tend to be biphasic, with the initial release phase due to Ca 2+ mobilization from intracellular Ca 2+ stores being rapid and of high amplitude but short-lived as the stores empty. This is followed by a sustained phase of Ca 2+ influx, with varying duration based on the cell type and the agonist, that is due to the activation of SOCE. Experimentally, protocols have been devised to temporally separate the Ca 2+ release and SOCE phases, as this allows for a better dissection of their relative contributions. However, physiologically the two processes are tightly linked and overlapping, with SOCE being activated as stores gradually deplete and most likely in a spatially complex fashion. Therefore, in the cycle of IP 3 -dependent Ca 2+ release, store depletion, SOCE activation, store refilling and SOCE inactivation, there is a time window where SOCE is active while IP 3 receptors are still open, resulting in a pump-leak at the ER membrane, due to Ca 2+ uptake by SERCA within the SOCE microdomain while Ca 2+ is released through open IP 3 receptors, a process known as Ca 2+ tunneling. Conceptually this is reminiscent of the 'capacitative Ca 2+ influx' model originally proposed by Jim Putney 33 , where he envisioned Ca 2+ flowing directly from the extracellular space into the ER lumen to refill the stores, before the plethora of signaling roles of SOCE in addition to store refilling were appreciated. Ca 2+ tunneling adds a new dimension for Ca 2+ signaling downstream of SOCE by allowing Ca 2+ influx to activate effectors that are spatially far away from the point source entry at SOCE puncta. This would be essential for SOCE to activate different effectors that do not localize to the SOCE microdomain, especially in cells such as HeLa where the SOCE puncta after store depletion are estimated to occupy <1% of the PM 12.
Furthermore, an essential feature of Ca 2+ tunneling is to bypass the highly buffered cytosol to allow Ca 2+ ions to reach their target by using the ER lumen as a tunnel given it's lower buffering capacity 34 . Ca 2+ tunneling was originally described in pancreatic acinar cells where it transports Ca 2+ entering at the basolateral side of the cell through ER tunnels to the apical side where IP 3 Rs localize (see Petersen et al. 2017 for a recent review) 2,16 . Ca 2+ tunneling was more recently generalized through studies in frog oocytes, showing dramatic remodeling of the Ca 2+ signaling machinery at the PM in response to store depletion to support Ca 2+ tunneling, where it targets both Ca 2+ -activated Cl channels and the IP 3 R itself to modulate tonic versus oscillatory Ca 2+ signaling 7,35 . The same mechanism of using ER tunnels to transport Ca 2+ to effectors has been alluded to, although not directly investigated, in other studies where it targets the Ca 2+ -activated K channel, nuclear NFAT activation, and Ca 2+ transport from the soma to maintain Ca 2+ signaling in dendrites 14,30,36 . In this study, we were interested in testing the functionality of Ca 2+ tunneling in non-polarized or specialized cells that differ from pancreatic acinar cells, neurons or oocytes for instance, using multiple different effectors of different nature with distinct spatial distribution (organelle, channel, and signaling molecule). We use simultaneous imaging of the three primary Ca 2+ signaling compartments in HeLa cells (cytosol, ER and mitochondria) to assess the functionality of the Ca 2+ tunneling mechanism and its ability to specifically and selectively activate downstream effectors. Our results show that Ca 2+ tunneling is functional in HeLa cells downstream of store depletion (Fig. 2), where using combinations of agonist stimulation, manipulation of extracellular Ca 2+ , and a simple CPA-wash protocol, allows us to separate Ca 2+ release, SOCE and Ca 2+ tunneling to assess the effect of each Ca 2+ signaling modules on downstream efforts. At the outset of the study an attractive target for Ca 2+ tunneling was the mitochondria given the well documented intimate physical interaction between the ER and mitochondria through the MAMs, and their distribution away from the PM in HeLa cells 23 . Surprisingly, we show that mitochondria in HeLa cells respond readily to agonist-dependent Ca 2+ release when stores are full with a delay after a Ca c 2+ rise (Fig. 1), but do not respond to Ca 2+ tunneling despite the fact that the Ca c 2+ signal reaches a similar amplitude globally (Fig. 2). In contrast Ca 2+ tunneling was more effective than Ca 2+ release and far more effective than SOCE at activating gK Ca and at raising sub-PM Ca 2+ levels. Mitochondria in HeLa cells do not respond well to Ca 2+ flowing through SOCE 23 , a finding that we have confirmed here (Fig. 1). However, the mitochondrial response to Ca 2+ tunneling is even poorer than to maximal SOCE stimulated by thapsigargin (Fig. 2B). The chain of channels and pumps mediating Ca 2+ tunneling in series could potentially explain this observation. Ca 2+ influx through SOCE initiates Ca 2+ tunneling followed by Ca 2+ uptake into the depleted ER through SERCA activity, and then release through open IP 3 Rs. The limiting factor as discussed below in this Ca 2+ transport chain is the SERCA pump given its low flux. 
Estimates of the single channel Ca 2+ current through the IP 3 R under physiological conditions are 0.1-0.2 pA 37 , corresponding to a rate of 6 × 10 5 Ca 2+ /sec at the lower end of the spectrum. The rate of SERCA2b uptake was estimated at ~40 Ca 2+ /sec at Vmax 6,38 , and the flux through an Orai channel is estimated at 5000 Ca 2+ /sec considering a P o of 0.8 (see Hogan for a detailed discussion) 6 . These estimates argue that during Ca 2+ tunneling the leak through any open IP 3 R is orders of magnitude higher than the trickle of Ca 2+ that can be fed into the ER by SERCA. This would ensure that the ER does not refill readily allowing Ca 2+ tunneling to proceed for tens of seconds. Assuming that IP 3 Rs are evenly distributed throughout the ER membrane and opening stochastically the first open IP 3 R encountered would release the Ca 2+ fed into the ER by 15,000 SERCA pumps operating at full capacity upstream. This implies that during tunneling Ca 2+ entering the cell through SOCE would not be able to travel deep within the cell as the ER acts as a sieve with open IP 3 Rs resulting in Ca 2+ leak from the cortical ER, thus preventing Ca 2+ from reaching deep within the ER to activate mitochondria (Fig. 6). This model is attractive, as it would also explain the differential response of gKca and mitochondria to Ca 2+ tunneling (Fig. 6). The remodeling of the Ca 2+ signaling machinery in response to store depletion results in SOCE as a point source Ca 2+ entry that localizes to focal sites at the PM and as such directionally feeding Ca 2+ into the cell. At this point, we cannot rule out other contributing factors such as the slower speed of Ca 2+ tunneling as compared to Ca 2+ release or some kind of restructuring to the MAMs (Supplemental Fig. 3). The flux estimates discussed above would also predict leakage of Ca 2+ from the SOCE microdomain since the flow of Ca 2+ through Orai channels in SOCE puncta would be predicted to overwhelm SERCA pumps that localize to the puncta. Consistently, we observe an increase in sub-PM Ca 2+ and activation of gKca in response to SOCE. However, Ca 2+ tunneling greatly enhance both as would be expected from the leak through IP 3 Rs (Fig. 6). Furthermore, Ca 2+ tunneling by maintaining the ER depleted extends the duration of SOCE and allows it to more effectively activate downstream effectors. As shown in Fig. 3, although the amplitude of gK Ca induced by Ca 2+ tunneling is much smaller than that induced by Ca 2+ release, the total charge transfer is significantly greater in response to Ca 2+ tunneling. This is reflected as well in the Ca 2+ signal in the sub-PM domain (Fig. 4). NFAT activation in contrast to gK Ca or the mitochondrial response, is quite specific to a Ca 2+ rise in the SOCE microdomain as it is activated with equal efficiency whether SOCE is induced maximally using thapsigargin or to physiological levels using the CPA-wash protocol (Fig. 5). This is consistent with previous studies 14,32,39 . There is a statistically significant increase in NFAT translocation when Ca 2+ tunneling is activated (Fig. 5B). This could be due to either the increased duration of the Ca 2+ signal in the SOCE microdomain due to Ca 2+ tunneling (Fig. 4B) or alternatively activation of calcineurin that localizes outside the SOCE clusters that can be targeted by Ca 2+ tunneling. Collectively, our data show that Ca 2+ tunneling is functional in HeLa cells and expands the specificity of the Ca 2+ signaling machinery toward downstream effectors and sub-cellular domains. 
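The back-of-envelope rates quoted earlier in this paragraph can be reproduced with a few lines of arithmetic; the sketch below converts the single-channel current to ions per second (making the divalent-charge assumption explicit) and recovers the ~15,000 SERCA-per-IP 3 R ratio used above.

```python
# Sketch: flux arithmetic for the tunneling chain (Orai -> SERCA -> IP3R).
E_CHARGE = 1.602e-19   # C, elementary charge

def ions_per_s(i_pA, z=2):
    """Ions per second carried by a current of i_pA picoamps for ions of valence z."""
    return (i_pA * 1e-12) / (z * E_CHARGE)

# 0.1 pA corresponds to ~6e5 elementary charges/s but ~3e5 divalent Ca2+ ions/s.
for i in (0.1, 0.2):
    print(f"IP3R at {i} pA: {ions_per_s(i):.1e} Ca2+/s (z = 2), {ions_per_s(i, z=1):.1e} charges/s")

ip3r_rate = 6e5     # Ca2+/s, the figure used in the text
serca_rate = 40.0   # Ca2+/s per SERCA pump at Vmax
orai_rate = 5e3     # Ca2+/s per Orai channel at Po = 0.8

print(f"SERCA pumps matching one open IP3R: {ip3r_rate / serca_rate:,.0f}")
print(f"One open IP3R vs one Orai channel:  {ip3r_rate / orai_rate:,.0f}x")
```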
The response of the mitochondria and gK Ca shows that Ca 2+ tunneling is particularly effective at raising Ca 2+ levels in the cortical cytoplasm next to the PM. This can be explained by the high conductance of IP 3 Rs discussed above, which produces a strong leak favoring Ca 2+ release in the cortical region close to the SOCE point source of Ca 2+ entry. It therefore appears that Ca 2+ tunneling is a specialized module for raising cortical Ca 2+ levels, effectively expanding the SOCE microdomain. This is somewhat distinct from the long-range Ca 2+ transport due to tunneling in pancreatic acinar cells, which supports the transport of Ca 2+ through the ER lumen from the basolateral to the apical end of the cell. In addition to expanding the spatial spread of the SOCE microdomain, Ca 2+ tunneling also modulates the temporal dynamics of SOCE: the continuous pumping of Ca 2+ into the ER lumen by SERCA and its release through IP 3 Rs maintain a depleted ER and lower Ca 2+ levels in the SOCE microdomain, thereby extending the duration of SOCE (Fig. 6). Finally, and consistent with our findings, Thillaiappan et al. recently showed that licensed IP 3 Rs preferentially localize close to the PM in the cell cortex and at the periphery of STIM1 clusters 40 . Such a localization of immobile IP 3 Rs is ideally suited to support Ca 2+ tunneling, and in particular to provide it with the cortical spatial specificity discussed herein. Methods Cell culture and solutions. HeLa cells were cultured in DMEM media containing 10% fetal bovine serum supplemented with penicillin (100 units.ml −1 ) and streptomycin (100 µg.ml −1 ). The cells were plated 24 h before transfection on poly-lysine coated glass-bottom dishes (MatTek, U.S.A). For all live-cell experiments, the cells were continuously perfused using a peristaltic pump (Gilson Minipuls) at 1 ml.min −1 . The standard saline contained (in mM) 145 NaCl, 5 KCl, 2 CaCl 2 , 1 MgCl 2 , 10 glucose, 10 HEPES, pH 7.2. For Ca 2+ -free experiments, the Ca 2+ was exchanged equimolarly with Mg 2+ . Intracellular Ca 2+ imaging. For organelle Ca 2+ imaging, HeLa cells were transfected with the Ca 2+ indicator constructs G-CEPIA2mt and R-CEPIAer (0.5 µg per dish) using a standard Lipofectamine 2000 (Invitrogen) procedure 24 h to 48 h prior to imaging. Both constructs were obtained from Addgene (#58218 and #58216, respectively) and were originally created by Masamitsu Iino's group 25 . To image cytoplasmic Ca 2+ , the cells were loaded for 45 min with Fura Red-AM at 37 °C (1 µM from a 1 mM stock in 20% pluronic acid/DMSO). Imaging was performed on a Leica TCS SP5 confocal system (Leica, Germany) fitted with a 63x/1.40-0.60 oil immersion objective using an open pinhole. G-CEPIA2mt was excited using a 488 nm laser line and the emission collected at 500-590 nm. The same line was used to excite the Ca 2+ -free form of Fura Red, with the emission collected between 600-709 nm. For R-CEPIAer, the excitation was performed with a 561 nm laser line and the signal collected between 583-649 nm. The frame rate was set to 0.1 Hz unless stated otherwise. For Fluo4 imaging, the cells were loaded with 1-2 µM Fluo4-AM (45 min at 37 °C). Excitation was performed using a 488 nm laser line and the signal collected at 500-560 nm with the pinhole at 1 airy unit; the frame rate was set to 1 Hz. Morphological analysis. Cells expressing G-CEPIA2mt and R-CEPIAer were fixed (4% PFA, 10 min) and stained with Alexa633-tagged Wheat Germ Agglutinin (2 µg.ml −1 ) (Invitrogen).
Confocal images were acquired every 0.5 µm to generate z-stacks. The imaging was performed on a Zeiss LSM880 controlled by Zen Black 2.3 (Zeiss, Germany) and fitted with a 63x/1.4 objective. The imaging parameters were as follows: for G-CEPIA2mt: λ ex = 488 nm, λ em = 494-568 nm; for R-CEPIAer: λ ex = 561 nm, λ em = 570-622 nm; and for WGA-Ax633: λ ex = 633 nm, λ em = 640-747 nm. The distance between the plasma membrane, the ER and the mitochondria was evaluated using the intensity profile of a linear region of interest drawn between both sides of the cell, with the distance measured at 50% of the peak amplitudes of the signals. Whole-cell patch-clamp. Ca 2+ -activated K + currents were recorded using a standard whole-cell patch-clamp protocol. Patch pipettes (resistance ranging from 4 to 6 MΩ when filled with the pipette solution) were sealed to the plasma membrane and the patch was ruptured after the formation of a giga-ohm seal. The cells were voltage-clamped at 0 mV at steady state using an Axopatch 200B amplifier (Molecular Devices, U.S.A) controlled by pClamp 10. The internal pipette solution contained (in mM) 140 K-gluconate, 2 NaATP, 2 MgCl 2 , 10 HEPES, and 1 µM EGTA, pH 7.4. The extracellular solutions and perfusion system were the same as in the imaging experiments. Total Internal Reflection Fluorescence Microscopy (TIRF). For localization at the plasma membrane, the cells were transfected with the Orai1-mCh, STIM1-CFP and CEPIA2mt constructs, and the stores were depleted with thapsigargin. The membrane plane was identified by the presence of clusters of STIM1 and Orai1 and used to adjust the evanescent wave. The imaging was performed on a Zeiss Cell Observer TIRF system using the following parameters: for STIM1-CFP: λ ex = 405 nm, λ em = 446-468 nm; for G-CEPIA2mt: λ ex = 488 nm, λ em = 510-555 nm; and for Orai1-mCh: λ ex = 561 nm, λ em = 581-679 nm. For TIRF Ca 2+ imaging at the plasma membrane, the cells were transfected with the Ca 2+ sensor Lck-GCamp5G (Addgene #34924) 31 . The sensor was excited at λ ex = 488 nm and the images collected at λ em = 510-555 nm; the frame rate was 0.1 Hz. The perfusion system and solutions were identical to the previous experiments. NFAT1 translocation. Cells were transfected for 24 h with the NFAT1-GFP construct (Addgene #11107) 41 . The imaging was performed using the same settings as the TIRF imaging of GFP-tagged proteins, except that the mirror was set vertically to obtain a widefield image. The perfusion saline and system were as previously described. Data analysis and statistics. The imaging data were quantified using FIJI/ImageJ 1.51n 42,43 and Zen Blue 2.3 (Zeiss). The patch-clamp data were analyzed with Clampfit 10.0 (Molecular Devices). Statistics and data analysis were performed using GraphPad Prism 7.02 (GraphPad, U.S.A). Values are given as means ± S.E.M. and statistics were performed using either paired or unpaired Student's t-test, or ANOVA followed by Tukey's test for multiple comparisons. P-values are ranked as follows: *P < 0.05, **P < 0.01, ***P < 0.001.
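To illustrate the 50%-of-peak distance measurement described above, a minimal sketch is given below (our own illustrative helper with synthetic Gaussian profiles standing in for the PM and ER channel intensities along a line ROI; this is not the authors' analysis code):

```python
import numpy as np

def half_max_position(profile, x):
    """Return the first position where a 1D intensity profile
    crosses 50% of its peak amplitude (linear interpolation)."""
    half = 0.5 * profile.max()
    i = np.where(profile >= half)[0][0]
    if i == 0:
        return x[0]
    # interpolate between the samples just below and above the half-maximum
    frac = (half - profile[i - 1]) / (profile[i] - profile[i - 1])
    return x[i - 1] + frac * (x[i] - x[i - 1])

# x: positions (µm) along a line ROI; pm, er: channel intensity profiles
x = np.linspace(0, 5, 200)
pm = np.exp(-((x - 1.0) / 0.2) ** 2)   # synthetic WGA (PM) profile
er = np.exp(-((x - 1.6) / 0.3) ** 2)   # synthetic R-CEPIAer profile
print(half_max_position(er, x) - half_max_position(pm, x))  # PM-to-ER distance
```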
8,920.8
2018-07-25T00:00:00.000
[ "Biology" ]
Civil unrest and herding behavior: evidence in an emerging market Abstract This investigation is the first to analyze how civil unrest impacts herding behavior in an emerging economy. Using the series of daily prices and traded volumes of the companies that make up the IGPA of the Santiago Stock Exchange between 2010 and 2020, it was found that civil unrest causes reverse herding behavior in the Chilean stock market. Thus, herding behavior and inverse herding behavior are more complex behaviors than the financial literature has reported to date, especially in a period of civil unrest. Different robustness tests support the findings. Introduction Civil unrest consists of acts of protest and/or attacks by certain groups of civilians and/or the mass population against the government. These riots can take the form of peaceful, disruptive, or violent demonstrations, strikes and acts of violence (Lancaster, 2018). The consequences of civil unrest are diverse and can be disastrous, covering areas such as human rights, the economy, education, health, and infrastructure, among many others. In financial markets, one way for investors to overcome periods of crisis is to adopt herding behavior (Andrikopoulos et al., 2017; Omay & Iren, 2019). This behavior allows them to imitate the actions of others by using the same sources of information, interpreting signals sent to the market in an identical way, and, consequently, making similar financial decisions (Hirshleifer et al., 1994). In this way, acting in a "herd," investors would safeguard the value of their assets in periods of crisis. Furthermore, when the social interactions among traders are strong, an extremely small bubble may cause a sufficiently large number of traders to engage in herd behavior (Chang, 2014). But can an economic-financial crisis be considered similar to civil unrest? Definitely not. During civil unrest, human rights violations and even deaths occur. The consequences of these acts shape new behaviors that can even span generations. These situations do not occur in periods of economic-financial crisis. Furthermore, it is known that in serious economic-financial crises the government has operated as a lender of last resort to rescue the market, whereas during civil unrest it is the government itself that is questioned. Considering that civil unrest is a situation different from an economic-financial crisis, two outcomes are possible in the stock markets: first, that the people who participate in the stock markets interpret the situation as a crisis, favoring herding behavior; or second, that some of them act ahead and individually, quickly producing reverse herding behavior. To date, the financial literature has not reported how civil unrest impacts herding behavior. The main objective of this article is to investigate how civil unrest impacts herding behavior. For this, the series of daily prices and traded volumes of the companies that make up the General Index of Stock Prices of the Santiago Stock Exchange (S&P/CL IGPA) from January 1, 2010 to February 04, 2020 are used, together with the model proposed by Chang et al. (2000), which is a modification of the model proposed by Christie and Huang (1995). The period of civil unrest is considered from October 19, 2019 onwards, since on this day President Sebastián Piñera declared a State of Emergency in Santiago, the capital of Chile, due to the increasingly violent riots that had been escalating since October 7.
The results show that civil unrest caused reverse herding behavior in the Chilean stock market. Robustness tests support the findings. In turn, the results are not in line with those of Nath and Brooks (2020) or Christie and Huang (1995), who point out that herding behavior should appear during extreme and crisis periods. This confirms that civil unrest is different from a traditional economic-financial crisis and that its impact on the stock market is also different. The findings of this study are relevant to the financial sector (investors, regulators, brokers), the political sector (government, congress), analysts and academics. This study is structured as follows. Section 2 briefly reviews the literature and research hypotheses. Section 3 presents the methodology and data used in this paper. Section 4 presents and discusses the empirical findings and highlights limitations of the current study. Section 5 concludes the paper. Herding behavior in financial markets Herding behavior is a process in which investors imitate the actions of others. It is a tendency for investors to follow the same sources of information, interpret signals sent to the market in an identical way, and, consequently, make similar financial decisions (Hirshleifer et al., 1994). An extensive literature on herding behavior in financial markets can be found for developed countries (USA, Nicolis and Sumpter (2011); Central and Eastern Europe, Pochea et al. (2017); Germany, Mueller and Brettel (2012); and Spain, Blasco et al. (2012); among others). Similarly, there is extensive evidence on various financial markets, among which stand out the stock markets (Economou et al., 2011), the gold market (Boako et al., 2019), hedge funds (Ennis & Sebastian, 2003) and bank markets (Waheed & Mathur, 1993), among others. From developed markets the evidence has spread to countries in Asia (Najmudin et al., 2017) and various emerging countries such as Malaysia and South Korea (Chung et al., 2016), Pakistan (Chaudhry & Sam, 2018), the Gulf Arab stock markets of Abu Dhabi, Dubai, Kuwait, Qatar and Saudi Arabia (Balcilar et al., 2013), the Asian and Latin American markets (Kabir & Shakur, 2018), Israel (Andronikidi & Kallinterakis, 2010), Russia (Indārs et al., 2019) and Greece (Economou et al., 2016), among many others. These studies use different techniques and incorporate specific characteristics of these markets, such as agency, information, efficiency and behavioral problems, expansionary monetary policy, increases in foreign portfolio investment, and contagion factors in financial markets, among others, to explain herding behavior. In this context, the studies on herding behavior closest to events such as civil unrest are those that consider times of crisis, such as the subprime crisis and the Asian crisis (Andrikopoulos et al., 2017 and Omay & Iren, 2019, respectively). Andrikopoulos et al. (2017) consider a sample of Euronext member countries, reporting intraday herding behavior before, during and after the period of said financial crisis, with its presence being less prominent during the crisis. Omay and Iren (2019) report herding behavior for Malaysia during the Asian crisis for both foreign and domestic investors. In addition, there is also research that incorporates social aspects to explain herd behavior; in this vein, herding would occur when sufficiently strong social interactions are present among individual investors (Chang, 2014).
In summary, the evidence is overwhelming in pointing out that herding behavior is present in financial markets. However, there is little evidence on how social interactions can influence this behavior and, as far as we know, there are no studies investigating how civil unrest impacts herding behavior. Civil unrest in financial markets An exploratory review of articles on civil unrest in financial markets in the Web of Science (WOS) database, 1 searching titles, abstracts, and keywords, conducted on May 30, 2020, returned 4 articles, and only 1 of them, which investigates the relationship between armed and civil conflicts and oil prices, has any relationship with civil unrest. In this article, Noguera (2016) reports three civil-political unrests among the 32 geopolitical events in his sample, corresponding to Nigeria and Venezuela and focused, like all the selected events, on observing how these events impact the price of oil. A broader search in the same WOS database that included the keyword "Economy" yielded 48 articles during the period from 1995 to 2019, although no more than 10 of them have any relation to the financial sector. In this group of articles, Brunnschweiler and Lujala (2019) show that relative economic backwardness is related to a greater probability of the appearance of violent and nonviolent forms of civil unrest, for a sample covering 163 independent countries during the period between 1946 and 2011. Although the authors do not propose that economic backwardness is the main explanation for mass demonstrations or armed civil conflict, the results suggest that civil unrest may appear more naturally in economies that lag in their economic development, such as emerging economies. This is in line with studies reporting evidence that civil unrest is more likely to arise in countries with weak economic conditions (Collier & Hoeffler, 2004; Elbadawi & Sambanis, 2002) and in economies based more heavily on the extraction and production of commodities (Collier & Hoeffler, 2005; Dunning, 2005). On another note, Chapman and Reinhardt (2013) found that increases in the sovereign risk of 59 developing countries over the period 1990-2004 are positively related to higher probabilities of civil unrest. Weinberg and Bakker (2015) investigated the relationship between food prices inside a country and social unrest, reporting a positive relationship between them. Barry et al. (2014), who use a cross-national sample of 125 countries from 1981 to 2007, find that freedom of foreign movement and the existence of economic opportunities abroad reduce civil unrest. Kibria et al. (2020), using a sample of 34 countries in Sub-Saharan Africa (SSA) between 1972 and 2013, found that an increase in foreign direct investment reduces the risk of civil violence for skilled-labor-intensive, fuel-resource-rich SSA countries. Power (2018), from a relative-deprivation-theory perspective, reports for Ireland that the gap between expectations and lived experiences galvanized and legitimized protest and civic discontent. Power (2018) shows that civil unrest arises even when there is objective information about economic growth; there is therefore a subjective aspect that leads people to social discontent. Other research has investigated how financial markets specifically have reacted to scenarios of war, economic turbulence, and geopolitical unrest. For example, Leigh et al.
(2003) compared price movements in oil futures markets with the "Saddam Security" prior to the Iraq war in order to assess how the war would affect the evolution of oil prices. The authors found that the probability of war increases the price of oil in the short run, and that the US stock market is extremely sensitive to changes in the likelihood of war. Antonakakis et al. (2017) investigated the dynamic structural connectedness between oil price shocks and the stock market returns and volatility of 11 major stock markets around the world (net oil-importing and net oil-exporting countries) for the period 1995-2013. Among their findings, they report a peak in connectedness during periods of economic turbulence and geopolitical unrest, such as the Great Recession, the second Iraq war and the start of the Arab Spring. Suzuki and Miah (2010) report that frequent civil unrest, which leads to a high level of uncertainty, is one of the causes of shallow financial development in Georgia, and Hong et al. (1999) report that the South African boycott had little valuation effect on the financial sector. The authors of that article studied the financial effects of the most visible and successful instance of social activism in investment policies, the boycott of South Africa designed to speed the end of the apartheid regime in 1986. No evidence was found that the antiapartheid shareholder and legislative boycotts affected the financial sector adversely. Civil unrest and herding behavior in Chile Considering the literature on herding behavior and civil unrest, this research focuses on Chile, for two reasons. First, Chile is a small, open economy with a well-developed capital market in which a significant number of companies participate; in addition, it has low levels of corruption, a good judicial system, and open and regulated financial markets (Securities Market Law, Public Offerings and Corporate Governance Law). Furthermore, Chile was the first country in the region to establish a private pension fund system that participates in the stock market, giving the market greater liquidity and depth. Finally, it is a member of the Latin American Integrated Market (MILA), which gives companies that participate in the capital market greater financing opportunities; as a consequence, investors find greater investment opportunities. All this allows herding behavior to be tested in a relatively developed market without intervention. Second, although various episodes of civil unrest have occurred in Chile in the last decade (Aysén, 2012; Magallanes, 2011; Los Lagos, 2016), the most recent one, called the "Estallido social en Chile" or "Revolución de los 30 pesos," is the most important in the country's democratic history. It originated on October 7, 2019, manifested in the mass evasion of payment in the Santiago Metro by hundreds of secondary students in response to the rise of 30 pesos (0.05 USD) in the fare of the public transport system of Santiago. In the following days, these acts increased, and people of different backgrounds got involved. On October 18, different demonstrations and attacks were carried out on the Santiago Metro stations, which led to the complete closure of its service on its two main lines. The demonstrations continued with clashes between civilians and the police, barricades in some sectors, incendiary attacks and "cacerolazos" (pots-and-pans banging) in much of Santiago. In the early morning of the following day, President Sebastián Piñera declared a "State of Emergency" in Santiago and the Chacabuco region.
That same day the protests continued, including looting that spread throughout the country. From the following day, massive demonstrations began throughout the country. On October 25, a massive march was held, drawing more than 1.2 million people in Santiago alone. The consequences of civil unrest in Chile have been diverse and felt in different areas such as human rights, the economy, education, health, and infrastructure, among many others. In economic terms, the Monthly Economic Activity Indicator (Imacec) for October 2019 was −3.4%, the lowest in the last 10 years; in November it was −3.3%. All of the above makes Chile a natural setting for observing how recent civil unrest can impact herding behavior in the stock market. Indārs et al. (2019) point out that herding behavior is more pronounced in emerging economies than in developed economies, which is attributed to the lower transparency and quality of information in emerging economies (Balcilar et al., 2013); consistent with this, Chang and Lin (2015), who investigated 50 international markets for the period 1989-2011, reported herding behavior in the Chilean stock market. Along the same lines, Nath and Brooks (2020) as well as Christie and Huang (1995) point out that herding behavior should appear during extreme and crisis periods, so an increase in herding behavior in the Chilean stock market during civil unrest would be expected. However, two considerations cast doubt on this hypothesis. First, Chiang and Zheng (2010), who investigated 18 international markets including Chile for the period 1989-2009, reported reverse herding behavior; the contrast with the study by Chang and Lin (2015) shows that herding behavior is sensitive to the selected period. Second, the current civil unrest in Chile, due to its magnitude, consequences and duration, is different from and more profound than other civil unrests in the past and the various economic and financial crises that have occurred in the country. Thus, the behavior of investors participating in the stock market during this period is uncertain. On the one hand, they could share information and make joint decisions to protect their investments. On the other hand, they could, based on the information available, make individual decisions trying to safeguard their assets. Therefore, and finally considering that there is no evidence regarding how civil unrest impacts herding behavior, the following alternative hypotheses are proposed: H1a: Civil unrest causes herding behavior in Chile. H1b: Civil unrest causes reverse herding behavior in Chile. Data The data correspond to the series of daily prices and volumes traded by the companies that make up the General Index of Stock Prices of the Santiago Stock Exchange (S&P/CL IGPA) from January 1, 2010 to February 04, 2020. Companies that were delisted during this time or that did not have data from the beginning of the selected period were not considered. Data from 2010 onwards were used to observe herding behavior over a long period and to isolate the effect of the subprime crisis. Likewise, data were considered only until February 4, 2020 to isolate a potential effect of COVID-19 in Chile, where the first case of contagion was reported on March 05, 2020. Thus, 95 companies and 2,241 observations make up the final sample.
Specifically, the research focuses on determining the effect of civil unrest on herding behavior in the short term, considering as reference date October 19, 2019 (the State of Emergency declaration in Santiago, the capital of Chile) onwards. Different robustness tests are performed around this date. Table 1 reports information on the sample. Overall, the sample represents 46% of the companies listed on the Santiago Stock Exchange and 75% of the volume traded in the stock market in 2019. The Financial and Investments sector groups the largest number of companies in the sample, followed by the Industry and Services sectors (General and Utilities). These three sectors contain 50% of the sample companies and 55% of the volumes traded during 2019. Considering that herding behavior has been reported in the services sector, specifically among the Pension Fund Administrators (Bravo & Ruiz, 2015; Raddatz & Schmukler, 2013), and in the financial sector, specifically in the mutual funds industry (Lavin & Magner, 2014), herding behavior is likely to be present in the Chilean stock market. Methodology Like Chiang and Zheng (2010) and Tan et al. (2008), the stock return is calculated as R_t = 100 × [log(P_t) − log(P_{t−1})] for firms listed on the Santiago Stock Exchange (S&P/CL IGPA) over the period from January 01, 2010 to February 04, 2020. This frequency is used since herding is more evident in daily data than in weekly or monthly data (Tan et al., 2008). To detect herding behavior, the model proposed by Chang et al. (2000) is used, which is a modification of the model proposed by Christie and Huang (1995). This method has been widely used in the financial literature (Batmunkh et al., 2020; Lao & Singh, 2011; Mobarek et al., 2014; Tan et al., 2008; Yao et al., 2014). Specifically, Chang et al. (2000) suggest using the following cross-sectional absolute deviation (CSAD) model to facilitate the detection of herding over the entire distribution of market return: CSAD_t = α + γ_1|R_{m,t}| + γ_2 R_{m,t}^2 + ε_t (1), where R_{m,t} is the market return (equal-weighted average stock return) and CSAD_t is a measure of return dispersion calculated as: CSAD_t = (1/N) Σ_{i=1}^{N} |R_{i,t} − R_{m,t}| (2), where |R_{m,t}| and R_{i,t} are the absolute value of the market return and the individual return of stock i, respectively. To assess the effect of civil unrest on herding, the following specification of Equation (1) is estimated: CSAD_t = α + γ_1(1 − D^CU)|R_{m,t}| + γ_2 D^CU|R_{m,t}| + γ_3 D^CU R_{m,t}^2 + γ_4(1 − D^CU) R_{m,t}^2 + γ_5 Volatility_t + γ_6 Volume_t + ε_t (3), where CSAD_t is the cross-sectional absolute deviation defined in Equation (2), R_{m,t} is the market return, D^CU is a dummy variable that takes the value of 1 from October 19, 2019 (the State of Emergency declaration in Santiago, the capital of Chile) onwards and zero otherwise, Volatility is the standard deviation of the market return over the last 30 days, and Volume is the logarithm of the daily traded volume in the local market. Significantly negative values of γ_3 (γ_4) would indicate the presence of herding after (before) the civil unrest. Visual exploration of the sample First, a visual examination of the data is performed. Figure 1 shows the evolution of the S&P/CL IGPA and the CSAD index during the entire sample period. An increase in CSAD is observed during different periods in the last decade, especially when there are strong changes in the direction of the index (green circles). Figure 2 shows both series during the civil unrest period. There is a sharp drop in the S&P/CL IGPA index from mid-October 2019 (State of Emergency) that runs until mid-November 2019 (agreement for social peace and a new constitution).
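A compact sketch of this pipeline in Python is given below (a minimal illustration, assuming `prices` and `volume_series` are pandas objects holding the IGPA constituents' daily prices and the market's daily traded volume; the variable names and the HC1 robust-covariance option, standing in for the paper's Huber-White errors, are our own choices):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Returns R_{i,t} = 100 * [log(P_t) - log(P_{t-1})] for each firm
returns = 100 * np.log(prices).diff().dropna()
rm = returns.mean(axis=1)                           # equal-weighted market return R_{m,t}
csad = returns.sub(rm, axis=0).abs().mean(axis=1)   # CSAD_t, Eq. (2)

df = pd.DataFrame({"csad": csad, "abs_rm": rm.abs(), "rm2": rm ** 2})
df["d_cu"] = (df.index >= "2019-10-19").astype(float)   # civil-unrest dummy D_CU
df["volatility"] = rm.rolling(30).std()                 # 30-day market volatility
df["volume"] = np.log(volume_series.reindex(df.index))  # log daily traded volume

# Split |R_m| and R_m^2 by the dummy so gamma_3 (after) and gamma_4 (before)
# capture herding/reverse herding around the unrest, as in Eq. (3)
df["abs_rm_b"] = df["abs_rm"] * (1 - df["d_cu"])
df["abs_rm_a"] = df["abs_rm"] * df["d_cu"]
df["rm2_a"] = df["rm2"] * df["d_cu"]
df["rm2_b"] = df["rm2"] * (1 - df["d_cu"])

model = smf.ols("csad ~ abs_rm_b + abs_rm_a + rm2_a + rm2_b + volatility + volume",
                data=df.dropna()).fit(cov_type="HC1")   # Huber-White robust SEs
print(model.summary())
```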
During this same period there is a significant increase in CSAD. This may indicate reverse herding behavior, although this evidence is only exploratory. Descriptive statistics of the sample A statistical analysis of the data is now performed. Table 2 reports the descriptive statistics for the CSAD measure and the market return for the complete sample, before and since the civil unrest. The mean value (0.410) and standard deviation (0.159) of CSAD are high in the total sample and higher in the period after the civil unrest began (0.490 and 0.281, respectively). The results for the market returns are overwhelming: the civil unrest caused a significant drop in the profitability of the Chilean market, confirming the visual analysis. The average return for the period after the civil unrest is −7.8%, with a volatility that almost doubles that of the period before the civil unrest. A higher mean value of CSAD points to significantly higher variation across stock returns, which may indicate that markets exhibit unusual cross-sectional variations due to unexpected events (Chiang & Zheng, 2010). Civil unrest effect on herding behavior With the previous information, it is investigated whether civil unrest has an effect on herding behavior. For this, the model of Equation (3) is estimated. The results are reported in Table 3. Column 1 reports the general model without control variables and column 2 includes volatility (the standard deviation of the market return over the last 30 days) and volume (the logarithm of the daily traded volume in the local market) as control variables. A positive and statistically significant (at the 1% level) coefficient γ_3 = 0.168 is found. The VIF (variance inflation factor) statistic does not indicate multicollinearity, and the errors are calculated using Huber-White robust standard errors. The results are clear: after the civil unrest, taking as reference the date on which President Sebastián Piñera decreed a State of Emergency in the country's capital, there is inverse herding behavior. The difference between the γ_3 and γ_4 estimates is significant in both cases. Visual, statistical, and econometric analyses confirm that the civil unrest caused a significant drop in stock market returns and a reverse herding behavior, supporting hypothesis H1b: Civil unrest causes reverse herding behavior in Chile. Pagan and Sossounov (2003) report that stock performance differs between bull runs and bear markets, due to fear of further potential losses when the market is low versus the prospect of profit when the market is booming (Mobarek et al., 2014). In this context, Demirer and Kutan (2006) point out that the dispersions of stock returns are significantly higher during periods of large changes in the aggregate market index. Demirer et al. (2010) find that herding is more prominent during periods of market loss. Lao and Singh (2011) report that herding is higher when the Chinese market is falling and, in turn, in India when sudden changes in market conditions occur. To capture asymmetric effects of market return, it is examined whether there is any asymmetry in herding behavior when the market is rising or falling, before and after the civil unrest. The results are reported in Table 4, columns 1 and 2. Considering the above, the possible asymmetric effects of herding behavior with respect to volatility in market performance are also examined.
Volatility is considered high when the observed volatility is higher than the moving average of volatility over the previous 30 days, and low when it does not exceed the moving average over the same period (Chang et al., 2000); volatility is calculated as the standard deviation of the market return over the previous 30 days. Analogously, the possible asymmetric effects of herding behavior with respect to domestic market trading volume are examined: trading volume is considered high when the observed volume is higher than the moving average of traded volume over the previous 30 days, and low when it does not exceed the moving average over the same period. The results are reported in Table 4, columns 5 and 6. In summary, Table 4 reports the results of estimating Equation (3) incorporating the different asymmetric effects previously described. Columns 1 and 2 report the results considering the asymmetric effects of market return (R_{m,t} > 0 and R_{m,t} < 0, respectively). Columns 3 and 4 show the results considering high and low volatility states (σ_t > σ^MA_{t−30} and σ_t < σ^MA_{t−30}, respectively). Columns 5 and 6 report the results considering the asymmetric effects of high and low domestic market trading volume (vol_t > vol^MA_{t−30} and vol_t < vol^MA_{t−30}, respectively). Statistically significant inverse herding behavior (γ_3) is found in all civil unrest models (except when R_{m,t} < 0). Regarding the effect of R_m, inverse herding behavior is observed to be greater when the market is rising. In the case of volatility states, reverse herding behavior is stronger under low volatility. Finally, in the case of traded volume, inverse herding behavior is stronger when traded volumes are lower. In summary, civil unrest is found to cause reverse herding behavior in the Chilean stock market; this inverse herding behavior is asymmetric in this period; and it is stronger when the market is rising, with low volatility and low trading volumes. Rolling estimation Asymmetric inverse herding behavior implies that this behavior varies under certain market conditions and could also vary over time. In this context, the contrasting results reported by Chiang and Zheng (2010) and Chang and Lin (2015) for Chile confirm that herding behavior is sensitive to the selected period: Chiang and Zheng (2010) reported reverse herding behavior in the Chilean stock market during the period 1989-2009, while Chang and Lin (2015) reported herding behavior for the period 1989-2011. Thus, over a long period of time, the same market may exhibit both herding and reverse herding behavior. To investigate this point and observe the herding behavior dynamics from the beginning of the civil unrest in Chile, a rolling-window regression methodology is applied to detect potentially time-varying parameters. Specifically, the following model is estimated: CSAD_t = α + γ_1|R_{m,t}| + γ_2 R_{m,t}^2 + ε_t (4). The description of the variables corresponds to those indicated in Equations (1) and (3). A 100-day window is used, and errors are calculated using Huber-White robust standard errors. Interest is placed on the dynamics of γ_2 during the period from October 7, 2019 (first evasion of the Santiago Metro) to November 15, 2019 (agreement for social peace and a new constitution), with special attention to October 19 (State of Emergency). A statistical summary of the main results is reported in Table 5. The S&P/CL IGPA range is over 4,000 points, and the average return for this period is −12.1%.
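The rolling estimation can be sketched as follows (a minimal illustration reusing the `df` frame from the previous sketch; the window loop and column names are our own):

```python
import statsmodels.formula.api as smf

# Rolling 100-day estimation of gamma_2 from Eq. (4)
window = 100
rolling_gamma2 = []
for end in range(window, len(df) + 1):
    sub = df.iloc[end - window:end]
    fit = smf.ols("csad ~ abs_rm + rm2", data=sub).fit(cov_type="HC1")
    rolling_gamma2.append((sub.index[-1], fit.params["rm2"], fit.pvalues["rm2"]))
# A positive, significant gamma_2 in a window flags reverse herding there,
# mirroring the dynamics plotted against the S&P/CL IGPA in Fig. 3.
```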
CSAD presents high and highly volatile values. The average of γ_2 is positive for the period, the models present a good fit, and the individual γ_2 coefficients are statistically significant at the 5% level or better since October 21, 2019. Figure 3 shows the evolution of γ_2 and the S&P/CL IGPA during this period. Starting on October 7, the date on which the first mass evasion took place in the Santiago Metro, and during the following days, reverse herding behavior began to increase until October 19, the day on which the State of Emergency was declared in Santiago. From this date, this behavior declined slightly until November 15, when the agreement for social peace and a new constitution was announced. Since then, reverse herding behavior increased again and the trend tended to persist. This result shows that, during the most critical period of the civil unrest, the unrest caused a change in inverse herding behavior: first increasing it, then letting it fall as an agreement was advanced, to finally resume the values it had before the civil unrest. Herding behavior analysis for period windows A final robustness test corresponds to estimating the model of Equation (4) in different time windows. The aim is to observe whether inverse herding behavior is present in the most critical period of the civil unrest and, from there, to expand the time window to observe whether this behavior is maintained or changes. If civil unrest indeed causes inverse herding behavior, a statistically significant positive coefficient γ_2 is expected in the windows most tightly concentrated around the civil unrest. At the same time, as the window is enlarged, the estimate incorporates more information and, therefore, inverse herding behavior would no longer be expected to be significant; according to the previously reported evidence, it could even turn negative. Accordingly, six time windows around October 19, 2019 were built: −30/+30, −60/+60, −90/+90, −120/+120, −150/+150, and −180/+180 days. Finally, the results are compared to those obtained by applying the same procedure around an earlier date on which market conditions were free of civil unrest (January 02, 2019). Errors are calculated using Huber-White robust standard errors. The results are reported in Table 6. Inverse herding behavior is found to exist around the civil unrest (Panel A): columns 2 (−60/+60), 3 (−90/+90) and 4 (−120/+120) show statistically significant inverse herding behavior. From the (−150/+150) window onwards, the coefficient is no longer statistically significant and, as more observations are considered, herding behavior begins to appear, although it is not statistically significant. In Panel B, the coefficients are close to zero, not statistically significant and without a clear pattern; in this period inverse herding behavior disappears. Discussion Herding behavior and reverse herding behavior are more complex behaviors than the financial literature has reported to date. First, Nath and Brooks (2020) as well as Christie and Huang (1995) point out that herding behavior should appear during extreme and crisis periods. However, during civil unrest the opposite arises: reverse herding behavior. In a financial crisis, herding behavior appears, which is interpreted as market participants acting in a "herd" to protect themselves from the potential losses that the crisis may generate. Thus, the least informed, facing an uncertain scenario, follow the behavior of the most informed.
On the contrary, in a social crisis, specifically under civil unrest, inverse herding behavior appears, which represents just the opposite: the agents that participate in the market make individual decisions, probably questioning the decisions made by the most informed and screening the available information to protect their assets. Second, the results show that herding behavior does not always occur in emerging economies, and they cast some doubt on the claim that this behavior is relatively stronger in emerging markets than in developed markets (Indārs et al., 2019). Indeed, on the one hand, Chiang and Zheng (2010) had already reported reverse herding behavior in Chile; on the other hand, the inverse herding/herding dynamics can vary over a relatively short time horizon. The transparency of a market is the consequence of a set of measures, regulations, and interactions, and it is built up and incorporated into the market over a longer-term horizon; however, reverse herding/herding behavior varies not only over long horizons but also over short ones. The literature is therefore not conclusive when considering herding behavior over time, so we cannot affirm that this behavior is relatively stronger in emerging markets than in developed markets. Third, the literature on civil unrest has reported that it arises in countries with relative economic backwardness (Brunnschweiler & Lujala, 2019) and weak economic conditions (Collier & Hoeffler, 2004; Elbadawi & Sambanis, 2002). In the last two decades, Chile has been highlighted as a solid economy with a well-developed capital market in which a significant number of companies participate; it also has low levels of corruption and a good judicial system that, among other characteristics, make its institutions operate relatively effectively. So, how is civil unrest of the magnitude of the October 2019 events even possible? Power (2018) provides an answer from the perspective of relative deprivation theory, documenting the existence of a gap between expectations and lived experiences. This dissociation can lead certain people to put aside objective information; subjective aspects then emerge that may lead to social discontent, and may lead those who participate in the stock market toward reverse herding behavior. A deeper analysis of this point is proposed as a future line of research. Fourth, civil unrest is more likely in economies that are based more heavily on the extraction and production of commodities (Collier & Hoeffler, 2005; Dunning, 2005) and in countries where sovereign risk increases (Chapman & Reinhardt, 2013), among other factors. In turn, freedom of foreign movement and the existence of economic opportunities abroad (Barry et al., 2014) as well as increases in foreign direct investment (Kibria et al., 2020) reduce civil unrest. These factors, among others, could be triggers of civil unrest and, therefore, of the results presented in this article. However, the analyses reported here show that inverse herding/herding behavior varies not only over long time horizons but also over short ones, so longer-term factors such as those described could hardly provoke changes in short-duration herding.
In summary, herding behavior and reverse herding behavior during civil unrest are complex behaviors that are new to the literature; more research is therefore required to better understand how civil social interactions relate to social interactions in financial markets, where there seems to be a dissociation between the two that could be moderated by the value each individual assigns to their well-being (wealth). Conclusion This article investigates how civil unrest impacts herding behavior in an emerging economy like Chile. Using the series of daily prices and traded volumes of the companies that make up the General Index of Stock Prices of the Santiago Stock Exchange (S&P/CL IGPA) from January 1, 2010 to February 04, 2020, civil unrest is found to cause reverse herding behavior in the Chilean stock market. Three different robustness tests support the findings. This article opens a new area of research to better understand how civil social interactions relate to social interactions in financial markets. Various dimensions and future lines of research arise in this context, such as determining whether this behavior is similar in countries with a culture different from Chile's, such as countries with Confucian, Muslim, or other cultures. Future work could also quantify the determinants of civil unrest to identify how the stock market internalizes it, and examine different origins and types of civil unrest to obtain a more complete picture of its extent and effect on stock markets, among other questions. Additionally, this work contributes to the social-political literature by reporting that civil unrest does not arise only in countries with relative economic backwardness (Brunnschweiler & Lujala, 2019) and weak economic conditions (Collier & Hoeffler, 2004; Elbadawi & Sambanis, 2002). Likewise, it shows that civil unrest has a direct consequence in the stock markets, thus inviting researchers to explore other dimensions that may be impacted. Note 1. Command: TS = (("civil unrest*") AND ("stock market*" OR "capital market*" OR "capital markets*" OR "financial markets*")). Disclosure statement No potential conflict of interest was reported by the author.
8,141
2021-08-11T00:00:00.000
[ "Economics" ]
Motion-resolved quantitative phase imaging: The temporal resolution of quantitative phase imaging with Differential Phase Contrast (DPC) is limited by the requirement for multiple illumination-encoded measurements. This inhibits imaging of fast-moving samples. We present a computational approach to model and correct for non-rigid sample motion during the DPC acquisition in order to improve temporal resolution to that of a single-shot method and enable imaging of motion dynamics at the framerate of the sensor. Our method relies on the addition of a simultaneously-acquired color-multiplexed reference signal to enable non-rigid registration of measurements prior to phase retrieval. We show experimental results where we reduce motion blur from fast-moving live biological samples. Introduction Quantitative phase imaging (QPI) [1][2][3][4][5][6][7] enables stain-free and label-free imaging of transparent biological samples in vitro [8,9]. Unlike non-quantitative phase contrast techniques (e.g. Zernike Phase Contrast [10], Differential Interference Contrast (DIC) [11]), QPI methods are able to separate out the effects of phase and absorption. However, this generally comes at the cost of lost temporal or spatial resolution due to the need for multiple measurements. Here, we implement QPI without sacrificing speed or resolution, for the specific case of coded-illumination QPI. Quantitative Differential Phase Contrast (DPC) [3,4,12,13] recovers the complex transmittance function of a sample from several coded-illumination measurements and a phase retrieval optimization. DPC achieves spatial resolution corresponding to twice the coherent diffraction limit and is practically implemented with an LED-array coded-illumination source on a commercial microscope (Fig. 1(a)) [3,14]. Traditional DPC measurements consist of 4 intensity images, each captured with a half-circle illumination pattern at a different rotation angle (Fig. 1(b)). The time-multiplexed nature of the measurements requires the implicit assumption that the sample is not moving during the acquisition. Of course, live biological samples may be non-stationary (defined as moving more than one pixel during the acquisition time). When only a single measurement is required (single-shot), the exposure time of the sensor can be scaled to guarantee approximately stationary behavior; however, for multi-shot methods, the acquisition time is limited by the sensor readout time. While each individual measurement may have an appropriate exposure time to guarantee the stationary assumption, motion occurring between measurements during the multi-shot DPC acquisition will cause errors in the reconstructed complex field. Interferometry-based QPI techniques such as digital holographic microscopy [5] and white light diffraction phase microscopy [6] can be single-shot, but are limited in spatial resolution by the coherent diffraction limit and are sensitive to system imperfections that cause speckle. Transport-of-intensity-equation based QPI techniques [7] can be single-shot if the chromatic aberrations of the system are large enough [15] or with additional camera hardware [16]. DIC-based QPI techniques [17] can be single-shot with the addition of specialized hardware. Other methods rely on simultaneously acquiring multiple measurements via color multiplexing [18][19][20], polarization multiplexing [21,22], or spatial multiplexing [23]. [Fig. 1 caption: (b) Traditional four-image DPC acquisition with rotating half-circle sources; because the polystyrene bead is moving, the reconstructed phase suffers from motion blur artifacts. (c) Our method, mrDPC, uses traditional DPC source patterns in the green color channel and an additional constant navigator source pattern (half-circle) in the red channel; the motion-resolved phase reconstruction corrects the effects of the sample's non-rigid motion.]
In the case of color multiplexing, the implicit assumption is made that the sample has no chromatic dispersion and is colorless; in the case of polarization multiplexing, the implicit assumption is made that the sample is not birefringent. Both assumptions may be difficult to guarantee when imaging biological samples. Finally, in the case of spatial multiplexing, the space-bandwidth product of the reconstructed phase is limited by the division of the sensor into smaller non-overlapping segments. Here, we demonstrate that non-rigid sample motion occurring between the frames of a multi-shot DPC acquisition can be estimated and corrected. Techniques for rigid and non-rigid motion estimation and correction have been comprehensively applied in other fields (e.g. magnetic resonance imaging [24,25], multi-frame image enhancement [26], remote sensing [27], computer vision [28,29]), but not in QPI microscopy. It is not straightforward to apply these existing methods to DPC because they assume that the spatial frequency content of any two images being registered is similar [30]. This assumption is violated when estimating the motion between DPC measurements, since each coded-illumination measurement has a unique spatial frequency contrast of the sample's optical phase. Thus, estimation of motion between raw DPC measurements will fail when using traditional registration techniques. In order to perform motion estimation for DPC images, we introduce a new method, termed motion-resolved DPC (mrDPC), that uses an additional simultaneously-acquired color-multiplexed measurement with a constant coded-illumination pattern (Fig. 1(c)). This navigator measurement uses one color channel of the source LEDs to display a constant illumination pattern (a half-circle), thus maintaining constant spatial frequency contrast. A color camera then separates the navigator and DPC measurements (without any assumptions regarding the dispersion or color of the sample). Non-rigid motion can then be estimated from the navigator measurements and corrected in the DPC measurements prior to phase retrieval. In this way, quantitative phase images can be recovered for each time point of the captured data, resulting in temporal resolution equivalent to single-shot methods. We demonstrate proof-of-principle experimental results in which blurring due to live sample motion (Amoeba proteus and Caenorhabditis elegans) is reduced. Methods Our proposed method, mrDPC, captures each DPC measurement sequentially, while simultaneously capturing a color-multiplexed navigator, using an LED array microscope. We program the green color channel of the LED array with rotating half-circle patterns (for DPC) and the red color channel with a constant half-circle pattern (for the navigator), as illustrated in Fig. 2(a). The signal is measured on a color camera and separated via demosaicking with a precalibrated spectral sensitivity matrix (see Appendix: color multiplexing) into DPC measurements and navigator measurements (Fig. 2(a)).
To achieve motion correction between the four DPC measurements, we need to register each image to the others. Because the different DPC illumination patterns result in different contrast, they cannot be directly registered to each other. The navigator measurements circumvent this problem; they can be registered to each other and the resulting motion estimates can then be applied to the DPC measurements. Specifically, we estimate the motion between three pairs of measurements (between t 0 and t 2 , t 1 and t 2 , and t 3 and t 2 ) as outlined in Sec. 2.1. The reference time point can be any of the four measurement time points; we chose T = t 2 . The motion estimates are plotted as vector fields in Fig. 2(b), where the arrows' magnitudes correspond to the amount of the sample's local displacement and the arrows' orientations to the direction of the sample's local displacement. The three motion estimates are applied to the raw DPC measurements at times t 0 , t 1 , t 3 , respectively, to register them to the reference measurement. The registration is performed by resampling with linear interpolation. Using the physics model in Sec. 2.2, we then linearly deconvolve the motion-corrected DPC measurements (Eq. 7) to recover the sample's absorption and optical phase (Fig. 2(c)). Motion estimation The task of removing motion artifacts can be formulated as a blind deconvolution problem [31,32] in which the unblurred image and the blur kernel are jointly estimated; however, this does not account for non-rigid motion. In this work, we use our navigator measurements to correct for the sample's non-rigid motion via image registration, enabling a wider array of biological applications. To model non-rigid sample motion, our proposed method estimates a deformable mapping between pairs of images, for which many algorithms exist [28,29,[33][34][35]]. We chose the Symmetric Normalization (SyN) method [33,34] for its state-of-the-art performance [36] and open-source availability [37]. The method is called symmetric because it is commutative with respect to the ordering of the two input images and therefore does not over-fit the deformable mapping estimate to either image. This is particularly important to us, so that we can arbitrarily choose the reference time point without biasing our results. The SyN algorithm solves an optimization problem (Eq. 1) to estimate the deformable mapping between two images, I 0 (r) and I 1 (r), such that a similarity metric, S(I 0 , I 1 ), is maximized and the deformable mapping is spatially smooth. The deformable mapping, g(r, t), is a function of space and time, where r denotes 2D spatial coordinates and t ∈ [0, 1] denotes a dimensionless time coordinate. At time t = 0, g(r, 0) maps I 0 (r) to itself, while at time t = 1, g(r, 1) maps I 0 (r) to I 1 (r). The SyN method achieves symmetry by jointly estimating a forward deformable mapping, g 0 (r, t), between I 0 and I 1 and a backward deformable mapping, g 1 (r, t), between I 1 and I 0 . Mathematically, the algorithm can be written as: max over g 0 (r, t), g 1 (r, t) of S(I 0 (g 0 (r, 0.5)), I 1 (g 1 (r, 0.5))) − R(v 0 (r, t)) − R(v 1 (r, t)) (1), subject to dg i (r, t)/dt = v i (g i (r, t), t), g i (r, 0) = r, for i ∈ {0, 1} (2), where our similarity metric is the normalized cross-correlation, defined as S(I a , I b ) = ⟨I a , I b ⟩ / (‖I a ‖ ‖I b ‖). The mappings' spatial smoothness is achieved by penalizing the term R(v(r, t)) = ∫ from t = 0 to 0.5 of ‖Lv(r, t)‖^2 dt for each map. Here, L = ∇^2 + I is the linear differential operator, where ∇ is the first-order difference, I is the identity operator and v(r, t) is the velocity field corresponding to g(r, t).
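The navigator-based registration and its application to the DPC measurements can be sketched with Dipy's SyN implementation as follows (a minimal illustration; the array names `nav`/`dpc` and the multiscale iteration schedule are our own assumptions, not the authors' code):

```python
from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import CCMetric

# nav: (4, H, W) navigator images at t0..t3; dpc: (4, H, W) DPC images.
# Register each navigator to the reference time point t2, then apply
# the resulting deformable mapping to the corresponding DPC image.
metric = CCMetric(2)  # 2D cross-correlation similarity metric
sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[100, 50, 25])

ref = 2
dpc_registered = dpc.copy()
for t in (0, 1, 3):
    mapping = sdr.optimize(nav[ref], nav[t])       # static, moving
    dpc_registered[t] = mapping.transform(dpc[t])  # warp DPC measurement
```

Dipy's `transform` resamples with linear interpolation by default, matching the registration step described above.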
The correspondence between the velocity field v(r, t) and the mapping g(r, t) is enforced with the Lagrangian-Euler constraint [38] in Eq. 2. The SyN optimization is solved via gradient descent [33,34] and implemented in Dipy [37]. Phase retrieval After using the navigator measurements to estimate the motion, we correct for motion in the DPC measurements, which can then be used as input to a phase retrieval algorithm. Generally, the relationship between an object's 2D complex transmittance function and the measured intensity is non-linear, so recovery of phase requires non-linear optimization and an iterative solver. For in vitro biological samples, the "scatter-scatter" term is small, so we can make a weak object approximation, thus enabling phase recovery by simple linear deconvolution with weak object transfer functions (WOTFs) [3,4,7,39]. This linearization decouples the contributions of absorption and phase and allows us to express intensity in terms of linear contributions from background, absorption contrast and phase contrast. In the Fourier domain, ỹ(u) = B δ(u) + H_µ(u) µ̃(u) + H_φ(u) φ̃(u), where y is the intensity measurement and B is the DC term. Here, ~· denotes the Fourier transform and u are 2D spatial frequency coordinates. H_µ(u) is the WOTF for the sample's absorption, µ, and H_φ(u) is the WOTF for the sample's phase, φ. These terms are derived in [3], where P(u) is the complex pupil function, S(u) is the illumination source distribution, and ⋆ denotes cross-correlation. In traditional DPC, the illumination sources are four rotating half-circles with radius NA_obj oriented right, bottom, left, and top (see Fig. 1(b)). The motion-corrected measurements are deconvolved by solving the regularized least-squares problem: min over µ̃, φ̃ of Σ_i ‖ỹ_i − H_{µ,i} µ̃ − H_{φ,i} φ̃‖^2 + λ_µ ‖µ̃‖^2 + λ_φ ‖φ̃‖^2 (7). Here, λ_µ and λ_φ are regularization parameters, which are set to trade off data consistency against penalties on the low frequencies in φ and the high frequencies in µ. The necessity of the regularization comes from the phase WOTF's low sensitivity to low frequencies and the absorption WOTF's low sensitivity to high frequencies. The optimization in Eq. 7 can be reformulated as a least-squares problem, yielding a closed-form solution for the motion-resolved absorption, µ̂*, and motion-resolved quantitative phase images, φ̂* ( ¯· denotes the conjugate operator): µ̃* = [(Σ_i |H_{φ,i}|^2 + λ_φ)(Σ_i H̄_{µ,i} ỹ_i) − (Σ_i H̄_{µ,i} H_{φ,i})(Σ_i H̄_{φ,i} ỹ_i)] / Δ and φ̃* = [(Σ_i |H_{µ,i}|^2 + λ_µ)(Σ_i H̄_{φ,i} ỹ_i) − (Σ_i H̄_{φ,i} H_{µ,i})(Σ_i H̄_{µ,i} ỹ_i)] / Δ, with Δ = (Σ_i |H_{µ,i}|^2 + λ_µ)(Σ_i |H_{φ,i}|^2 + λ_φ) − |Σ_i H̄_{µ,i} H_{φ,i}|^2 . Results Validating our method's motion correction ability is challenging because of the difficulty of obtaining ground truth phase for comparison. To address this, we start by capturing full-framerate videos (50×, NA=0.55) of a slow-moving sample, Amoeba proteus (Carolina Biological Supply), and take the traditional DPC reconstruction as ground truth, since we expect negligible motion between frames. We then decimate the dataset in time by a factor of 8× to emulate faster motion, reconstruct with our method and traditional DPC, and compare results to the ground truth DPC result. As can be seen in Fig. 3(a), the bright water vacuoles and well-defined wall edges in the amoeba's nucleus and contractile vacuole (gold arrows in Fig. 3) appear blurred in the time-decimated traditional DPC reconstruction, but not in our method's reconstruction. We next demonstrate our method with an even faster sample, live C. elegans (12.5×, NA=0.25), which generates significant non-rigid motion between the raw measurements (Fig. 4(a)). From these measurements, we reconstruct and compare traditional DPC with our mrDPC method (Fig. 4(b)). Insets highlight mrDPC's correction of distortion around the head region (pink insets in Fig. 4(c)) and of blurring of internal features (blue insets in Fig. 4(c)). By capturing a continuous video at the full framerate of the sensor, we can reveal biological motion dynamics of the C. elegans.
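The closed form above amounts to solving a regularized 2×2 system at every spatial frequency, which can be sketched in a few lines of NumPy (a minimal illustration with our own function and argument names; the regularization values are placeholders):

```python
import numpy as np

def dpc_deconvolve(y, H_mu, H_phi, lam_mu=1e-1, lam_phi=5e-3):
    """Closed-form Tikhonov-regularized WOTF deconvolution.
    y: (n, H, W) motion-corrected DPC intensities (background removed);
    H_mu, H_phi: (n, H, W) absorption/phase WOTFs for each source pattern."""
    Y = np.fft.fft2(y, axes=(-2, -1))
    a = (np.abs(H_mu) ** 2).sum(0) + lam_mu     # sum_i |H_mu,i|^2 + lambda_mu
    d = (np.abs(H_phi) ** 2).sum(0) + lam_phi   # sum_i |H_phi,i|^2 + lambda_phi
    b = (np.conj(H_mu) * H_phi).sum(0)          # cross term sum_i conj(H_mu,i) H_phi,i
    rhs_mu = (np.conj(H_mu) * Y).sum(0)
    rhs_phi = (np.conj(H_phi) * Y).sum(0)
    det = a * d - np.abs(b) ** 2                # per-frequency 2x2 determinant
    mu = np.real(np.fft.ifft2((d * rhs_mu - b * rhs_phi) / det))
    phi = np.real(np.fft.ifft2((a * rhs_phi - np.conj(b) * rhs_mu) / det))
    return mu, phi
```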
Reconstructions are performed on a sliding window of measurements, such that each window contains a full set of four DPC measurements and each window is offset by one measurement. The motion-resolved absorption and quantitative phase video reconstructions are compared with traditional DPC in Supplementary Material Visualization 1. Finally, we compare our method to color-multiplexed DPC [18] (colorDPC), which is a single-shot QPI method (Fig. 5). ColorDPC is able to encode the information required for reconstruction into a single measurement using color multiplexing of the RGB LEDs and a color camera, under the assumption that the sample is non-dispersive (an assumption that is not required for mrDPC). Since colorDPC is a single-shot method, it will not suffer inter-frame motion blur, but it uses fewer measurements than mrDPC and traditional DPC for each reconstruction and thus will have lower reconstruction SNR. In addition, the three color-encoded measurements for colorDPC have different bandwidths, each defined by its respective encoding wavelength. As a result, the final reconstruction has orientation-varying high-frequency contrast, while traditional DPC and mrDPC do not (highlighted in Fig. 5).

Discussion

Our proposed method, mrDPC, can correct artifacts due to sample motion that is fast enough to cause motion blur across the four captured DPC images, but does not cause motion blur within each measurement. In the case of Amoeba proteus, the sample motion is slow enough over the duration of the multi-shot DPC acquisition that the sample can be assumed stationary and no motion correction is necessary. In the case of C. elegans, the stationarity assumption is violated, but each individual captured image is unblurred, so mrDPC helps to resolve the motion between frames. In the most general case, the sample is non-stationary within each single measurement and motion-induced blur is present. To address this, strobed illumination could be used to effectively shorten the capture time of each measurement and ensure stationarity. This strategy is analogous to our time-decimated validation in Sec. 3 (measurements acquired with delays between them). The design of the navigator pattern also affects performance. We make the assumption that the structural motion between the navigator measurements is the same motion that occurs between the DPC measurements. This should hold true, since each navigator measurement is acquired simultaneously with its corresponding DPC measurement and its illumination is similar to the DPC illumination pattern in terms of bandwidth. This ensures that the motion of the highest-resolution features in the DPC measurements will be captured in the navigator measurements. Further, the navigator measurement must have sufficient SNR and gradient information [40] to perform motion estimation. While SNR is not rigorously measured here, the power and number of LEDs in the navigator pattern are equal to those of the DPC pattern, so that, once spectrally unmixed, neither contributes much additional noise to the other's measurements. The work of Phillips et al. [18] shows that only three half-circle coded-illumination measurements are required to perform the quantitative DPC reconstruction. We could incorporate this by performing our method on only three rather than four measurements; however, performance might degrade similarly to Fig. 5(c). Our method is not limited to a fixed number of measurements; if more measurements give improved reconstruction results (e.g.
increased SNR), we can incorporate them simply by registering the additional measurements to the reference. One limitation of the present method is its use of the weak object approximation, which only applies to samples with relatively weak phase and absorption. Since the motion correction is independent of the phase retrieval method, non-linear methods can be used when the weak object approximation is violated. In that case, the linear reconstruction can serve as a good initialization for a non-linear phase retrieval optimization [4].

Conclusion

We present a computational method, motion-resolved DPC (mrDPC), that achieves reconstruction quality similar to that of traditional DPC's quantitative phase images, but corrects for the blurring caused by sample motion during the four-image acquisition. We validate our method's navigator-based non-rigid motion estimation and correction on live Amoeba proteus sample motion. Furthermore, we motion-resolve even faster live C. elegans and reveal motion dynamics at the frame rate of the camera with video reconstruction.

Appendix: color multiplexing

Color sensors with a traditional Bayer filter [41] spatially multiplex color filters to capture spatial-spectral information. However, these color filters are not perfectly selective to a single spectrum, but rather are sensitive to overlapping spectral distributions. This cross-talk makes it necessary to calibrate the pixels' sensitivity relative to the spectrum of the illumination source, so that the desired spectral response can be demixed from the acquired measurements. For our method, we estimate the spectral sensitivity of our sensor's red and green pixels to our illumination source's red (625 nm) and green (532 nm) LEDs. This is accomplished by spatially averaging each color channel's intensity response to red-only and green-only illumination [18]. The results form the entries of a matrix, C, which can be used by applying its pseudo-inverse, C†, to unmix future color-multiplexed measurements. Here, we use the green pixels (I_g1, I_g2) and green LEDs to encode the DPC signal, I_dpc, and we use the red pixels (I_r) and red LEDs to encode the navigator signal, I_nav.
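A minimal sketch of the calibration and unmixing step described in the appendix is shown below, assuming the two green Bayer channels have already been averaged into a single green channel image; the function names, dictionary keys and channel ordering are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def calibrate_crosstalk(resp_red_only, resp_green_only):
    """Build the 2x2 cross-talk matrix C from spatially averaged channel responses.

    resp_*: dicts with the mean intensity of the sensor's 'red' and 'green'
    channels under red-only and green-only illumination, respectively.
    Columns correspond to illumination color, rows to sensor channel.
    """
    return np.array([[resp_red_only["red"],   resp_green_only["red"]],
                     [resp_red_only["green"], resp_green_only["green"]]], dtype=float)

def unmix(C, I_red_channel, I_green_channel):
    """Recover the navigator (red-encoded) and DPC (green-encoded) signals
    from a color-multiplexed frame using the pseudo-inverse of C."""
    stacked = np.stack([I_red_channel.ravel(), I_green_channel.ravel()])  # 2 x Npix
    sources = np.linalg.pinv(C) @ stacked
    I_nav = sources[0].reshape(I_red_channel.shape)   # red LEDs -> navigator pattern
    I_dpc = sources[1].reshape(I_red_channel.shape)   # green LEDs -> DPC half-circle
    return I_nav, I_dpc
```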
4,181.8
2018-10-15T00:00:00.000
[ "Physics" ]
Complex-frequency waves: beat loss and win sensitivity

Recent experiments have demonstrated that synthesized complex-frequency waves (CFWs) can impart a virtual gain to molecular sensing systems, which can effectively restore information lost due to intrinsic molecular damping. The enhancement notably amplifies the signal of trace molecular vibrational fingerprints, thereby substantially improving the upper limit of sensitivity.

Superlensing based on metallic inclusions has been hindered from wide application for a couple of decades due to large metal loss 9,10. The CFW is quite promising for compensating this loss and empowering superlens applications; nevertheless, there are still challenges in experimentally realizing CFWs in the time domain. In a recent groundbreaking development 11, Guan et al. have successfully addressed these issues by synthesizing truncated CFWs across multiple frequencies. Their method involves treating the CFW as a coherent amalgamation of several real-frequency waves. By measuring the optical response at various real frequencies and adhering to the Lorentzian lineshape, they recombine responses from different frequencies. The process culminates in the numerical synthesis of the optical response under complex-frequency excitation. The innovative multi-frequency synthetic approach to truncated CFWs introduces virtual gain into superlens imaging 11 and surface plasmon polariton propagation 12, effectively overcoming the metal or plasmonic losses of these systems.

As for trace molecule detection, the intrinsic damping loss in molecular materials significantly diminishes the interaction between molecular vibrational modes and plasmons. Specifically, the intrinsic damping broadens the vibrational spectrum of trace molecules, consequently reducing the signal-to-noise ratio of their fingerprint signals. Such a scenario poses a challenge for the accurate detection of trace molecules. To counteract this issue, the application of virtual gain provides an ideal and feasible solution. The synthesis of CFWs has thus been identified as a promising approach to enhance the sensitivity of trace molecule sensing.

In a recently published paper in eLight, a collaborative team led by Prof. Shuang Zhang from the University of Hong Kong and Prof. Qing Dai from the National Center for Nanoscience and Technology, along with Prof. Na Liu from the University of Stuttgart, has unveiled a method for ultrahigh-sensitivity molecular sensing 13. The method is based on the application of synthesized complex-frequency excitation. The researchers constructed a complex-frequency excitation from multiple real-frequency responses with temporally truncated measurements. Here, the time truncation function is crucial in preventing energy divergence. Moreover, the sidebands resulting from time truncation are effectively eliminated through time averaging. The truncated electric field E_T(t₀) can be expanded over real frequencies as an integral of monochromatic components weighted by 1/(i(ω̃ − ω))·e^(−iωt₀) dω, where ω̃ denotes the complex frequency. Naturally, the response in a quasi-steady state, under truncated CFW excitation, can be coherently synthesized from discrete real-frequency responses across a sufficiently broad spectral range. The final expression for the response under complex-frequency excitation is F(ω̃) ≈ Σ_n F(ω_n)·[1/(i(ω̃ − ω_n))]·e^(i(ω̃ − ω_n)t₀)·Δω/2π, where F(ω_n) represents the response at the real frequency ω_n. Note that both the amplitude and phase information of F(ω_n) are essential. The phase component can be determined using the Kramers-Kronig relation 14.
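The discrete synthesis formula above maps directly onto a short numerical sum. The sketch below is a minimal illustration, assuming the complex-valued responses F(ω_n) (amplitude plus phase, the latter obtainable via the Kramers-Kronig relation) have already been measured on a uniform real-frequency grid; the function and parameter names are illustrative, not taken from the source.

```python
import numpy as np

def synthesize_complex_frequency_response(omega_n, F_n, omega_c, t0):
    """Coherently synthesize the quasi-steady-state response at a complex
    frequency omega_c from responses F_n measured at real frequencies omega_n.

    omega_n : 1D array of real measurement frequencies
    F_n     : complex responses at those frequencies (amplitude and phase)
    omega_c : complex frequency (the imaginary part acts as virtual gain)
    t0      : truncation time of the synthesized waveform
    """
    omega_n = np.asarray(omega_n, dtype=float)
    F_n = np.asarray(F_n, dtype=complex)
    d_omega = np.mean(np.diff(omega_n))  # assumes a uniform frequency grid
    kernel = np.exp(1j * (omega_c - omega_n) * t0) / (1j * (omega_c - omega_n))
    return np.sum(F_n * kernel) * d_omega / (2 * np.pi)
```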
Figure 1 displays a comparative illustration of the current challenges and advancements in molecular sensing using graphene plasmons (GP) 15,16. Figure 1a shows that while GP can enhance the interaction between light and molecules, the resulting signal in the extinction spectra of thin molecular layers remains notably weak. The phenomenon can be understood in terms of coupled harmonic oscillators 17. Plasmon-phonon coupling generates two new hybrid modes, whose splitting distance depends on their coupling strength. At low concentrations, the intrinsic damping leads to a notably weak coupling strength between the plasmon and the phonon, and the linewidth of the hybrid modes exceeds the splitting distance. This results in a substantial overlap between the two hybrid mode peaks, thereby obscuring subtle features in the extinction spectra. CFWs can overcome the intrinsic damping loss, effectively restoring the tiny molecular responses. As demonstrated in Fig. 1b through numerical calculation, the application of synthesized CFWs significantly amplifies the initially weak molecular vibrational response, showcasing a remarkable enhancement in detection sensitivity.

The work demonstrates the remarkable capability of synthesized CFWs to significantly enhance molecular characteristic signals, thereby raising the sensitivity ceiling of various sensors across diverse experimental contexts. This includes scenarios such as detecting deoxynivalenol molecules without plasmonic enhancement, as well as measuring silk protein molecules and bovine serum albumin protein solutions using graphene-based plasmonic sensors. The scalability and versatility of the synthesized CFW methodology hold immense promise for advancing the study of light-matter interactions. The breakthrough has the potential to unlock a wide array of applications in fields ranging from bio-detection and optical spectroscopy to biomedicine and pharmaceutical science, particularly within the realm of terahertz time-domain spectroscopy.

Fig. 1 Illustration of damping compensation for sensing enhancement through synthesized CFW. a The extinction spectrum of the molecular layer enhanced by GP at real frequency. b The extinction spectrum of the molecular layer enhanced by GP at CFW
1,113
2024-02-01T00:00:00.000
[ "Chemistry", "Physics" ]
The initial evaluation of performance of hard anti-wear coatings deposited on metallic substrates: thickness, mechanical properties and adhesion measurements – a brief review

Abstract

The demand for metallic materials of exceptional tribological performance is growing rapidly due to the constant development of the automotive, tool and implant industries. As great attention is nowadays being paid to surface-modified materials, there is a need to establish reliable methods for the evaluation of coated metallic alloys. In this paper, methods for the measurement of thickness, adhesion and mechanical properties of anti-wear coatings are considered. The importance of tailoring the mechanical properties of the coating with regard to its expected wear performance is also discussed. According to numerous findings, it is reasonable to pay attention to the hardness and Young's modulus of the developed coating, as well as to the mismatch between its Young's modulus and the elastic modulus of the substrate material. The information provided in this paper has been thoroughly illustrated and enriched with our own experience in the field.

Introduction

When a new coating is being developed, the deposition process and the prospective need for its optimization should be considered. Introducing to the market films which have not been thoroughly tested can carry not only many extra costs but also negative business and social consequences. Hence, newly introduced coatings are tested much more rigorously than surface modification methods that were approved for mass production many years ago. Therefore, an extremely important element of the research and development (R&D) process of a coating is to establish its basic properties, which are crucial from the point of view of its intended application. For example, in tribology-related applications, the chemical composition and reactivity, wear performance and frictional properties of the coating are among the key factors while the film remains under development, whereas its adhesion, thickness and the repeatability of the deposition process are critical once the coating is introduced to the market and its properties have already been optimized. Therefore, the number and nature of the planned experiments depend strongly on the scope and type of planned applications of the newly introduced coating, as well as on the current stage of its development. Some anti-wear coatings are characterized by high versatility and can be used in seemingly unrelated branches of industry, e.g. the manufacturing of cutting tools or synovial joint endoprostheses. In such a case, it is important to develop the most comprehensive set of tests that will allow one to predict the performance of the coating under various operating conditions, and to decide whether it is possible to use the film under the planned circumstances. Moreover, the coating development process can be accelerated with the use of finite element method (FEM) simulations and mathematical models that are used to predict the mechanical properties of new thin films and to reduce the time necessary for prototyping [1]. These models usually require knowledge of the basic properties of the coating, e.g. its Young's modulus, Poisson's ratio, fracture toughness, hardness, thickness and adhesion to the substrate [1]. In some cases, further data processing is required, which, in combination with the analysis of the findings obtained during experimental work, makes it possible to determine the basic properties of the coatings with sufficient accuracy.
These factors are the cause of the growing need for modern coating characterization methods. The number of methods for thin film assessment is growing rapidly. Many specialized measurement methods are already available, yet a great number of new and highly sophisticated solutions is still to come. In this paper, methods for the measurement of thickness, adhesion and mechanical properties of anti-wear coatings are considered. The importance of designing the mechanical properties of the coating with regard to its expected wear performance is also discussed.

Thickness measurements

In terms of tribological properties, there is a complex relationship between the thickness of the coating and certain of its properties, e.g. internal stresses, adhesion to the substrate and hardness [1]. Although measuring the thickness of a coating may seem a simple matter, it should be noted that thickness measurement can be implemented in a variety of ways: by geometric means, that is, by measuring the distance between the surfaces of interphasal separation; by mass; or by a variety of indirect methods using the physical properties of the coating, e.g. electrical conduction or optical absorption [2,3]. The general guidelines for thickness measurement of thin films are included in the standard ISO 2064:2004 [4]. The thickness of a coating determined by geometric methods is expressed in meters and their multiples [1] and does not consider the composition, density or microstructure of the coating [2]. An advantage of direct measurements is that they are made on cross-sections of surface-modified materials observed using widely available microscopic techniques [1]. As a rule, geometric measurements of coating thickness provide satisfactory results, but if the cross-section of the deposited layer is deformed or damaged as a result of the metallographic sample preparation process, it becomes problematic to determine the thickness properly [1]. Moreover, it is sometimes difficult to establish the interphase boundary between the examined surfaces. Although there is a vast variety of methods available for measuring the thickness of thin anti-wear coatings (Figure 1), the most popular techniques are microscopic measurements made on cross-sections of the specimens, profilometric methods and the calotest method [1]. In the profilometric method, the thickness of the coating is determined from the difference in height between the bare surface of the sample and the coated area. Therefore, the deposition process has to be designed in such a way that an artificial step between the coating and the substrate is obtained [1,2]. The best results are achieved when the difference in height between the outer surface of the coating and the metallic substrate is greater than 1×10⁻⁸ m [2]. The accuracy of the profilometric method depends on the roughness of the examined surface [1,2] and its plane-parallelism [2]. In the calotest method, a rotating ball of known diameter is used to determine the thickness of the coating [1,5]. As a result of the frictional forces generated by the rotating movement of the sphere, the coating and a substantial part of the underlying substrate are removed from the surface of the specimen. A spherical cavity is obtained in the test material, and the wear traces formed on the surface of the sample have the shape of concentric circles (Figure 2). To accelerate wear of the coating, abrasive agents, e.g. diamond paste, can be used.
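As a worked illustration of the calotest evaluation described above, the sketch below uses the geometric relation commonly applied to flat samples, t ≈ (D² − d²)/(8·R), where D and d are the outer and inner crater diameters and R is the ball radius; this relation, the symbol names and the numerical values are assumptions introduced here for illustration, not taken from the source.

```python
def calotest_thickness(D_outer, d_inner, ball_radius):
    """Coating thickness from calotest crater diameters on a flat sample.

    Uses the thin-coating approximation t ≈ (D² − d²) / (8·R), valid when the
    ball radius R is much larger than the coating thickness.
    All lengths must share the same unit (e.g. millimetres).
    """
    if D_outer <= d_inner:
        raise ValueError("Outer crater diameter must exceed the inner diameter.")
    return (D_outer**2 - d_inner**2) / (8.0 * ball_radius)

# Illustrative values only: a 30 mm diameter ball (radius 15 mm) and crater
# diameters of 0.80 mm and 0.60 mm give a coating thickness of about 2.3 µm.
t_mm = calotest_thickness(0.80, 0.60, ball_radius=15.0)
print(f"coating thickness ≈ {t_mm * 1000:.2f} µm")
```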
In order to determine the thickness of the coating, the diameters of the obtained traces have to be measured using a metallographic microscope. When the thickness of a monolayer is measured, an etchant can be used to visually reveal the coating-substrate interface [5]. The accuracy of the calotest method is estimated at ±1×10⁻⁷ m [5]. An advantage of the calotest method is that it can be adapted to both mono- and multilayer systems. If the thin film consists of multiple interlayers, the use of the calotest is limited by the ability to distinguish the individual interlayers and their thicknesses, which is what allows the measurements to be performed by microscopic methods [1]. An undoubted advantage of the calotest is the ability to measure the thickness of the coating in different areas of the sample and therefore to determine the uniformity of the coating. However, in many cases geometric thickness measurements, which do not even consider the porosity of the films, are insufficient. In such cases, the mass thickness of the coating can be measured [2], which is usually expressed in grams per unit area, e.g. per square meter. If necessary, it is possible to convert the mass thickness to geometric thickness, although knowledge of the density of the tested coating is required [2]. However, techniques applied in mass thickness measurements are sensitive to the presence of pores or discontinuities in the microstructure [2]. Methods for measuring thickness that are based on the physical properties of the coating are, in turn, limited by the accuracy with which the relationship between the coating thickness and the analysed physical property is determined [1]. In the case of electroconductive layers deposited onto insulating substrates, methods based on electrical resistivity are used [1]. The simplest method to measure the resistivity of the coating is the two-point method, in which the resistance of the deposited layer is determined from Ohm's law as the quotient of voltage and current [7]. To determine the thickness of the coating, other methods based on, e.g., electromagnetic, optical or acoustic properties can also be used [1]. In the case of thermo-chemical treatments, e.g. nitriding or carburizing, a common method of determining the thickness of the modified layer is to measure the thickness of the hardened layer of the specimen - the case depth [8]. To investigate the effective layer thickness, a series of hardness measurements in a direction perpendicular to the coating is made on the cross-section of the sample (Figure 3). Based on the graphical representation of the change in hardness, it is possible to determine the thickness of the layer that has been hardened by the applied surface modification process [9]. Typically, the case depth is the depth up to which the hardness of the modified layer is greater than the hardness of the substrate by 33% or more [10]. A detailed methodology for measuring the effective thickness of a modified layer is given in the ISO 2639:2005 standard [9]. In the case of anti-wear coatings, mainly microscopic techniques are used to determine the thickness of the film [11]. Though microscopic evaluation of the coating thickness is characterized by high accuracy, one should remember that the quality of the measurement depends not only on the quality of the sample itself, its preparation method or the resolution of the microscope, but also on the substrate material or even the coating that is under evaluation. An example of a coating thickness measurement taken by microscopic methods is given in Figure 4.
The TiN coating presented in Figure 4 was deposited on Ti6Al4V ELI alloy by PVD methods. As presented in the picture, although the scanning electron microscope (SEM) provides greater magnification of the sample than the measuring confocal laser scanning microscope (CLSM), the picture quality obtained by SEM is poor for the purpose of coating thickness measurement. The hardware capabilities of the tabletop SEM used to take the presented picture were insufficient for establishing a sharp image of the interfacial boundary between the substrate and the film. As shown in the given examples, to obtain credible results from thickness measurements of coatings and modified layers, it is important to choose the measuring technique properly. Since the commonly used methods can be sensitive to the density, chemical composition, microstructure or crystallographic orientation of the coating, thickness measurements of the same specimen made by different techniques may yield significantly different results. For this reason, the optimal solution is to adopt the research methodology that is most popular for the given coating. This is of great practical importance, because it allows researchers to compare their results with findings reported by other authors.

Nanoindentation and its importance in wear control of hard coatings

Since its introduction in 1992 [12], the Oliver and Pharr method for hardness and elastic modulus testing of engineering materials has attracted many scientists and has become one of the most popular techniques used to establish the mechanical properties of materials at the microscale. Although this indentation method was initially designed for analysing bulk materials [13], the availability of high-resolution measuring equipment, as well as the ability to determine the mechanical properties of the material directly from the load-displacement curve, without the need to image the impression produced by the applied load, have contributed to the growing popularity of this technique in measuring the mechanical properties of materials at the nano- and microscale. Currently, the Oliver-Pharr method is used to measure the basic mechanical properties of coatings that are at least 1 µm thick, but for comparison purposes it can also be used for coatings with thicknesses down to a few nanometres [14]. The main objective of nanoindentation measurements is to determine the Young's modulus and hardness of the examined material based on the experimentally determined loading force and the corresponding displacement of the indenter. A schematic representation of the data collected during a nanoindentation test performed with a Berkovich indenter, the load-displacement curve, is given in Figure 5. In the classic Oliver-Pharr method, in which a triangle-based pyramidal diamond tip (the Berkovich indenter [14]) is used, the applied force and the displacement of the indenter are recorded from zero to a certain maximum; after reaching the maximum load value, the load is reversed and reduced at a specific pace from the maximum to zero [15]. In contrast to conventional hardness measurement techniques, the obtained impression is often not big enough for its dimensions to be measured using optical microscopy. Therefore, the projected contact area is determined based on the indenter displacement.
Given the vertical displacement of the indenter and the tip geometry, it is possible to indirectly determine the contact area A that is obtained when applying the maximum normal load Fmax [14]: A = f(hc), where hc is the depth along which contact is made between the indenter and the specimen, as calculated from [14]: hc = hmax − hs. Here hmax is the maximum displacement of the indenter under the load Fmax, while hs is the elastic deflection of the material in the contact periphery (sink-in) [16]. The hs is given by [14]: hs = ϵ·Fmax/S, where ϵ is a constant that depends on the geometry of the indenter and S is the slope of the elastic unloading curve (dF/dh). For the Berkovich indenter, the ϵ value always equals 0.75 [12,14,15]. The choice of this value is justified by the impact of the elastic substrate on the effective contact area of the interface between the indenter and the examined material [17]. Knowing the value of the contact area A, the hardness of the analysed material can be given as [14]: H = Fmax/A. Note that the hardness of the analysed material is calculated based on the contact area A obtained under the maximum load Fmax. This is of high importance especially in the case of elastic materials, because there can be a statistically significant difference between their hardness determined by the nanoindentation method, using the load-displacement curve, and the hardness established using traditional methods based on measuring the dimensions of the residual impressions. For elastic materials, the elastic recovery during unloading can significantly affect the area of the residual hardness impression. Nevertheless, it is claimed that the effect of elastic recovery on the findings should be taken into account only for materials characterized by extremely small E/H values [14]. In practice, when the Berkovich indenter is used, the projected contact area A is given by [15]: A = 3·√3·hc²·tan²θ. After simplifying this expression and taking into account the tip half-angle θ = 65.27°, A can be calculated from [15]: A = 24.5·hc², while the hardness H can be expressed as [15]: H = Fmax/(24.5·hc²). The original Berkovich indenter, with a face angle of 65.03°, was designed in such a way that in conventional hardness measurements its contact-area-to-impression-depth ratio was the same as in Vickers hardness measurements. However, in nanohardness measurements, the measured hardness is linked to the stress applied under Fmax. Due to that, the measured hardness value cannot be directly converted to the Vickers scale. Therefore, for nanohardness measurement purposes, the face angle of the indenter was slightly modified and the angle θ = 65.27° was assumed as the correct face angle of the Berkovich indenter [15]. Assuming that during unloading of the material only elastic deformation occurs, Sneddon [18] developed a relationship between the displacement h and the load F in the region of contact between an axisymmetrical measuring tip and the material being tested. Continuing his work, Pharr et al. [19] as well as Oliver and Pharr [14] proposed the following relationship between the reduced Young's modulus E and S, defined as the tangent to the upper part of the unloading curve: S = 2·β·E·√(A/π), where β is a dimensionless geometric correction factor. Its purpose is to take into account the stiffness deviation that results from the use of pyramid-shaped indenters. In the original hardness and Young's modulus measurement technique, the value of the β correction factor equalled 1.
Nevertheless, multiple papers published by other authors have shown that, depending on the source, the actual β value that should be used during measurements varies between 1.02 and 1.08 [15]. Nowadays, the most commonly chosen value of the β correction factor is 1.034, as proposed by King [20]. The reduced Young's modulus E can be expressed by the following relationship [14]: 1/E = (1 − νs²)/Es + (1 − νi²)/Ei. This formula takes into account the fact that elastic deformations take place not only in the analysed material sample, which is characterized by its Young's modulus Es and Poisson's ratio νs, but also in the diamond tip, with elastic modulus Ei = 1140 GPa [14,21] and Poisson's ratio νi = 0.07 [14,21]. Although this relationship was developed for purely elastically deforming materials [19], further studies have shown that it can be successfully applied to elastic-plastic materials, while slight deviations from the axisymmetry of the indenter have no significant effect on the obtained findings [14]. According to ISO 14577:2015 [22], the evaluation of mechanical properties can be considered to be performed at the nanoscale if the vertical displacement of the tip during dwelling does not exceed 2×10⁻⁷ m (0.2 µm). Moreover, it is assumed that, in order to reduce the measurement uncertainty arising from surface roughness to below 5%, the displacement of the indenter h should be at least 20 times the Ra value. However, the basic difficulty in nanoindentation measurements results from the fact that the maximum displacement of the tip, hmax, should not exceed 10% of the examined coating's thickness [15]. It is believed that the greater the vertical displacement of the diamond tip in the coating, the greater the influence of the substrate on the obtained findings [13]. As a result, the measurement can be distorted by the substrate's influence. Nevertheless, as modern protective coatings are becoming extra-thin, the 10% rule can be applied only in a limited number of cases. Due to that, numerous modifications of the classic Oliver-Pharr method have been developed [23]. However, Ni and Cheng [24] as well as Solis et al. [25] have shown that the 10% rule can also be applied when measuring the mechanical properties of hard coatings deposited on relatively soft substrates. On the other hand, FEM calculations have shown that for soft coatings deposited on hard substrates, the discussed safe value can be increased even up to 30% [15]. In general, it is assumed that at lower indentation depths an initial increase in hardness can be observed, which is associated with the elastic deformation of the indenter and its blunting [15]. After performing a sufficiently large number of measurements with increasing load values and plotting the hardness-depth graph, part of the plotted curve should show a plateau, which corresponds to the actual hardness of the examined coating [26]. A decrease or increase in hardness following the plateau indicates the influence of the substrate [15,25]. Therefore, the measured hardness of the coating is the average hardness taken from the plateau region. Though hardness is considered one of the main factors that determine the wear performance of most engineering materials, identification of the deformation mechanism of the coating under load, as well as determination of its Young's modulus, are crucial for establishing the anti-wear performance of a ceramic coating.
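The relations reproduced above can be combined into a short worked example. The sketch below follows the ideal-area Oliver-Pharr procedure with ϵ = 0.75, β = 1.034 and the diamond tip constants quoted in the text; the function name, the sample Poisson's ratio and the numerical inputs are illustrative assumptions, not data from the source.

```python
import numpy as np

def oliver_pharr(F_max, h_max, S, nu_s=0.25, beta=1.034,
                 E_i=1140e9, nu_i=0.07, eps=0.75):
    """Hardness and Young's modulus from a single Berkovich indentation.

    F_max : peak load (N)
    h_max : indenter displacement at peak load (m)
    S     : unloading stiffness dF/dh at peak load (N/m)
    Returns (H, E_s) in Pa, using the ideal-area function A = 24.5*hc**2.
    """
    h_c = h_max - eps * F_max / S                          # contact depth (sink-in corrected)
    A = 24.5 * h_c**2                                      # projected contact area, ideal Berkovich
    H = F_max / A                                          # hardness
    E_r = np.sqrt(np.pi) * S / (2.0 * beta * np.sqrt(A))   # reduced modulus from S = 2*beta*E_r*sqrt(A/pi)
    # 1/E_r = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i  ->  solve for the sample modulus E_s
    E_s = (1.0 - nu_s**2) / (1.0 / E_r - (1.0 - nu_i**2) / E_i)
    return H, E_s

# Illustrative numbers only: 10 mN peak load, 200 nm peak depth, 150 kN/m unloading stiffness.
H, E_s = oliver_pharr(F_max=10e-3, h_max=200e-9, S=1.5e5, nu_s=0.25)
print(f"H ≈ {H / 1e9:.1f} GPa, E_s ≈ {E_s / 1e9:.0f} GPa")
```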
If the hardness and elastic modulus of the film are known, it is possible to determine its elastic strain to failure, denoted as the hardness to elastic modulus ratio H/E [27]. The H/E quotient originated from the plasticity index Ψ, which was devised for static contacts to estimate the surface roughness at which no plastic deformation of the roughness asperities is present [28]. The plasticity index Ψ is given by [28]: Ψ = (E/H₁)·√(σ/R), where E is the reduced Young's modulus, H₁ is the hardness of the softer material, R is the typical radius of a surface asperity and σ is the standard deviation of the roughness asperity height distribution. It is concluded that when Ψ > 1, the surface asperities are plastically deformed [29], but when Ψ < 0.6, only elastic deformation of the asperities takes place. Inverting the E/H ratio gives H/E, where the greater the H/E ratio, the more elastic the material is in rough contacts [29]. When the considered coating is of high hardness, an H/E value of approx. 0.1 is an indicator of high wear performance of the modified surface [30]. The prognosis of coating wear resistance based on its H/E ratio can be justified by the fact that the tribological wear of materials is caused mainly by their plastic deformation [31]. Studies conducted by other authors show that the H/E ratio is proportional to the ratio of the elastic (reversible) work We to the total work Wtot, namely H/E ∼ We/Wtot [31]. As a consequence, if two materials are deformed, a greater fraction of the work is consumed by plastic deformation in the material with the smaller H/E [31]. What is more, the plastic deformation observed during nanoindentation tests, expressed as the ratio of hp to hmax, is related to the elastic strain to failure [32]. According to this relationship, the greater the plastic deformation of the material, the more severe the expected wear. Due to that, greater wear is observed if the coating has a low H/E ratio [31]. Another important parameter that enables the wear properties of surface-modified materials to be described accurately is the H³/E² ratio, sometimes also referred to as a plasticity index [25]. It is believed that in sliding contact, wear of materials is intensified by material deformation at the coating/substrate interface, while a high H³/E² value ensures satisfactory coating strength [30]. The load Py, at which plastic deformation begins in the elastic-plastic coating in tribological contact with a rigid ball, is highly dependent on the hardness H and the reduced Young's modulus E of the coating, Py ∝ r²·(H³/E²), where r is the radius of the sphere that is in contact with the flat surface. Due to that, the H³/E² ratio is an important criterion of the coating's resistance to plastic deformation in ball-on-flat or ball-on-disc contact. A high value of the H³/E² ratio is also associated with the coating's resistance to brittle fracture, as presented in the work by Musil and Jirout [33]. In general, in most engineering applications, the greater the H/E and H³/E² ratios of the coating, the better its wear performance should be [25,27]. A low Young's modulus is favourable especially if it is possible to adapt it to the elastic modulus of the underlying surface. A low Young's modulus promotes a uniform stress distribution at the coating-substrate interface, which helps to reduce wear of the considered material [34].
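The two ratios discussed above follow directly from the nanoindentation results. The short sketch below compares two hypothetical coatings with values chosen purely for illustration (they are not from the source); despite its lower hardness, the lower-modulus coating yields the higher ratios, illustrating why a low Young's modulus can be favourable.

```python
def wear_indicators(H_GPa, E_GPa):
    """Return the H/E and H^3/E^2 ratios used as coarse wear-performance indicators."""
    return H_GPa / E_GPa, H_GPa**3 / E_GPa**2

# Illustrative comparison of two hypothetical coatings (values are not from the source):
for name, H, E in [("coating A", 20.0, 400.0), ("coating B", 15.0, 180.0)]:
    h_over_e, h3_over_e2 = wear_indicators(H, E)
    print(f"{name}: H/E = {h_over_e:.3f}, H^3/E^2 = {h3_over_e2:.3f} GPa")
```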
However, adjusting the stiffness of the components is particularly difficult when depositing hard ceramic coatings, which are typically characterized by a Young's modulus three to four times greater than that of the metallic substrate to be protected, e.g. steel [27]. Due to that, some authors claim that the Ecoating/Esubstrate ratio represents the anti-wear capacity of surface-modified materials better than the aforementioned ratios. According to Musil and Jirout [33], the Ecoating/Esubstrate ratio is associated with the susceptibility of hard and superhard coatings to cracking. As stated by the authors, if the ratio of the hard coating's Young's modulus to the soft substrate's elastic modulus does not exceed 1.3, the normal load causes not only plastic deformation of the coating but also its brittle cracking [33]. According to the study by Huang et al. [35] on TiN- and TiAlN-coated copper, high-speed steel and sintered carbides, the greater the mismatch between the coating and substrate Young's moduli, i.e. the more the Ecoating/Esubstrate value differs from 1, the greater the measured wear. What is more, the authors claim that the Ecoating/Esubstrate value also affects the coefficient of friction of the tribological pair observed during wear tests. The WC-Co substrate, with its high load-bearing capacity, showed the greatest wear resistance among all analysed samples for both TiN and TiAlN coatings [35]. The influence of the Ecoating/Esubstrate ratio on the wear resistance of coated materials was also considered in the works by Avelar-Batista et al. [36] and Łępicka et al. [37]. According to Avelar-Batista et al. [36], among the (Ti,Al)N, TiN and CrN coatings deposited on AISI H13 steel, the greatest wear resistance was achieved by the (Ti,Al)N coating, which was characterized by the lowest Young's modulus mismatch with the substrate (Ecoating/Esubstrate = 1.6). Though there are many factors associated with the mechanical properties of the substrate and the coating that affect the wear resistance of coated materials, one should always remember that the wear performance of the material depends not only on the values that can be measured in nanoindentation tests. If wear of the considered material is affected by chemistry-related factors, e.g. interactions between the tested material and air, segregation of wear products or lubrication, the discussed ratios (H/E, H³/E² and Ecoating/Esubstrate) should not be considered the only determinants of the wear properties of the coating [37,38]. Some coatings, e.g. DLC, also undergo wear-induced phase transformations [25,37]. Due to that, caution is recommended when interpreting wear test results based only on the ratios described in this paper.

Adhesion of the anti-wear coatings to the metallic substrates

According to Holmberg and Matthews [10], adhesion is the ability of the coating to maintain contact with the substrate under specified operating conditions. Therefore, adhesion is one of the most important attributes of surface-modified systems, as it determines the durability and reliability of the finished products. When the coating chips or delaminates, it is unable to fulfil its protective function. As a result, the substrate becomes exposed to aggressive external factors, e.g. corrosive gases and liquids, abrasive particles or elevated operating temperatures [39].
Long-term exposure to harmful agents can result in the loss of functional properties of the material, significantly limit the service life of the finished product and, in extreme cases, even cause its premature failure. For this reason, there is a justified need for reliable methods of assessing the adhesion of anti-wear coatings to metallic substrates. There are four main mechanisms of coating adhesion [39]: (a) interfacial adhesion, which is oriented around a known and well-defined interfacial plane [40]; (b) diffusion adhesion, in which the coating and substrate atoms are mixed, forming a diffusion zone of a certain thickness [40,41]; (c) adhesion of intermediate layers, differing significantly in chemical and mechanical properties, which are used to separate the functional coating from the substrate and to enhance the adhesion of the anti-wear layer; and (d) mechanical adhesion, which results from the surface topography of the substrate and its roughness [40,42]. Furthermore, the adhesion of an anti-wear coating to a metallic substrate is influenced by many other factors, e.g. the applied surface modification method, the thickness of the coating [43,44], machining and other methods of preparing the substrate's surface prior to deposition [45], interfacial defects and other imperfections [39], the Young's modulus mismatch between the substrate and the coating [46], as well as the thermal expansion coefficients of the substrate and the coating [47]. There are a number of both destructive [1] and non-destructive methods for measuring the adhesion of hard coatings to metallic substrates [48]. In practice, however, methods based on mechanical property measurements are the most commonly used [48]. The most common methods for determining the adhesion of anti-wear coatings are relatively simple to implement, while the level of approximation of the obtained results is sufficient for practical applications. One of the simplest methods of adhesion evaluation is the pull-off method [1]. With the use of an epoxy glue, metallic rods with a diameter of 25 mm are mounted to the surface of the coated materials [40]. When the glue is cured, a normal or shearing force is applied to the outer tip of the rod by means of a mechanical loading system [10]. The value of the adhesion is the load which causes delamination of the coating from the substrate [10]. A significant difficulty in the proper application of this type of test is the maximum tensile strength of the adhesive used; for high-performance adhesives, the typical tensile strength of the obtained bond does not exceed 100 MPa [1]. Since the original test was designed for oxide layers and thermally sprayed coatings, which are characterized by a relatively low adhesion strength to the substrate, the tensile strength of the adhesives used was sufficient [1]. However, in the case of anti-wear coatings obtained by CVD and PVD methods, the adhesion strength of the coating to the substrate is usually higher than the tensile strength of organic adhesives [1]. Therefore, in specific cases it is acceptable to force initiation of the coating cracking process even before starting the pull-off test [1]. Since the pull-off method is characterized by a high dispersion of results, its validity in the testing of hard coatings is considered disputable [49]. Another popular method for measuring the adhesion of hard coatings to metallic substrates, the four-point bending test [1,38,39], was introduced in 1989 by Charalambides et al. [50].
The four-point bending test is used in evaluating the fracture toughness of coatings used in medical applications [40]. In this method (Figure 6), the sample is oriented in such a way that the coating cracks due to the applied load and the resulting tensile stress [1]. Usually, cracking of the coating throughout its thickness, in a direction perpendicular to the transferred stresses, is observed [1]. In order to determine the value of the breaking stress, the test can be repeated several times using stepwise increasing loads, followed by optical or electron microscopic observations of the surface [1]. However, this is a time-consuming method affected by error caused by the necessity of mounting the sample in the test apparatus each time and generating increasing tensile forces in order to observe cracking of the coating [1]. Due to that, some systems are equipped not only with an optical microscope, which allows real-time sample observation, but also with acoustic emission sensors (Figure 6). By detecting the vibrations and sounds generated by coating cracking, the acoustic emission sensors are able to detect the stress at which failure of the coating starts [1]. Simultaneous use of the two types of sensors, optical and acoustic emission, makes it possible to exclude acoustic emission noise originating from, for example, the deforming supports [1]. The tensile stress at which the first cracks appear is the resistance of the coating to cracking, which corresponds to a sudden increase in the intensity of the acoustic emission signal [1]. As the tensile stress increases, the coating starts to flake intensively; this is the measure of the adhesion strength of the coating to the substrate [1]. The use of mathematical methods also allows determination of the stress at which adhesive or cohesive fracture of the coating starts [1]. The four-point bending test is intended mainly for brittle coatings, which are characterized by a fracture toughness significantly lower than the adhesion of the coating to the substrate [1]. In the case of ductile coatings, plastic deformation can be observed instead of cracking, although it is also possible to observe delamination or peeling of the coating [1]. On the other hand, coatings of exceptional adhesion to the substrate will be characterized by cracking almost perfectly perpendicular to the stress propagation direction, as well as by segmentation of the coating between cracks, but no cracking at the interphasal boundary will be observed [1,38]. For this reason, numerous modifications of the classic Charalambides et al. [50] method have been proposed, for example using a stiffener on the top plane of the hard coating [39], sandwich specimens [39,51], movable platens [39] or introducing notches in the coating prior to the test [39,40]. However, the most popular method for measuring the adhesion of anti-wear coatings [40] is undoubtedly the scratch test, which was introduced in the 1970s for assessing the properties of PVD and CVD coatings [1]. In a typical scratch test, a Rockwell diamond indenter with a 0.2 mm tip radius and a 120° cone angle [52] is pulled over the surface of the coated substrate with an increasing normal load [1,51]. As a result, the stylus makes a scratch on the tested material (Figure 7). Deformation of the surface is observed [1] due to the superposition of stresses generated at the interphasal boundary between the metallic substrate and the ceramic coating, as well as in the coating itself.
As a result of plastic and elastic deformation of the material, various deformation modes of the coating are observed [40]. A typical scratch test is continued until total failure of the coating and exposure of the substrate are observed [10,39,40]. On the basis of acoustic emission signals, changes in the friction force or microscopic observations, the so-called critical loads can be determined [40]; these can be attributed, for example, to the appearance of the first cracks in the coating, its chipping, or exposure of the substrate [10]. Acoustic emission signals are used mainly to detect the first signs of load-induced damage, e.g. cracking; tangential force measurements are used to determine critical loads for ductile coatings; and microscopic observations are performed to identify deformation mechanisms for all types of coatings [39]. Critical loads are usually denoted as Lcx, where x is a label index, e.g. Lc1 for deformation of a cohesive nature. As mentioned above, as a result of increasing load, elastic and plastic deformations appear on the sample surface. In the case of hard ceramic coatings, cracking takes place in several stages. During a typical scratch test, three main deformation mechanisms of the coating can be observed [53]: adhesive, cohesive or mixed cracking. Cohesive cracks typically appear as a result of the accumulation of tensile stresses in front of and behind the pulled indenter [54]. When a hard coating of relatively high Young's modulus is tested, the stresses generated in front of the moving indenter are of a compressive nature [55]. However, if the substrate is quite ductile, its plastic deformation in front of the diamond stylus takes place. As a result of the deformation of the substrate material, bending of the coating starts. In effect, cohesive, transverse semi-circular cracks appear in the coating. As those cracks usually form in front of the moving diamond tip, they facilitate further deformation of the coating and its buckling [56]. If the coating is strongly buckled, loss of its adhesion to the substrate is soon observed. Adhesive cracks usually arise when the compressive stresses generated in the coating due to scratching reach a certain critical value [53]. Local delamination and chipping of the coating are among the most important effects of deformation, because they are visible proof of the loss of coating adhesion to the substrate [53]. However, it should be emphasized that the interpretation of the results obtained in scratch tests is quite a difficult issue due to the possible coexistence of various deformation mechanisms [52]. In the literature it is often stressed that the findings obtained in scratch tests depend strongly on both the assumed test conditions and the internal parameters of the coating and the substrate. In many cases, the influence of many independent factors on the findings makes it impossible to directly compare results between examinations conducted by different research groups [55]. It should also be considered that the same coating deposited on different substrates can differ considerably in thickness [55]. Among other things, due to the complexity of the factors that affect the results of scratch tests, findings obtained by this technique are only an approximation of the coating adhesion and should be used only for comparison purposes, so that a coating of "good" or "satisfactory" adhesion to the substrate can be identified [57].
Schwarzer et al. [58] suggest that, due to the inability to predict the results of scratch tests based on a given load or a selected tip radius, it is reasonable to conduct further research into selecting scratch test parameters tailored to the intended applications of the coating. Although the results obtained in scratch tests are affected by many factors (Table 1) [55,56,58,60-62], the mechanical properties of the substrate and the coating are believed to have the greatest influence on the findings of tests conducted using a Rockwell indenter [56]. As shown in Figure 8, if soft coatings are deposited on soft substrates, only plastic deformation of the examined material is observed. Typically, there is no cracking or chipping of the coating. However, in the case of hard coatings applied on soft substrates, cracking of the coating in the direction perpendicular to the surface is often observed. If those cracks reach the coating-substrate interface, they may propagate further in this plane. A typical deformation pattern of hard anti-wear coatings that can be observed in scratch tests is shown in Figure 9. According to Holmberg et al. [59], when the diamond tip is pulled over the surface of the coated substrate with increasing normal load, the following types of damage are typically observed: (a) angular cracks, (b) parallel cracks, (c) transverse semi-circular cracks, (d) thin film chipping, (e) coating spalling and (f) coating break-through. Currently, there are multiple international standards that describe the methodology of scratch tests applied to various types of coatings, e.g. polymer films or paints [52]. However, the greatest practical significance is observed for tests conducted on hard ceramic coatings according to ISO 20502 and ASTM C1624-05 [52]. These standards are used, for example, when evaluating surface-modified materials for medical applications. Selected parameters of scratch tests conducted in the progressive loading mode (PLST, progressive loading scratch test) in accordance with the aforementioned standards are given in Table 2. As can be seen, these standards cover a vast variety of applications, as they make it possible to conduct research on coatings that differ significantly in thickness; for example, the ASTM standard can be used for coatings with a thickness of up to 30 µm [52]. Some of the damage modes determined based on the ISO or ASTM standards [52] are given in Table 3. Scratch tests can be conducted in the following modes: progressive loading (PLST) [52,55,65], constant loading (CLST) [52,65] and multi-pass loading (MPLST) [66-68], which has been gaining popularity in recent years. The methodology for evaluating coating adhesion using the scratch test has been described in numerous standards, e.g. American and Japanese ones, and the scratch test is recommended by other researchers as a universal and versatile method of measuring the adhesion of thin films [52]. Currently, the PLST method is widely used both in R&D laboratories and in the surface modification industry [52]. As presented in this section, there is a vast variety of adhesion measurement methods applicable to hard anti-wear coatings. Nevertheless, the most popular adhesion measurement method is currently the scratch test, which has been described in detail in several international standards. When comparing the obtained results with those of other authors one should, however, exercise caution.
There are differences between authors in determining the critical loads. Though the standards state that one or two critical loads should be considered, some authors associate critical loads, for example, with the cohesive failure of the coating (first symptoms of cracking), adhesive failure (chipping of the coating) and its break-through [37]. Though this provides an in-depth analysis of the scratch channel and the phenomena that occur during scratch testing, particular attention has to be paid by other researchers in order not to confuse the critical loads stated in such studies with the loads recommended in the standards.

Conclusions

As presented in this paper, the evaluation of hard anti-wear coatings is a complex process which requires careful consideration and experiment planning. In many cases, assessment of the tribological properties of the coating requires adapting the existing configuration of the research equipment to the specific needs of the planned product. Due to that, prior to time- and cost-consuming wear tests, it is reasonable to use generic measurement approaches. Though the measurement techniques discussed in this paper cannot substitute an actual wear test, the list of potential coatings can be reduced substantially after successful application of a scratch test or nanoindentation measurements even at an initial stage of research. It is a matter of high importance to understand the causes behind the specific wear mechanisms of the coatings and therefore to predict their durability and reliability in actual applications. Selection of the appropriate measurement methods is necessary not only for establishing the coating properties, but also helps to save time and funding. It is also good practice to use evaluation methods that have been internationally recognized, e.g. by the international standards organizations. As stated in this paper, there are numerous standards developed for the specific needs of hard coating evaluation. Regarding coating thickness measurement, optical microscopy is widely used both for R&D purposes and in scientific work. While it is a method that is easy to implement, it does not take into consideration the porosity of the coating and, depending on the chosen type of microscope, the measurement can be affected by the resolution of the microscope or even the coating material itself. During nanoindentation measurements, one should always remember that it is recommended to measure the coating thickness prior to the hardness and Young's modulus tests, as the indentation depth should not exceed 10% of the coating thickness. It is also desirable to control the surface roughness, but if this is impossible due to specific material requirements, the number of measurements taken should be appropriately increased. What is more, there are difficulties in comparing the obtained results with findings obtained by other authors, as the method is sensitive to the applied load. Results from nanoindentation measurements are also highly sensitive to blunting of the diamond tip; therefore, after conducting numerous measurements, special attention should be paid to the condition of the indenter. Nanoindentation measurements are crucial from the point of view of wear control. As a result of nanoindentation measurements, the values of three ratios can be established: H/E, H³/E², as well as Ecoating/Esubstrate.
According to numerous findings, it is reasonable to pay attention to the hardness and Young's modulus of the developed coating, as well as to the mismatch between its Young's modulus and the elastic modulus of the substrate material. In general, the greater the H/E and H³/E² values, the better, while the Ecoating/Esubstrate ratio has to be controlled with regard to the mechanical properties of the substrate and the coating. For the measurement of adhesion of an anti-wear coating, it is recommended to use, for example, the scratch test method. Despite possible problems with the reproducibility of the findings, it is a fast method to distinguish the coating with the highest adhesion in a group of obtained films. A great advantage of the scratch test method is also the ability to conduct live observations of the scratch channel along with acoustic emission measurements. Based on the recognized failure modes of the coating, the damage pattern in actual applications can be predicted.
10,169.6
2019-04-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Protective Role of Nrf2 in Zinc Oxide Nanoparticles-Induced Lung Inflammation in Female Mice and Sexual Dimorphism in Susceptibility Background Zinc oxide nanoparticles (ZnO-NPs) are used in various products such as rubber, paint, and cosmetics. Our group reported recently that Nrf2 protein provides protection against ZnO-NPs-induced pulmonary inflammation in male mice. The present study investigated the effect of Nrf2 deletion on the lung inflammatory response in female mice exposed to ZnO-NPs. Twenty-four female Nrf2 −/− mice and the same number of female Nrf2 +/+ mice were each divided into three groups and exposed to ZnO-NPs at 0, 10, or 30 µg/mouse by pharyngeal aspiration. Bronchoalveolar lavage fluid (BALF) and lungs were collected 14 days after exposure to quantify total protein and inflammatory cells and to assess lung inflammation histopathologically. The mRNA levels of Nrf2-dependent antioxidant enzymes and proinflammatory cytokines in lung tissue were measured. The results indicated lower susceptibility of females to lung inflammation, relative to males, irrespective of Nrf2 deletion, and that enhancement of ZnO-NPs-induced upregulation of HO-1 and TNFα and downregulation of GR by deletion of Nrf2 is specific to female mice. We conclude that Nrf2 provides protection in female mice against the increase in BALF eosinophils, probably through down-regulation of proinflammatory cytokines/chemokines and upregulation of oxidative stress-related genes. The study also suggests lower susceptibility to lung inflammation in female mice relative to their male counterparts and the synergistic effects of sex and exposure to ZnO-NPs on mRNA expression of GR, HO-1 or TNFα. The effects of ZnO-NPs have been studied both in vitro and in vivo through different exposure routes. While some studies have described the beneficial effects of ZnO-NPs, others have highlighted their toxic effects on different cells and organ systems. Exposure to ZnO-NPs can induce mild but sometimes severe cytotoxicity, inflammation, genotoxicity, mutagenicity, neurotoxicity, pulmonary toxicity, cardiac toxicity, hepatotoxicity, nephrotoxicity, intestinal toxicity, and reproductive toxicity [3,[7][8][9]. On the other hand, several other studies have focused on the mechanism(s) of ZnO-NPs-induced toxicities and different molecular mechanisms have been proposed. Among them, the generation of reactive oxygen species (ROS) and an oxidative stress state, either directly by ZnO-NPs themselves or secondarily through toxic Zn²⁺ ions generated from their dissolution, is considered the main mechanism of ZnO-NPs-induced toxicities. The proposed oxidative stress state initiates several deleterious cellular cascades and signaling pathways involved in the resultant toxicity, including the nuclear factor erythroid 2-related factor 2/antioxidant responsive element (Nrf2/ARE) pathway, which is one of the key endogenous antioxidant stress-protective pathways [3,5,8,[10][11][12][13]. The well-documented sex differences in anatomy and physiology can modify the responses to exogenous agents, and the susceptibility, pathophysiology, incidence, course, morbidity, and mortality of several diseases across the lifetime, and this is highly apparent in the epidemiology of lung diseases [14,15]. Sex was found to be a key susceptibility candidate for engineered nanomaterials (ENMs)-induced lung inflammation, and with the expected occupational exposure to the nanoparticles primarily through inhalation, sex-related differences in the pulmonary responses to engineered nanomaterials have been the focus of attention recently [16]. The nuclear factor erythroid 2-related factor 2 (Nrf2) is a transcription factor involved in the regulation of various cell processes, including the regulation of the adaptive response and resistance to oxidant stress [17]. Our group reported recently that Nrf2 plays an important role in protection against ZnO-NPs-induced pulmonary cytotoxicity through the prevention of neutrophil migration in male mice [18]. However, it is still unknown whether Nrf2 has the same beneficial effects in female mice. The aim of the present study was to determine the effects of Nrf2 deletion on the ZnO-NPs-related pulmonary inflammatory response in female mice. (Kruskal-Wallis nonparametric test followed by Steel multiple comparison test for neutrophil and eosinophil counts). Simple regression analysis for each genotype and multiple regression analysis in a model with interaction were conducted for the number of total cells, macrophages and lymphocytes, and total protein. Simple ordinal logistic regression analysis for each genotype and multiple ordinal logistic regression analysis in a model with interaction were conducted for neutrophil and eosinophil counts. Since the interaction was statistically insignificant for all of the examined parameters, multiple regression analysis or multiple ordinal logistic regression analysis was conducted in a model without interaction to estimate the separate effects of ZnO-NPs and Nrf2 deletion. 24±10, respectively, while those of male wild-type mice were 7.5±3.0 and 1.3±0.3, respectively [18]. Relative expression levels of HO-1 and GR in the 0 µg ZnO-NP group of female Nrf2 null mice were 18±8 and 19±9, respectively, while those of male Nrf2 null mice were 5.8±2.5 and 2.1±0.5, respectively [18]. A previous study showed that male mice were more susceptible than female mice to acute and chronic pulmonary inflammation following single and repeated exposure to nickel nanoparticles [28]. In contrast, Shvedova et al. reported more severe pulmonary toxicity in female mice exposed to cellulose nanocrystals compared to male mice [29]. Another study showed enhanced susceptibility of female mice to acute and chronic lung inflammation induced by multi-walled carbon nanotubes (MWCNTs) [30]. Further studies are needed to explain the above differences in the sex-related susceptibility to toxicity. 2.2 Animals. Nrf2 −/− female mice were generated as described by Itoh et al. [21] and backcrossed six times at the Central Institute for Experimental Animals (Kanagawa, Japan) and then further backcrossed seven times at the Division of Experimental Animals, Nagoya University Graduate School of Medicine (Nagoya, Japan). The genotypes of mice were confirmed by PCR amplification of genomic DNA isolated from the tail. PCR amplification was carried out using three different primers, 5′-TGGACGGGACTATTGAAGGCTG-3′ (Nrf2-sense for both genotypes), 3′-GCCGCCTTTTCAGTAGATGGAGG-5′ (Nrf2-antisense for wild-type), and 5′-GCGGATTGACCGTAATGGGATAGG-3′ (Nrf2-antisense for LacZ). Another 24 pathogen-free age-matched C57BL/6JJcl female mice (Nrf2 +/+ ) weighing 22-27 g were purchased from CLEA Japan Inc. (Tokyo). All mice were housed and acclimatized in a clean environment for 1 week before the start of exposure experiments. Food and water were provided ad libitum. The animal room was light- and temperature-controlled with a 12-h light-dark cycle (lights on at 9 am and off at 9 pm), room temperature of 23-25°C and relative humidity of 57-60%.
One day before the start of the experiment, mice of the two genotype groups were weighed and divided at random into three exposure groups (n=8 each): the control (0 µg ZnO-NPs), low-dose (10 µg ZnO-NPs) and high-dose (30 µg ZnO-NPs) groups. The latter two selected exposure doses are equivalent to 0.5 or 1.5 mg/kg body weight. The lower concentration of 0.5 mg/kg is comparable to deposition of 0.48 mg/kg in the adult human lung from inhalation of ZnO for one week at the threshold limit value of 2 mg/m³ (time-weighted average), as proposed by the American Conference of Governmental Industrial Hygienists (ACGIH), based on values of 500 mL air/breath, 12 breaths/min, and 40 h/week [22]. The guidelines of the Japanese Government Law concerning the protection and control of animals and the guide for animal experimentation of Nagoya University School of Medicine were followed throughout the experiments. The experiment protocol was approved by the Nagoya University Animal Experiment Committee. 2.3 Pharyngeal aspiration of ZnO-NPs. Pharyngeal or oropharyngeal aspiration has proven to be an effective and convenient alternative to inhalation exposure for the hazard assessment of nanomaterials [23]. For this purpose, the mouse was first anesthetized by intraperitoneal injection of pentobarbital, then suspended with a rubber band anchored around the upper incisors and placed on its back on an inclined board. ZnO-NP suspensions were vortexed for 10 seconds first, then the tongue was gently extended outside the oral cavity using blunt forceps, and a 40 µl aliquot of the selected concentration was pipetted onto the back of the tongue, which was held extended for 1 minute after pipetting and then released. With the tongue protruded, the mouse was unable to swallow, and the liquid trickled down slowly into the lungs. Following release of the tongue, the mouse was gently lifted off the board, placed on its left side, and monitored for recovery from anesthesia. 2.4 Bronchoalveolar lavage (BAL), total and differential cell counts. Fourteen days after exposure, the mice were euthanized by intraperitoneal injection of a lethal dose of pentobarbital. The trachea and lungs of each mouse were exposed and bronchoalveolar lavage was conducted. For this purpose, an 18-gauge needle was inserted into the trachea and both lungs were lavaged with 1 ml of 10% PBS (gentle instillation and aspiration). The instillation and aspiration of PBS was repeated 5 times, making a total volume of 5 ml. The amount of recovered bronchoalveolar lavage fluid (BALF) was measured and recorded. The average volume of the retrieved fluid was >90% of that instilled; the amounts and recovery rates were not different among the three exposure groups. The collected BALF was kept on ice until centrifuged at 1500 rpm for 5 minutes, and the supernatant was aliquoted into three tubes and kept at -80°C until further analysis. The cell pellets were re-suspended in 1 ml of ACK lysis buffer (for red blood cell lysis) and left for 5 minutes at room temperature. Then 10 ml of 10% PBS were added and the whole volume was re-centrifuged at 1500 rpm for 5 minutes. The supernatant was discarded, and the cell pellet was re-suspended in 1 ml of 10% PBS and kept on ice for determination of the total and differential cell counts. The total cell count was determined using a ChemoMetec Nucleocounter (Allerød, Denmark), while the differential cell count was performed under an optical microscope on slides prepared by cytospin and stained with May-Grunwald-Giemsa (Merck, Darmstadt, Germany).
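The human-equivalence figure quoted in the dosing rationale above can be reproduced with a few lines of arithmetic. In the sketch below, the reference body weight (60 kg) and the assumption of complete deposition of the inhaled mass are illustrative assumptions made here; they are not stated in the text.

```python
# Back-of-the-envelope check of one working week of inhalation at the ACGIH
# TLV-TWA of 2 mg/m^3, assuming 500 mL per breath, 12 breaths per minute and
# a 40-hour week. Body weight and full deposition are assumptions.
tlv_mg_per_m3 = 2.0
tidal_volume_m3 = 500e-6          # 500 mL per breath
breaths_per_min = 12
minutes_per_week = 40 * 60        # 40 h/week
body_weight_kg = 60.0             # assumed reference body weight

inhaled_volume_m3 = tidal_volume_m3 * breaths_per_min * minutes_per_week
inhaled_mass_mg = tlv_mg_per_m3 * inhaled_volume_m3
dose_mg_per_kg = inhaled_mass_mg / body_weight_kg
print(f"{inhaled_volume_m3:.1f} m^3 inhaled, {inhaled_mass_mg:.1f} mg ZnO, "
      f"{dose_mg_per_kg:.2f} mg/kg")   # ~14.4 m^3, ~28.8 mg, ~0.48 mg/kg
```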
The BALF cell types included macrophages, neutrophils, lymphocytes and eosinophils. The relative differential counts were presented as percentages of total cells counted in 10 fields of each cytospin smear. The absolute differential count was calculated as the product of the total cell count and the proportion of the relative differential count. 2.5 Measurement of total protein in BALF. Total protein in BALF was measured using a Bio-Rad protein assay kit according to the instructions provided by the manufacturer (Bio-Rad Laboratories, Hercules, CA) with bovine serum albumin (BSA) as a standard. 2.6 Histopathological examination of the lung. After completion of BAL, the lungs were removed and washed in saline, and the right lung was immediately frozen for further analysis. The left lung was fixed in 4% formalin, dehydrated with graded alcohol concentrations, embedded in paraffin, cut into 3 µm-thick sections, placed on slides, stained with hematoxylin and eosin (H&E) and examined under an optical microscope by a pathologist blinded to the exposure. These lung sections were used to determine the degree of lung inflammation. The degree of peribronchial and perivascular inflammation was evaluated on a subjective scale of 0-3, as described previously [24][25][26][27]. A score of 0 represented no detectable inflammation, while a score of 1 represented occasional cuffing with inflammatory cells. For score 2, most bronchi or vessels were surrounded by a thin layer (1-5 cells thick) of inflammatory cells. For score 3, most bronchi or vessels were surrounded by a thick layer (>5 cells thick) of inflammatory cells. Total lung inflammation was defined as the average of the peribronchial and perivascular inflammation scores. Four lung sections per mouse were scored and the inflammation score was expressed as the average value. Tissue slides were examined under an optical microscope (model DM750, Leica Microsystems, Wetzlar, Germany) and images were captured with Leica Application Suite V3 software. 2.7 Quantification of total glutathione and oxidized glutathione. The frozen lung tissue samples were homogenized with 5 volumes (w/v) of cold 50 mM MES buffer (pH 6.0) containing 1 mM EDTA. The protein in each sample was denatured with an equal volume of 0.1% metaphosphoric acid (Sigma-Aldrich) and mixed on a vortex mixer. The mixture was allowed to stand at room temperature for 5 min and then centrifuged at 2000 × g for 3 min. The supernatant (95 µl) was kept at -20ºC until used for determination of total glutathione and oxidized glutathione (GSSG). First, 90 µl of supernatant was treated with 4.5 µl of 4 M triethanolamine (TEAM; Sigma-Aldrich) solution and vortexed well before assay. For the analysis of the total reduced form of glutathione (GSH), 30 µl of TEAM-treated sample was diluted 20-fold with MES buffer (pH 6.0) containing 2 mM EDTA. An aliquot (50 µl) of the diluted solution was treated with 150 µl of freshly prepared assay cocktail and assayed at 405 nm with a microplate reader (Gen5™ & Gen5 Secure, BioTek® Instruments, Inc.). For GSSG determination, 30 µl of TEAM-treated sample was diluted 10 times with MES buffer before derivatization with 2-vinylpyridine. Two µl of 1 M 2-vinylpyridine was added to 200 µl of diluted solution of every sample or GSSG standard in a tube, and then the tubes were mixed on a vortex mixer and incubated for 1 h at room temperature.
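The two small calculations described above (absolute differential counts and the lung inflammation score) amount to simple arithmetic; the following Python sketch illustrates them with made-up numbers. The per-section averaging order is an assumption here, since the text only states that the total score is the average of the peribronchial and perivascular scores over four sections.

```python
# Illustrative calculation with placeholder values, not study data.
total_cells = 2.5e5                      # total BALF cell count (illustrative)
relative_counts = {                      # fractions from the cytospin smear
    "macrophages": 0.82,
    "neutrophils": 0.10,
    "lymphocytes": 0.05,
    "eosinophils": 0.03,
}
# Absolute differential count = total cell count x relative fraction.
absolute_counts = {cell: total_cells * frac for cell, frac in relative_counts.items()}

peribronchial_scores = [2, 1, 2, 2]      # one score per lung section (0-3 scale)
perivascular_scores = [1, 1, 2, 1]
# Per-section total = mean of peribronchial and perivascular scores,
# then averaged over the four sections (assumed ordering of the two averages).
section_means = [(pb + pv) / 2 for pb, pv in zip(peribronchial_scores, perivascular_scores)]
total_inflammation_score = sum(section_means) / len(section_means)
print(absolute_counts, total_inflammation_score)
```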
Total GSH and GSSG concentrations were calculated from a standard curve using GSSG (Cayman; 703014) prepared according to the GSH assay kit (Cayman Chemical Company, Ann Arbor, MI; #703002), and normalized versus protein concentration. Total GSH and GSSG were expressed in micromoles of GSH (or GSSG) per milligram of protein. 2.8 Malondialdehyde assay. The malondialdehyde (MDA) assay (Life Science Specialties, LLC; NWK-MDA01) was performed according to the protocol supplied by the manufacturer. A 10% wt/vol homogenate was prepared from lung tissue in cold Assay Buffer (phosphate buffer, pH 7.0 with EDTA). Absorbance was read at 532 nm using a PowerScan4 microplate reader (DS Pharma Medical Co., Japan) after reaction of the sample with thiobarbituric acid (TBA). Samples were analyzed in duplicate, and the MDA level was expressed in micromoles of MDA per milligram of protein. 2.10 Statistical analysis. Data were expressed as mean ± standard deviation. Differences between the control and exposure groups were examined using Dunnett's multiple comparison method following one-way ANOVA, or the Steel multiple comparison method following the Kruskal-Wallis nonparametric test, in each genotype. To test a trend with the level of exposure to ZnO-NPs, simple regression analysis or simple ordinal logistic regression analysis on the exposure level of ZnO nanoparticles was applied in each genotype separately. Multiple regression analysis or multiple ordinal logistic regression analysis using dummy variables for genotype in a full model was applied to examine the effect of the interaction between genotype and exposure level. When the interaction between genotype and exposure level was not significant, multiple regression analysis on exposure level and genotype in a non-interaction model was applied to test the effects of exposure level and genotype. Statistical analysis was performed using the JMP software version 16 (SAS Institute, Cary, NC) and a probability (p) value <0.05 was considered statistically significant. Changes in body and lung weight There was no significant difference in body weight and lung weight between the ZnO-NPs exposure groups and the control, both for female wild-type mice and female Nrf2-null mice. The percentage of lung weight to body weight (relative lung weight) was significantly different between the ZnO-NPs exposure groups and the control only in wild-type mice (p=0.043, ANOVA), but not in Nrf2-null mice. However, the post-ANOVA Dunnett's multiple comparison test did not show a significant difference between the exposure groups and the control. Simple regression analysis showed a significant trend with ZnO-NPs exposure level in wild-type mice (Table 1). Multiple regression analysis showed no significant interaction between ZnO-NPs exposure level and Nrf2 deletion in body weight, lung weight and relative lung weight. However, multiple regression analysis without interaction showed a significant negative effect of Nrf2 deletion on body and lung weights, as well as a positive effect of ZnO-NPs exposure on absolute lung weight and relative lung weight. Changes in BALF cell count and total protein Aspiration of ZnO-NPs induced significant changes in the absolute numbers of total and individual inflammatory cells in both the wild-type and Nrf2-null mice (ANOVA), with the exception of eosinophils in Nrf2-null mice, with significant changes at 10 and 30 µg of ZnO-NPs exposure compared to the non-exposed control group.
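For readers who want to reproduce this kind of genotype-by-dose analysis outside JMP, the following is a minimal Python sketch using statsmodels. The data frame is a made-up illustration with a single generic outcome column, not the study data, and the workflow (fit the full model with the interaction term, then refit without it when the interaction is not significant) simply mirrors the strategy described in the statistical analysis section above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up example data: 8 mice per dose per genotype, one generic outcome.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dose": np.tile([0, 10, 30], 16),              # ZnO-NP exposure level (ug/mouse)
    "genotype": np.repeat(["WT", "Nrf2KO"], 24),   # Nrf2 status
})
df["outcome"] = 5 + 0.2 * df["dose"] + rng.normal(0, 1, len(df))

# Full model including the dose x genotype interaction term.
full = smf.ols("outcome ~ dose * C(genotype)", data=df).fit()
interaction_term = [name for name in full.pvalues.index if ":" in name][0]

# When the interaction is not significant, refit without it to estimate the
# separate effects of exposure level and genotype, as described in the text.
if full.pvalues[interaction_term] >= 0.05:
    reduced = smf.ols("outcome ~ dose + C(genotype)", data=df).fit()
    print(reduced.summary())
```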
Exposure to ZnO-NPs dose-dependently increased total cells, macrophages, lymphocytes, neutrophils and eosinophils in BALF in both the wild-type and Nrf2-null mice (simple regression analysis and simple ordinal logistic regression analysis, Table 2). Multiple regression analysis or multiple ordinal regression analysis did not show a significant interaction between ZnO-NPs exposure level and Nrf2 deletion, and the analysis without interaction showed a significant harmful effect of ZnO-NPs exposure level on total cells and all types of cells in BALF, and a significant harmful effect of Nrf2 deletion only on eosinophils in BALF. Further simple regression analysis showed that exposure to ZnO-NPs dose-dependently decreased total protein only in wild-type mice, and multiple regression analysis without interaction showed a significant harmful effect of ZnO-NPs exposure level on total protein. 3.3 Changes in total inflammation score, peribronchial inflammation score and perivascular inflammation score Exposure to ZnO-NPs significantly increased all of the examined scores at 30 µg in both the wild-type mice and Nrf2-null mice (Kruskal-Wallis nonparametric test followed by Steel multiple comparison test), and simple ordinal logistic regression analysis confirmed the significant trend with ZnO-NPs exposure level in both the wild-type and Nrf2-null mice (Table 3). Multiple ordinal logistic regression analysis with the full model did not show significant interactions between ZnO-NPs exposure level and Nrf2 deletion for any of the examined scores. Multiple ordinal logistic regression analysis without interaction showed significant effects of ZnO-NPs exposure level but no significant effect of Nrf2 deletion for all of the examined scores. Changes in glutathione and malondialdehyde expression levels in lung tissue The GSSG/GSH ratio was significantly affected by the ZnO-NPs exposure level in wild-type mice (p=0.049, ANOVA), but the differences between the two exposed groups and the control were statistically insignificant (Dunnett's multiple comparison test). Simple regression analysis showed that ZnO-NPs exposure level had a significant effect on glutathione disulfide in wild-type mice (Table 4). No significant interactions between ZnO-NPs exposure level and Nrf2 deletion were noted in total glutathione, glutathione disulfide, GSSG/GSH ratio and MDA (multiple regression analysis). On the other hand, Nrf2 deletion significantly modulated the levels of total glutathione, whereas ZnO-NPs exposure significantly altered the glutathione disulfide expression level. Table 4 Total glutathione (GSH), oxidized glutathione (GSSG), GSSG/total GSH ratio and malondialdehyde (MDA) in the lung of female mice at 14 days after exposure to zinc oxide nanoparticles by pharyngeal aspiration. 3.5 Changes in expression levels of lung oxidative stress-related genes (Table 5) There were no significant ZnO-NPs dose-related differences in the gene expression levels in Nrf2-null mice with the exception of GR (p<0.0001, ANOVA) and HO-1 (p=0.0036, ANOVA). Further analysis with Dunnett's multiple comparison test showed that exposure to the two doses of ZnO-NPs was associated with significant decreases in GR and significant increases in HO-1 in Nrf2-null mice. Furthermore, simple regression analyses confirmed the significant trends of GR and HO-1 with exposure level (Table 5). Multiple regression analysis showed a significant interaction between ZnO-NPs exposure level and Nrf2 deletion for GR and HO-1, but not for other genes.
Nrf2 deletion was associated with significant overexpression of GcLm and MT-2 and under-expression of CAT, GcLc and NQO1. 3.6 Changes in expression levels of lung proinflammatory cytokines and fibrosis-related proteins At the 30 µg exposure level, ZnO-NPs significantly increased the expression levels of KC, MIP-2, IL-6, IL-1β, MCP-1 and MMP2 (ANOVA followed by Dunnett's test), and these changes were confirmed by simple regression analysis in wild-type mice (Table 6). Multiple regression analysis showed no significant interaction of ZnO-NPs exposure level with Nrf2 deletion for all the tested cytokines and MMP2. Non-interaction multiple regression analysis showed that ZnO-NPs exposure significantly altered the expression levels of MMP2, whereas Nrf2 deletion significantly affected the levels of KC, MIP-2, IL-6, IL-1β, MCP-1 and TNFα. The present study showed that Nrf2 deletion additively enhanced the effects of exposure to ZnO-NPs on the number of eosinophils in BALF of female mice. This is accompanied by enhancement of ZnO-NPs-induced upregulation of HO-1 and TNFα and ZnO-NPs-induced down-regulation of GR by Nrf2 deletion, in addition to reduction of total glutathione, downregulation of CAT, GcLc and NQO1 mRNA levels and upregulation of KC, MIP-2, IL-6, IL-1β and MCP-1 mRNA levels by Nrf2 deletion. The results indicated that in female mice, Nrf2 inhibits infiltration of eosinophils in the lung, at least in part through upregulation of anti-oxidative stress genes and downregulation of proinflammatory cytokines or chemokines. Nrf2 deletion enhanced ZnO-NPs-induced upregulation of HO-1 and TNFα and downregulation of GR in female mice, but not in male mice [18]. It is known that HO-1 and GR expression shows sexual dimorphisms and is regulated by estradiol [31][32][33][34][35]. Trauma and hemorrhage are reported to induce a 2-fold increase in hepatic HO-1 expression in proestrus female rats compared to male rats [34]. Treatment with 17β-estradiol upregulates the expression of HO-1 in the liver of mice [33]. The findings of another in vivo study suggested that increased HO activity and expression in female rats compared to male rats can explain the sexual dimorphism of cardiovascular ischemia during reproductive age [31]. Finally, higher activity of GR from 4 to 24 weeks of age was described in the liver of male rats compared to female rats [32]. The mechanism for the sexual dimorphism in expression of HO-1 or GR remains elusive. In vitro studies have shown that 17β-estradiol can upregulate Nrf2 in nuclear extracts and increase the expression of HO-1, SOD, GST and GCL in a hypoxia/reoxygenation model of primary myocardial cells [35], and 17β-estradiol increases Nrf2 activity through activation of the PI3K pathway in MCF-1 breast cancer cells [36]. However, these studies on Nrf2 activation by estradiol cannot explain our results that ZnO-NPs-induced upregulation of HO-1 is enhanced in Nrf2 null mice. Although HO-1 plays an adverse role in carcinogenesis and neurodegenerative disease, it is known to play a protective role against oxidative injury and other stress conditions [37].
We believe that ZnO-NPs-induced upregulation of HO-1 in female mice only is involved in the observed lower susceptibility of female mice to peribronchial inflammation compared to male mice, though further studies are needed to test this hypothesis. Nrf2 deletion significantly increased eosinophils in BALF, while it did not significantly increase the lung inflammation score in female mice. This is in agreement with the previous study on male Nrf2 null mice and wild type mice, suggesting a limitation of semiquantification in inflammation scoring [18]. In conclusion, our study demonstrated the protective role of Nrf2 against ZnO-NPs-induced infiltration of eosinophils in the lung of female mice, which might be explained by negative regulation of proinflammatory cytokines and chemokines and positive regulation of oxidative stress-related genes by Nrf2. The results also suggested lower susceptibility to lung inflammation in female mice compared with male mice and the synergistic effect of sex and ZnO-NPs exposure on GR, HO-1 or TNFα mRNA expression, although further studies are needed to define the relationship between sex-related susceptibility and gene expression.
5,370.4
2022-01-01T00:00:00.000
[ "Biology" ]
Evaluating the Stability of Numerical Schemes for Fluid Solvers in Game Technology A variety of numerical techniques have been explored to solve the shallow water equations in real-time water simulations for computer graphics applications. However, determining the stability of a numerical algorithm is a complex and involved task when a coupled set of nonlinear partial differential equations needs to be solved. This paper proposes a novel and simple technique to compare the relative empirical stability of finite difference (or any grid-based scheme) algorithms by solving the inviscid Burgers' equation to analyse their respective breaking times. To exemplify the method to evaluate numerical stability, a range of finite difference schemes is considered. The technique is effective at evaluating the relative stability of the considered schemes and demonstrates that the conservative schemes have superior stability. Introduction Fluid dynamics is widely used in computer game technology to simulate a range of phenomena including water, smoke, and soft bodies. A variety of numerical techniques have been explored for simulating fluids in real time, such as position-based dynamics (e.g., [1,2]), the finite-volume method (e.g., [3][4][5][6]), and the finite-element method [7]. There are two broad modelling approaches to describing fluids: grid-based Eulerian or particle-based Lagrangian methods [8,9]. The Lagrangian approach follows an individual fluid parcel as it moves through space and time, whereas the Eulerian approach describes the fluid motion from specific fixed points in space as time passes. Both approaches have enjoyed widespread use in game technology. Particle-based approaches are advantageous when considering arbitrary boundaries, fluid mixing [10], and interactions with rigid bodies [11]. Grid-based approaches are widely used in computer graphics applications and can attain higher numerical accuracy since spatial derivatives are easier to accommodate on a fixed grid. However, in contrast to particle-based methods, grid-based approaches have difficulty ensuring the conservation of mass and can be computationally slower. Furthermore, in real-time water simulations in games, grid-based methods perform much better at tracking smooth water surfaces [9]. Smooth Particle Hydrodynamics (SPH) is a popular particle-based method for simulating fluids in game technology, computer animation, virtual reality, and the movie industry. For example, incompressible SPH is a promising numerical scheme for large-scale and large-deformation simulations used in interactive fluid flow simulations [12]. Two of the challenges faced by SPH are unstable solid boundary handling and numerical dissipation, both of which inhibit stable and realistic fluid evolution. Using a position-based dynamics framework integrated into the SPH solver [2,13], these issues can be overcome [14]. SPH is also useful for simulating viscoelastic materials such as gels, gelatin, and mucus by applying a velocity correction to limit the fluid deformation, making it attractive to soft-body simulation as well. This produces visually accurate results but is offset by computational performance costs; however, simulations can be accelerated using GPUs [15]. Recently, neural networks have been used to accelerate particle-based fluid simulations that run significantly faster than a GPU Position-Based Fluid Solver whilst preserving visual quality [16].
The Lattice-Boltzmann method (LBM) is a grid-based method that indirectly solves the fluid dynamical equations by solving the Boltzmann equation that describes the underlying particle distribution function [17][18][19]. The Boltzmann equation is easier to solve numerically than the classical fluid equations, and LBM can be run efficiently on massively parallel architectures. LBM is useful for solving complex, coupled, and multiphase flows and can accommodate complex boundaries with relative ease [17]. However, tracking and preserving small-scale features, such as a fluid drop or splash, is a challenging problem. Recent work has looked at a novel grid-particle method for reconstructing distribution functions of interface grids and coupling the reconstruction method with the LBM and Volume of Fluid method to track the free surface [20]. The method enhances the accuracy of the reconstruction and helps preserve the fluid surface detail. The shallow water equations (SWEs) are a simplified set of fluid flow equations that are widely used in computer graphics applications. The SWEs are well suited to game technology applications due to a number of practical reasons: in contrast to the full 3D Navier-Stokes equations, the relative simplicity of the SWEs leads to a significant performance advantage; the solutions produce a full velocity field, which is useful for providing plausible interactivity and solid-fluid coupling; and the rendering of the simulation results can be undertaken with common, fast rendering techniques [8,21]. In contrast, full 3D simulations require surface tracking, volumetric rendering techniques, or mesh generation algorithms for visualisation. Although the SWEs can describe complex nonlinear fluid phenomena such as vortices, they are unable to handle breaking waves or splashing phenomena [8,21]. A variety of numerical methods exist for solving the SWEs. Finite difference (FD) techniques are particularly popular in real-time applications due to the relative simplicity of their implementation and their ability to capture complex phenomena such as shocks. In particular, a range of finite difference techniques has been applied to the conservative form of the shallow water equations, including the Lax-Friedrichs (LF) scheme, the Richtmyer two-step Lax-Wendroff (LW) method, and the McCormack (MC) method [22,23]. Variations of these popular algorithms have also been investigated, such as the corrected LF (CF); composite schemes, e.g., LWLF, MCLF, and CFLC [24,25]; filtered LW, MC, and CF schemes; and the Picard Integral Formulation of the Weighted Essentially Non-Oscillatory (WENO) scheme [26]. Many of the numerical schemes, such as the LW, MC, and CF methods, suffer from oscillatory phenomena (high-frequency jitter) as a result of instability. As a consequence, attempts are made to prevent instabilities from occurring. The filtered schemes tackle the stability issues by using an oscillation smoothing method, as done, for example, by Liang et al. [27] and Hsieh et al. [28] to stabilize their tsunami simulations using the MC scheme. Similarly, Ransom and Younis [29] used a total-variation diminishing limiter with the MC method to help avoid oscillatory behaviour, hence stabilizing the scheme. Recent studies have proposed a generalized finite difference-split coefficient matrix method along with the flux limiter technique to eliminate potential numerical oscillations that could lead to instability [30]. Li et al.
presented a fifth-order weighted essentially nonoscillatory scheme for simulating dam break flows in a finite difference framework [31] that produces smaller truncation errors and provides the same accuracy order and stability as contemporary WENO schemes. To accommodate more complex systems, such as flat-bottom geometry, a new discretization of the source term of the SWEs is suggested by Prieto-Arranz et al. [32]. In the approach, a Smooth Particle Arbitrary Lagrangian-Eulerian formulation based on Riemann solvers is used to solve the SWEs, where stability is achieved using an a posteriori MOOD paradigm. Benchmarking demonstrates that the MOOD limiting procedure is able to prevent artificial oscillations occurring in the neighbourhood of discontinuities and shocks. Wu et al. presented a high-order entropy stable discontinuous Galerkin method for solving the SWEs on curved triangular meshes that preserves a semidiscrete entropy inequality and remains well-balanced for continuous bathymetry profiles [33]. Such an approach is advantageous as it can accommodate complex geometries through unstructured meshes; it is simple to parallelize and is able to take advantage of acceleration techniques using GPUs. Determining the stability of a numerical algorithm is a complex task. For linear cases, the von Neumann stability analysis [22] can be applied analytically but is intractable in nontrivial scenarios including complex nonlinear problems, where a set of coupled partial differential equations has to be solved. In nontrivial scenarios, elaborate numerical tests can be used (such as circular dam breaks and flows around a bump) to give an in-depth characterisation of a numerical algorithm's performance (e.g., see Parna et al. [26]), but this approach can be time-consuming, complex to set up, and difficult to analyse. This paper proposes a novel and simple technique to compare the relative empirical stability of finite difference algorithms (or any grid-based scheme) by solving the inviscid Burgers' equation to analyse their respective breaking times. A similar technique has been used to determine suitable plasma fluid solvers in astrophysics [34]. The proposed technique provides a quick and easy way of determining the stability of a proposed algorithm prior to more thorough stability tests tailored to the specific system to be solved and its application. The proposed method is in keeping with the method of A- and L-stability, where numerical methods for stiff problems involving ordinary differential equations are analysed by applying them to the test equation y′ = ky with y(0) = 1, k ∈ ℂ. In the fluid context, the inviscid Burgers' equation emulates the form of partial differential equations (PDEs) in which there is an advective term, such as in Euler's equations of fluid dynamics and the SWEs. The inviscid Burgers' equation (or the nonlinear wave equation) is given by ∂_t u + u ∂_x u = 0. It describes a nonlinear wave propagating in the positive x-direction with a speed proportional to its amplitude, u. Therefore, larger wave amplitudes propagate faster than smaller components: this causes the wave to break due to nonlinear steepening, like the phenomenon of breaking ocean waves, yielding an advective instability. Obtaining the characteristics for this first-order PDE reveals a solution that may become triple-valued after a period of time; this is wave breaking. The onset of wave breaking is characterised by the solution having a vertical tangent at the leading wave edge.
The timescale for this to first occur is the breaking time. In the numerical context, multivalued solutions are not possible and the numerical schemes become unstable. The ensuing nonlinear instability manifests itself as a series of oscillating spikes that form in the wake of the leading wave edge. The relative empirical stability of two numerical methods can be assessed by analysing their respective breaking times. Due to the nature of the instability, where a discontinuity forms due to the solution becoming triple-valued, the method also gives an indication of how well numerical schemes deal with discontinuities such as shocks and boundary interactions. To exemplify the method to evaluate numerical stability, a range of FD schemes will be considered. This paper is structured as follows. The Methodology introduces and derives the FD algorithms that will be used to exemplify the stability analysis technique. It outlines the initial and boundary conditions for the stability evaluation and calculates the corresponding analytical solution, including the breaking time and location, for the system. The Methodology concludes by defining a quantitative metric to evaluate the numerical stability of the numerical schemes. Following this, the Results and Discussion evaluates the numerical stability of the FD algorithms using the quantitative metric. In this paper, a comparison is made of the relative stability of FD numerical methods, but other grid-based numerical schemes could be similarly analysed. Methodology The finite difference methods are popular numerical schemes for solving partial differential equations (PDEs). The utility and efficacy of these numerical methods depend upon a number of criteria including stability, accuracy, and ease of implementation. The LW method transforms a continuous problem into a discrete one by replacing spatial and temporal derivatives with second-order accurate finite difference approximations, thus reducing the problem to an iterative algebraic exercise [22]. The LW method is reliable, stable, and accurate. However, depending on the complexity of the individual PDE, or if the system is governed by a coupled set of PDEs, implementing LW can be very complicated. Finite Difference Time Domain (FDTD) is an alternative numerical method [35][36][37]. It entails recasting the PDE as a series of ordinary differential equations (ODEs) with respect to time by discretizing the spatial domain and replacing the spatial derivatives with finite difference approximations. The ODEs are then solved numerically via an ODE solver such as the Runge-Kutta or leap-frog methods. Its main advantage lies in its ease of implementation. To obtain a finite difference solution of (1), the domain described by the PDE is defined in terms of a rectilinear grid, its edges parallel to the x and t axes. We define discrete coordinates so that an arbitrary grid point is given by (x, t) = (mh, nk) such that m ∈ [0, m_max] and n ∈ [0, n_max], where m, n are integers and h, k are the grid spacings in the x and t coordinates, respectively. Writing u(nk, mh) = u^n_m, the finite difference formulae are obtained from the Taylor expansion of u about t in the neighbourhood of k, while holding x fixed [22], giving u^{n+1}_m ≈ u^n_m + k ∂_t u^n_m + (k²/2) ∂²_t u^n_m. This is second-order accurate in time. The LW method contains an inherent diffusion term, ∂²_t, that helps dampen the instability. For the case of the linear wave equation, ∂_t u^n_m is replaced by −a ∂_x u^n_m and ∂²_t u^n_m by a² ∂²_x u^n_m, where a is a constant wave speed.
Therefore, using the finite difference approximations for the spatial derivatives, this yields the Lax-Wendroff algorithm for the linear wave equation, where p = k/h is the mesh ratio. This algorithm is second-order accurate in space and time. One of the advantages of the finite difference method is that it is very easily extended to the solution of nonlinear equations since, in all but special cases, many of the methods and proofs are invariant [22]. For the nonlinear equation (1), the finite difference formula (3) is used where ∂_t u^n_m is replaced by −u^n_m ∂_x u^n_m, using (1), and ∂²_t u^n_m is replaced accordingly. Substituting into the finite difference expression (3) yields the Lax-Wendroff algorithm for the nonlinear wave equation, where the finite difference approximations (5) and (6) have been used. Numerical algorithms for solving partial differential equations are only useful if they are convergent and stable. A finite difference algorithm is deemed convergent if the difference between the theoretical solutions of the differential and difference equations at a fixed coordinate (x, t) tends to zero when the number of grid points in a fixed-size numerical domain is increased: h, k ⟶ 0 and m, n ⟶ ∞. An algorithm is considered stable when the difference between the theoretical and numerical solutions of the difference equation remains bounded as n tends to infinity. The von Neumann stability analysis for the linear wave equation algorithm states that the scheme is stable provided 0 < p|a| ≤ 1, which appropriately coincides with the Courant-Friedrichs-Lewy (CFL) condition for the convergence of the algorithm. This result is used as a guide for the stability and convergence of the nonlinear expression; ergo, 0 < p|u^n_m| ≤ 1 [22]. An alternative numerical algorithm is possible if the second right-hand term of (10) is differenced directly: substituting (10) into (3) gives an alternative algorithm; note that this version uses next-nearest neighbours. Additional finite difference algorithms can be derived by considering the conservative form of the nonlinear wave equation. Following the previous derivation and using (3), the ∂²_t u^n_m term is replaced accordingly. Substituting (19) into (3) gives one conservative algorithm, while using (18) in (3) instead of (19) yields another. Differencing the derivative in the final term directly produces an alternative conservative algorithm. For the FDTD approach, replacing the spatial derivative in (1) with a finite difference approximation gives a first-order ODE with respect to time, where h is the distance between successive spatial grid points. Solving using a 4th-order Runge-Kutta method gives the first FDTD algorithm, where Δt is the time step, N is the number of time steps, and T is the integration time such that T = NΔt. By considering the conservative form of the nonlinear wave equation, another FDTD algorithm can be derived: substituting the conservative form into the central difference formula yields the corresponding Runge-Kutta algorithm. Both the FDTD algorithms described here are second-order accurate in space and fourth-order accurate in time. For stability, the algorithms require that vΔt ≤ h, where v is the maximum expected phase velocity. This ensures that the solution cannot vary significantly over one spatial increment during one temporal step [36]. Initial and Boundary Conditions.
To compare the FD algorithms, they were used to solve the nonlinear wave equation, the inviscid Burgers' equation. For the initial conditions, the wave amplitude was perturbed from a uniform equilibrium in the shape of a Gaussian waveform for all m, where A_0 is the amplitude of the perturbation, ς defines its width, and m_0 is the centre of the computational domain (see the n = 0 plot of Figure 1). Each algorithm was computed for a prescribed length of time T = n_max k = NΔt for the specified initial condition. So that the LW and FDTD methods could be compared, the values of p = k/h and Δt had to be chosen carefully. Setting N = n_max requires that Δt = k, implying p = k/h = Δt/h. For the simulations, the boundary conditions were u^n_0 = u^n_{m_max} = 0 for all n. For the simulations discussed here, the parameters listed in Table 1 were used, without loss of generality. Analytical Considerations of the Inviscid Burgers' Equation. To evaluate the relative empirical stability of the FD algorithms, it is instructive to evaluate the breaking time and position analytically for the prescribed initial and boundary conditions. The inviscid Burgers' equation is given by (1). Performing the change of variables x = mh and t = nk yields the equation in the grid coordinates, where p = k/h. For the initial conditions, a Gaussian wave packet is prescribed. A solution to this initial value problem can be sought using the method of characteristics. The characteristics are straight lines labelled by σ, where σ is a constant on a given characteristic. The implicit solution for u follows, and one can rewrite the solution as an explicit function for m. A plot of u versus m for various n evaluated using (41) is shown in Figure 2. The solution shows the wave amplitude as a function of position for various values of n up to, including, and beyond the breaking time. The solution (Equation (40)) may become triple-valued after a period of time; this is wave breaking. The onset of wave breaking is characterised by the solution having a vertical tangent at the leading wave edge. The breaking time is the time at which this first occurs. Recall the characteristics, where F(σ) = √ς p A_0 e^(−σ²). Defining G(σ) = −1/F′(σ), one can find a particular value of σ = σ_b such that G′(σ_b) = 0. Using σ_b, the breaking time follows directly. Figure 1: Solution of (1) obtained from the FDTD algorithm (31). Plots display the spatial structure, m, for time steps n = 0 (initial conditions), 250, 500, 750, and 1000, exhibiting the evolution of the nonlinear wave and the advective instability. The nonlinear wave has a propagation speed proportional to its amplitude; therefore, the crest of the perturbed wave structure moves faster than its base. As a result, the crest of the wave tries to overtake the smaller amplitudes, requiring a multivalued solution, forbidden in this numerical framework, resulting in instability. The FDTD solution plotted here exhibits the qualitative behaviour of the numerical methods used in this paper. From σ_b, the breaking location also follows. For the particular problem described in this manuscript, parameterized by the values in Table 1, the breaking time and location are n_b = 659.5 and m_b = 70, respectively (illustrated by the solid curve in Figure 2). In comparison, for the numerical solutions, the wave will break earlier (n < n_b) and at a premature location (m < m_b), which is to be expected since the numerical grid resolution will be defeated long before the leading wave edge approaches an infinite gradient. Evaluating the Numerical Stability.
To compare and quantify the development of the instability, we define the fractional change, δ, of the numerical solution relative to the analytical solution, where u_0 = u(n = n_bs, m = m_bs) is the analytical solution (Equation (40)) evaluated at n = n_bs and m = m_bs, the breaking time and breaking location of algorithm s, respectively, and u^{n_bs}_{m_bs} is the numerical solution of algorithm s, evaluated at n = n_bs and m = m_bs. The earlier the instability evolves for a particular algorithm, the smaller the value of n_bs and the poorer the algorithm performs. The bigger the value of δ, the poorer the stability of the algorithm. Note that the definition of δ is consistent with the quantity Z^n_m, defined by Mitchell and Griffiths [22], to determine the stability of a finite difference numerical algorithm. To determine the breaking time, n_bs, and breaking location, m_bs, for algorithm s, for each time step n ∈ [0, n_max] the maximum value of the wave amplitude, (u^n_m)_max, is determined for m ∈ [0, m_max]. Using this, the absolute magnitude of the difference between the maximum amplitude and the amplitude of the initial perturbation, A_0, is calculated: |A_0 − (u^n_m)_max|. In the first instance that this quantity is greater than or equal to a threshold value, α_0, the corresponding values of m and n are the breaking location, m_bs, and breaking time, n_bs, of algorithm s, respectively. In this paper, the threshold value α_0 is set to be 1% of the amplitude of the initial perturbation, α_0 = 0.01 A_0 = 5 × 10⁻³. Results and Discussion In the discussion that follows, LW1, LW2, LW3, LW4, FDTD1, and FDTD2 denote numerical solutions of (1) obtained from the formulae (13), (16), (21), (24), (31), and (33), respectively. Figure 1 shows the evolution of the nonlinear wave equation solved using FDTD1. The plot encapsulates qualitatively the typical behaviour of the nonlinear wave equation as a function of position and time. The initial, perturbed waveform propagates in the positive x-direction with a speed proportional to its amplitude. Therefore, small-amplitude wave components propagate slower than their larger counterparts, leading to wave steepening. In general, as fast-moving wave elements overtake those moving slower, the wave solution becomes multivalued for a single value of x. Within the numerical context discussed here, multivalued solutions are not possible and the numerical schemes become unstable. The ensuing nonlinear instability manifests itself as a series of oscillating spikes that form in the wake of the leading wave edge. As the leading wave edge tries to overtake further wave elements, more spikes are formed in its wake. Figure 3 exhibits the initial development of the advective instability using FDTD1 and FDTD2 as example cases. The plot shows a close-up of the wave crest for time steps n = 510 (cross), n = 530 (plus sign), and n = 555 (point), illustrating the formation of the initial spike that disrupts the solution. To quantify the development of the instability, we calculate the fractional change of the numerical solution relative to the analytical solution, δ. For the algorithms LW1, LW2, LW3, LW4, FDTD1, and FDTD2 considered here, δ ≈ 0.0413, 0.0412, 0.0273, 0.0279, 0.0473, and 0.017, respectively (see Table 2), demonstrating that, in order of stability from best to worst, we have FDTD2, LW3, LW4, LW2, LW1, and FDTD1.
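To make the detection procedure concrete, the following is a minimal Python sketch (an illustrative reimplementation, not the authors' code) of the breaking-time/location detection and the δ metric described above, applied to a numerical solution stored as a 2-D array u[n, m]. The threshold α_0 and amplitude A_0 follow the values quoted in the text; since the exact expression for δ is not reproduced in the extracted text, a simple relative difference is assumed here.

```python
import numpy as np

def detect_breaking(u, A0, alpha0):
    """Scan the solution array u[n, m] and return (n_b, m_b): the first time
    step at which |A0 - max_m u[n, m]| >= alpha0, and the index of that
    maximum, following the procedure described in the text."""
    for n in range(u.shape[0]):
        m_star = int(np.argmax(u[n]))
        if abs(A0 - u[n, m_star]) >= alpha0:
            return n, m_star
    return None, None          # threshold never exceeded

def delta_metric(u_num, u_exact):
    """Fractional change of the numerical solution relative to the analytical
    solution at the breaking point (|u_num - u_exact| / |u_exact| is an
    assumed form of the metric)."""
    return abs(u_num - u_exact) / abs(u_exact)

# Thresholds quoted in the text: alpha0 = 0.01 * A0 = 5e-3, hence A0 = 0.5.
A0 = 0.5
alpha0 = 0.01 * A0
# u = ...            # 2-D array (time step, grid index) produced by a scheme
# n_b, m_b = detect_breaking(u, A0, alpha0)
# delta = delta_metric(u[n_b, m_b], analytical_value_at(n_b, m_b))
```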
The nonconservative algorithms appear to have inferior stability in comparison to the conservative algorithms. Of the nonconservative algorithms, the FDTD1 solution has the least stability, while the two nonconservative LW methods (LW1 and LW2) are virtually identical. Of the conservative algorithms, the FDTD2 solution has the most stability, and very little separates the two LW solutions (LW3 and LW4). Although the FDTD2 algorithm appears to break before the LW algorithms (as indicated by its n_bs value), its comparison to the analytical solution for the same position and time indicates that it ultimately has superior stability. Note that δ can also be viewed as a measure of accuracy, since instability leads to inaccuracy and the two are therefore dependent concepts. Figure 3: (a) Solution of (1) obtained from the FDTD algorithm (FDTD1: Equation (31)), displaying the spatial structure, m ∈ [55, 70], for time steps n = 510 (cross), n = 530 (plus sign), and n = 555 (point), exhibiting the evolution of the nonlinear wave and the advective instability. (b) Solution of (1) obtained from the conservative FDTD algorithm (FDTD2: Equation (33)), displaying the spatial structure for the same time steps. The plots illustrate the formation of the initial spike, as a result of the instability, that disrupts the onward solution. FDTD1 and FDTD2 span the range of δ calculated in this paper. Conclusion This paper proposes a novel and simple technique to compare the relative empirical stability of finite difference algorithms (or any grid-based scheme) by solving the inviscid Burgers' equation to analyse their respective breaking times. The proposed technique provides a quick and easy way of determining the stability of a proposed algorithm prior to more thorough stability tests tailored to the specific system to be solved. The relative empirical stability of two numerical methods can be assessed by analysing their respective breaking times. Due to the nature of the instability, where a discontinuity forms due to the solution becoming triple-valued, the method also gives an indication of how well numerical schemes deal with discontinuities such as shocks and boundary interactions. The proposed method is analogous to the A-stability method but tailored to PDEs in which the advective derivative is key. The technique is effective at determining the relative stability of grid-based numerical algorithms. It demonstrates that the conservative schemes behave very similarly to one another, as do the nonconservative schemes, and that the stability of the conservative algorithms is marginally better than that of the nonconservative algorithms. In order of most to least stable, we have the conservative FDTD algorithm (FDTD2), the conservative Lax-Wendroff algorithms (LW3 then LW4), the nonconservative Lax-Wendroff algorithms (LW2 then LW1), and the nonconservative FDTD algorithm (FDTD1). The LW algorithms contain an inherent diffusive term that can help to dampen the instability, quenching the wave steepening for a while. In contrast, the FDTD algorithms lack a diffusive term to help counteract the inevitable instability. In spite of this, the FDTD algorithms do well to sustain coherent behaviour. In principle, one could add a stabilizing diffusive term: ∂_t u + u ∂_x u = ν ∂²_x u. This is Burgers' equation, where ν is the viscosity coefficient. If the diffusive term quenches the wave steepening, a stable solitary wave structure persists.
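As an illustration of the kind of conservative, method-of-lines scheme discussed above (central spatial differences on the flux form u²/2, advanced with a classical fourth-order Runge-Kutta step, with an optional viscous term as in the stabilised equation just given), the following Python sketch is a generic reconstruction in the spirit of FDTD2 rather than the authors' implementation. The grid and wave parameters are assumptions, not the values of Table 1, and the resulting solution array can be fed to the detection sketch shown earlier.

```python
import numpy as np

def rhs(u, h, nu=0.0):
    """Semi-discrete right-hand side du/dt = -d(u^2/2)/dx + nu * d2u/dx2,
    using central differences; the boundary points are held fixed at zero."""
    dudt = np.zeros_like(u)
    flux = 0.5 * u**2
    dudt[1:-1] = -(flux[2:] - flux[:-2]) / (2.0 * h)
    if nu > 0.0:
        dudt[1:-1] += nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    return dudt

def rk4_step(u, dt, h, nu=0.0):
    """Classical fourth-order Runge-Kutta step for the semi-discrete system."""
    k1 = rhs(u, h, nu)
    k2 = rhs(u + 0.5 * dt * k1, h, nu)
    k3 = rhs(u + 0.5 * dt * k2, h, nu)
    k4 = rhs(u + dt * k3, h, nu)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Illustrative setup: Gaussian perturbation on a uniform grid (assumed values).
m_max, h, dt, n_steps = 200, 1.0, 0.5, 1000     # satisfies v*dt <= h for v ~ A0
A0, width, m0 = 0.5, 20.0, 100
x = np.arange(m_max + 1) * h
u = A0 * np.exp(-((x - m0 * h) ** 2) / width**2)

history = [u.copy()]
for n in range(n_steps):
    u = rk4_step(u, dt, h, nu=0.0)   # set nu > 0 to add the stabilising diffusion
    history.append(u.copy())
u_all = np.array(history)            # shape (n_steps + 1, m_max + 1)
```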
Alternatively, the time-fractional inviscid Burgers' equation is worth investigating: the inclusion of a fractional time derivative may stave off the breaking time for longer, potentially improving stability [38]. Fractional-order differential equations (FDEs) have gained importance and popularity recently in their application to physical systems due to their nonlocal properties. This means that the next evolutionary state of the system depends on its historical states, not just its current state. Recent investigations have analysed the time-fractional Navier-Stokes equation [39] and the SWEs [40] for bespoke systems. Many numerical schemes have been proposed for solving FDEs [41,42] as well as for those applicable to fluid simulation, e.g., the fractional Burgers equation [43], the fractional diffusion equation [44] (relevant to the incompressible Navier-Stokes equation), and the fractional parabolic differential equations [45] (relevant to the vorticity-stream function formulation of fluids). The efficacy of a numerical algorithm depends upon a number of criteria including stability, accuracy, and ease of implementation. LW methods are, by definition, second-order accurate in space and time. In general, the choice of the ODE solver in FDTD dictates the temporal order of accuracy: here, a fourth-order Runge-Kutta method was used. However, other techniques can be used, such as the leap-frog method. In this respect, FDTD has superior versatility since greater temporal accuracy can be achieved. Additionally, FDTD can easily be modified to have higher spatial accuracy by simply using higher-order finite difference approximations of the spatial derivatives. Extending the LW method in such a way is nontrivial. In comparison to the LW method, FDTD is very easy to implement, particularly when a system of coupled differential equations is considered. For a system of equations, the LW algorithm can become hard to obtain as the Jacobian of the system becomes more involved. With this in mind, it would seem reasonable to opt for FDTD for certain problems, especially when a system of differential equations is being considered, such as for the numerical solution of the shallow water equations. Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that there is no conflict of interest regarding the publication of this paper.
6,690.6
2022-06-02T00:00:00.000
[ "Computer Science" ]
Cartesian Egalitarianism: From Poullain de la Barre to Rancière This essay presents an overview of what I call "Cartesian egalitarianism," a current of political thought that runs from François Poullain de la Barre, through Simone de Beauvoir, to Jacques Rancière. The impetus for this egalitarianism, I argue, is derived from Descartes' supposition that "good sense" or "reason" is equally distributed among all people. Although Descartes himself limits the egalitarian import of this supposition, I claim that we can nevertheless identify three features of this subsequent tradition or tendency. First, Cartesian egalitarians think political agency as a practice of subjectivity. Second, they share the supposition of an equality of intelligences and abilities among all human beings. Third, these thinkers conceptualize politics as a processing of a wrong, meaning that politics initiates new practices through which those who were previously oppressed assert themselves as self-determining political subjects. ...because we direct our thoughts along different paths and do not attend to the same things (VI: 1-2). Looking past the ironic posturing of the first sentence, Cartesian egalitarians (such as Poullain de la Barre or Joseph Jacotot) take this "bon sens" as Descartes' fundamental idea: "there are not several manners of being intelligent, no distribution between two forms of intelligence, and then between two forms of humanity. The equality of intelligences is first the equality of intelligence itself in all of its operations" (Rancière, "L'actualité" 412-413, author's translation).1 The equality of intelligences has one other crucial consequence: if there is no hierarchy of intelligences, then there is no natural or inevitable hierarchy between those who "naturally" rule and those who are ruled.
Third, Cartesian egalitarians conceptualize politics as the processing of a wrong through the practice of dissensus. As Rancière writes, "politics becomes the argument of a basic wrong that ties in with some established dispute in the distribution of jobs, roles, and places," initiating "conflict over the very existence of something in common between those who have a part and those who have none" (Disagreement 35). Politics turns on who or what exists, or who or what counts, in common in society; it constitutes a new and more egalitarian distribution of the sensible. Dissensus produces conflict because the logic of policing counts the parts of society as parties with specific interests that can be represented according to customary forms of intelligibility, with no possibility that there would be a void or supplement to society, a part which has no part which is not represented ("Ten Theses" 36). For Rancière, politics takes place when a "part of those who have no part" (la part des sans-part) contests the policing of social relations and conventions in order to introduce new ways of speaking, being, or doing.

In this essay, I will examine how these three features of Cartesian egalitarianism emerge from the work of Descartes. Although he makes subjectivity (as cogito) and intellectual equality central components of his philosophy, Descartes nevertheless limits his critique of the prejudices of intellect, authority, and habit to the epistemo-metaphysical problem of separation. After reconstructing how egalitarianism functions in Descartes' system, I will show how Poullain reconceptualizes the problematic of separation in a socio-political context, transforming it into the problem of a wrong. He uses Descartes' dualism to show that there are no natural qualities of the mind or the body that can justify the inequality of the sexes. Women have been wronged, Poullain argues, because there are no clear and distinct reasons for their subjugation. Instead, women have been denied the full capacity for the exercise of their reason due to the political self-interest of men and the force of social convention.

After examining Descartes and Poullain, I will shift the discussion to the philosophy of Beauvoir. Though this has the unfortunate effect of setting aside many other developments in the historical relationship between Cartesianism, egalitarianism, and feminism, Beauvoir's conception of the relationship between political subjectivity and the processing of a wrong foreshadows several of Rancière's concerns.2 I will argue that Beauvoir's account of a wrong (which occurs, for instance, when a woman is forced to assume, and thus limit, her freedom as an "other" rather than a "subject") marks a significant advance over Sartre's individualist ethics of the 1940s. Because Beauvoir focuses on those whose agency has been historically marginalized, her account of political subjectivity avoids the pitfalls that have so often plagued many strains of Marxism, which amplify the teleological character of the proletariat's historic mission. For Beauvoir, it is not possible to subordinate one struggle to another; a historic mission, as it were, can only be built out of practices of solidarity, and not out of the hierarchization of demands, abilities, and intelligences.
Thus I focus on Beauvoir because her Cartesian egalitarianism is an important precedent to Rancière's. Because Rancière does not, to my knowledge, extensively discuss the work of either Poullain or Beauvoir, I do not intend this as an explication of Rancière's work, but rather as "a history or, if you prefer, a fable" (Discourse VI: 4) that provides an overview of a longer tradition of egalitarianism than is typically acknowledged. I will conclude by showing, through a brief reading of his book The Ignorant Schoolmaster, how Rancière's understanding of Cartesianism emphasizes the egalitarianism of Poullain and Beauvoir rather than Descartes' metaphysical or epistemological commitments.

II. Egalitarianism and Separation

We do not typically consider Descartes an egalitarian. He is more often interpreted, in the post-Heideggerian tradition of philosophy, as an epochal figure of the modern destiny of metaphysics. On this account, Descartes introduces the metaphysical ground of technicity by dividing all beings between thinking subjects and objects of a calculable objective world.3 Or, following Antonio Negri, he is considered an architect of a "reasonable ideology" that expresses the class compromise constitutive of the formation of bourgeois class power after the 1620s: whereas Descartes formulates his philosophy as the production of human significance (and practical utility) in its separation from the world, the bourgeoisie affirms its position in civil society at the same time that it accepts a temporary class compromise with absolutism (Negri 295-296).

Recently, however, several prominent radical thinkers have laid claim to the legacy of the Cartesian subject. For example, Alain Badiou, Rancière, and Slavoj Žižek all hold that the emergence of subjectivity in political praxis is irreducible to the reconfiguration of Cartesian thought as instrumental rationality, whether it is considered as a moment of technological enframing or as a moment of bourgeois compromise.4 Yet the Cartesianism of Badiou and Žižek does not imply the supposition of equality. Instead, their commitment is largely programmatic. Žižek, in The Ticklish Subject, proposes that the Cartesian subject is a revolutionary alternative to what he considers to be the hegemony of a "liberal-democratic multiculturalism" that ranges indiscriminately from new age obscurantism to postmodern deconstruction (1-4). For Badiou, Descartes is a paradigmatic materialist dialectician, insofar as he maintains that truths are eternal against the general presumptions of "democratic materialism," which counts only bodies and languages, and Nietzsche's and Heidegger's "aristocratic idealism," which, despite its overwhelming sense of resignation, aims to preserve the poetic event against modern nihilism (Logics of Worlds 1-6).
In contrast to these interpretations, I will argue that Descartes' work wavers between egalitarianism and the constraints of method. On the one hand, the Cartesian project, even for Descartes, requires the supposition of equality in order for its method to be persuasive. That is, rather than appealing to tradition, convention, or authority to establish the validity of his system, Descartes calls for the well-considered and reasonable judgments of his readers. And yet, on the other hand, Descartes makes persistent appeal to the necessity of method to prevent the egalitarianism of his address from encouraging a thoroughgoing critique of all social conventions. Instead, his system is redirected toward epistemological and metaphysical questions, which are structured by the problem of separation.

Let us begin with Descartes' supposition of equality. Despite his program of searching out the self-foundational moment of a system, his philosophy is nevertheless conditioned (but not necessarily determined) by its historical situation, or its distribution of the sensible. When Rancière speaks of a distribution of the sensible, it includes the relations between subjects, objects, and places, and the ways of speaking, doing, and being that make these relations intelligible. While policing, in Rancière's terms, is a process of hierarchically arranging these relations and enforcing them, we should not consider a distribution of the sensible as static until politics intervenes; instead, the intelligibility of these relations is also a dynamic, which can itself change, enter into periods of stability, and undergo crises from which a politics of dissensus can emerge (or not).

Though it is not a moment of politics in Rancière's sense, Descartes' thought inaugurates a new way of thinking the relations between subjectivity, habit, and intelligibility within an intellectual milieu in transition, in which Scholasticism and Renaissance philosophy have been challenged by a renewed sense of skepticism. This conjuncture is not unique to philosophy, but is itself enmeshed within a series of socio-political upheavals. Negri, for instance, points to the recomposition of class power after the European economic crisis of 1619-1622 and the condemnation of Galileo (112-126; 140-155). In addition, the renewal of skepticism is not only of philosophical interest, but also an expression of the "epistemological implications of cultural difference" brought on by advances in European techniques of travel (to, for instance, China), and the conquest of the Americas (Bordo 40-41). Take, for example, Montaigne's remark that "each man calls barbarism whatever is not his own practice … indeed it seems that we have no other test of truth and reason than the example and pattern of the opinions and customs of the country we live in" (quoted in Bordo 41).

In the Discourse, Descartes makes a similar remark: "It is good to know something of the customs of various peoples, so that we may judge our own more soundly and not think that everything contrary to our own ways is ridiculous and against reason (contre raison), as those who have seen nothing of the world ordinarily do" (VI: 6, translation modified).5 He draws two conclusions from this diversity of customs: first, "not to believe too firmly anything of which I had been persuaded only by example and custom" (VI: 10), and second, that the knowledge of cultural difference helps justify the supposition that opens the Discourse, that good sense or reason is "naturally equal in all men" (VI: 2).6
From these two conclusions Descartes proposes a new relationship between subjectivity, habituation, and intelligibility. This new relationship is founded on the cogito or thinking being, which emerges from a method of doubt directed toward habits or practices derived from custom and a discourse of intelligibility established on the authority of the schools. The significance of this critique of convention turns on whether it is conceived as a project of intellectual emancipation, or as the metaphysical and epistemological problem of separation.

It is possible, beginning with the Discourse, to read Descartes' project as an exercise in intellectual emancipation. Starting from the premise of the equality of intelligences and abilities, Descartes delineates his method of directing his reason as an example of "self-instruction" for the reader to judge as to whether it is a worthy example for imitation or improvement (VI: 4). It is not a necessary order of reasons, as it is in the Meditations, or an attempt at "teaching," but an account of how Descartes had "tried to direct [his] own" reason (VI: 4). By stressing the egalitarian aspect of this work, we can see that the validity of the subject as thinking being is verified by the capacity for the direction of reason to be repeated through each reader's self-instruction. The emergence of the cogito transfers authority from the customs of the schools to all those to whom reason or good sense is distributed, a lesson in the practice of thinking learned from Descartes' travels rather than the schools (VI: 5-6). The intelligibility of the new philosophy, and its foundation, the cogito, is verified through the free use of the reader's own reason, rather than doctrinal authority.

Nevertheless, the socio-political consequences and the gestures toward a broader vernacular culture are absent from the Meditations. The contingent emergence of the cogito as a response to a crisis in intellectual authority within the sciences is instead given a metaphysically necessary status, and philosophical inquiry becomes a problem of ascertaining the proper epistemological and metaphysical foundations for physics. The egalitarian moment of the Discourse is now restricted to the problem of the separation of self and world.7 Once we enter into the order of reasons of the Meditations, the situation becomes, as Sartre argues, that of "the autonomous thought which discovers by its own forces intelligible relations between already existing essences" (La liberté cartésienne 289, author's translation).8

Let us look at the way that the Meditations recasts the relationship of subjectivity, habit, and intelligibility. The "First Meditation" begins with Descartes's acknowledgement that he has been accustomed since childhood to a method of making judgments that has led to numerous falsehoods, which leads him to suspect that the basis of those judgments, information "acquired either from the senses or through the senses," is doubtful (VII: 18). This passage carries a double significance. In fact, Descartes wavers between two different accounts of the basis of the judgments he has discovered to be doubtful. Both share the same starting point (the prejudice of relying on the senses has a basis in the habits acquired in childhood), but they differ on how these habits are acquired. One, which I will call the "prison of the body" account, identifies the body as the cause of the prejudices that prevent the proper use of reason.9
In the subsequent history of Cartesian egalitarianism, this metaphysical account of the origin of prejudices is rejected in favor of the second account, which focuses on the socio-political critique of conventions, such as criticisms of Scholasticism.

In the Discourse, we find a socio-political critique of the prejudices of childhood. One accepts teaching based on authority and explication, rather than according to reason. Descartes recounts that from "my childhood I have been nourished upon letters, and … I was persuaded that by their means one could acquire a clear and certain knowledge of all that is useful in life" (VI: 4). Books, "letters," had taught him the basic premise of Scholastic philosophy, that "nihil est in intellectu nisi prius fuerit in sensu," that "nothing is in the intellect unless it was first in the senses" (Carriero 12ff). Rather than understanding the immediacy and intelligibility of the sensible as a naïve standpoint, we could understand it as a product of a determinate (and by Descartes' time, reified) historical production of knowledge, that is, of Scholasticism. In this case, the method of doubt and the emergence of the cogito subject become a challenge to one particular historical system of knowledge, but a persistent vigilance is required to prevent Descartes' thought from being reified into a teaching based on authority, a vigilance evidenced by his repeated references to needing to inculcate new habits of thought against the lures of custom. This is how, broadly speaking, the Cartesian egalitarians will take up his thought.

On the other hand, Descartes also faults the body itself for propagating the habits and prejudices of childhood; the body is, on this account, the prison of the soul.10 In the Principles of Philosophy, Descartes writes:

In our childhood the mind was so immersed in the body that although there was much that it perceived clearly, it never perceived anything distinctly. But in spite of this the mind made judgments about many things, and this is the origin of the many preconceived opinions which most of us never subsequently abandon. (VIIIa: 22)

On this account, Scholasticism's reliance on the senses as the foundation of knowledge serves to reinforce the prejudices of the body. The task for thinking, for Descartes, is to obtain through the method of doubt a reflexive distance from what we take to be the immediacy of the senses to allow the intellect to mediate our judgments. The task is to separate thinking substance, the cogito, from the mechanisms of the body through a method that makes it possible to discover clear and distinct ideas of "already existing essences," as Sartre puts it: of, for instance, thought, extension, substance, and God (La liberté cartésienne 289, author's translation). Of course, as many commentators have pointed out, it is difficult to see how Descartes can establish a measure to test the truth of a judgment after the introduction of hyperbolic doubt. Even if he can demonstrate the truth of the cogito as a thinking being, it is still possible that he is being deceived about other kinds of knowledge. To overcome the evil genius hypothesis, Descartes proceeds in the "Third Meditation" to attempt a proof of his dependence on a supremely perfect being. This supremely perfect being functions, in the system, as the guarantor of the knowledge that Descartes establishes throughout the rest of the Meditations, gradually returning into his grasp the fields of mathematics, physics, and everyday sense experience (as long
as these things are conceived "clearly and distinctly"). With thought and extension clearly and distinctly separated, and with their correspondence guaranteed by God, the reconstruction of philosophy from the cogito allows Descartes to introduce a physics that explains bodies and movements according to the general rules of mechanics and mathematics, rather than the Scholastic, or childlike (Descartes, "Sixth Set of Replies" VII: 437-439), cognition of universals from particular qualities derived from the senses (see Carriero 16-17; Garber 84-88). By establishing the "already existing" essential validity of the separation of thought and extension, Descartes limits the possibility that doubt toward the sensible could open into a socio-historical critique, that is, that knowledge could be historically situated.

Instead, Descartes suggests that Scholasticism lends the errors of the body an artificial veneer of rationality. In the unfinished dialogue "The Search for Truth," Descartes juxtaposes the "natural" use of reason to the "artifice" of Scholasticism. In this text, the greatest threat to knowledge is not the separation of thought and extension and of self and world, because in his system God guarantees that they have an intelligible relationship; the greatest threat is that the good sense of the meditator is captured by the artifice of authority and the schools, that, as in the case of Epistemon the Scholastic, one is lulled into the "habit of yielding to authority rather than lending [one's] ear to the dictates of reason" (X: 523).11 By contrast, Descartes claims that his method begins, through the use of doubt, by inculcating "a judgment which is not corrupted by any false beliefs and a reason which retains all the purity of its nature" (X: 498). The whole rhetorical staging of "The Search for Truth" relies on Eudoxus being able to direct Polyander (a character who has never studied but possesses "a moderate amount of good sense") in this "natural" use of his reason with the aim of discovering the true principles of (Cartesian) philosophy (X: 514). Yet if Cartesianism lays claim to being the "natural" use of reason, then it risks, despite Descartes' protests that he is not attempting to "teach" anyone, repeating the problems that he had identified with Scholasticism: the naturalization of doctrine through the reification of a historically situated knowledge. In Rancière's terms, the intellectual emancipation promised by the Cartesian "ego sum, ego existo" is subordinated to the intellectual policing of method.

III. The Rationality of a Wrong

The social and political consequences of Descartes' thought were not lost on his contemporaries, especially in the conflicts over the equality of the sexes.12
His account of the egalitarian distribution of reason stands in stark contrast with the Aristotelianism of the Scholastics. Though Aristotle argues that the possession (hexis) of logos (speech and reason) is unique to human animals, implying a universal capacity, he nevertheless proceeds to exclude some humans from this capacity. So while the logos and an aisthesis of justice, the useful, or the good, may be shared (Politics 1253a9-18), this logos is not possessed by all. A slave, Aristotle argues, is he who "by nature" apprehends (aisthesis) speech but does not possess (hexis) it (1254b24-25). Women are similarly dispossessed of reason. At the beginning of the Politics, Aristotle claims that women are made only for the purpose of the reproduction of the species and so are inferior to men. Later, he argues that women are free (1259a40) but are incapable of ruling, for just as the rational part of the soul rules over the irrational, so does a man rule over a woman, who possesses reason without authority (1260a14).

Poullain de la Barre appropriates Cartesian philosophy to show, in contrast to Aristotle, that patriarchal social forms possess authority without reason. In his On the Equality of the Two Sexes, Poullain notes that:

if something is well established, then we think it must be right. Since we think that reason plays a role in everything men do, most people cannot imagine that reason was not consulted in the setting up of practices that are so universally accepted, and we imagine that reason and prudence dictated them. (54)

Using the results of Cartesian philosophy, Poullain argues that the inequality of the sexes, that is, the subjugation of women, is founded on prejudices of habit and custom, and on political self-interest rather than well-founded reasons. Though we have seen that Descartes restricts his system to the epistemological and metaphysical problem of separation, Poullain uses Cartesian philosophy to conceptualize the wrong at the basis of the inequality of the sexes: both popular opinion and scholarly learning dispossess women of subjectivity and the capacity to reason.

Poullain's task, then, is to demonstrate how the part of those who have no part (women as they are socially excluded and subjugated) can lay claim to a political subjectivity that they have been denied. This claim begins with undermining the foundations of long-standing prejudices that justify inequality. This process of critique can open the possibility of a more egalitarian distribution of the sensible, in which women are recognized as thinking and speaking subjects, not merely passive objects of men's possession, and as agents who are just as able as men to make public use of their reason.
In On the Equality of the Two Sexes, Poullain argues (in a passage that later appears in paraphrase as an epigraph to The Second Sex) that the historical and intellectual record shows that:

Women were judged in former times as they are today and with as little reason, so whatever men say about them should be suspect as they are both judges and defendants. Even if the charges brought against them are backed by the opinions of a thousand authors, the entire brief should be taken as a chronicle of prejudice and error. (76)

To overturn these judgments, Poullain criticizes both "popular" and "learned" prejudices against women. There are, he argues, no natural reasons for the "chronicle of prejudice and error," but only the "reason" of political self-interest. The oppression of women has been enforced by the physical strength of men, and policed by naturalizing a gendered division of labor within the distinction of public and private domains (see his "historical conjectures" at 56-60). This situation is reproduced, he argues, when in both ancient and modern times intellectuals have generally taken "their prejudices with them into the Schools" and worked to give reasons for the subjugation of women (79).

Poullain reconceptualizes Descartes' distinction between thought and the body to show that there are neither intellectual nor physical inequalities between men and women. Customary prejudice, he notes, holds that women cannot exercise reason as well as men, often pointing out how women are more passionate or intemperate, how their use of reason is less detached from the body. Poullain turns this argument around to claim that those who maintain this customary prejudice have themselves provided reasons that do not consider the faculties of the mind independently from the body. For, he claims, given that thought is a substance other than body, the mind has no sex, and if the mind has no sex, good sense or reason is equally distributed to both men and women. Moreover, if equality is the case, there is no natural basis for an intellectual division of labor. From the standpoint of well-considered reasons, antifeminists have confused nature and custom: the perceived intellectual flaws of women are the product of a lack of education. In addition, the intellectual stultification of women, Poullain notes, also has a political basis: those who deny that the "scope of reason is boundless and has the same influence over all people" do so out of self-interest, fearing that ending a gender- (and class-) based intellectual division of labor will devalue the prestige and authority that comes with learning (95). Given the numerous prejudices of intellectuals, Poullain even suggests that the exclusion of women from education could work to their eventual advantage because they would be able to direct their natural reason without the artifice of the schools (62-65).13

That the mind has no sex, and that good sense or reason is equally distributed among all humans, are the positions of Poullain that are closest to Descartes. It is more difficult to use the Cartesian system to establish that inequality is not based on embodied differences; that task falls to a political thought that seeks to show both how all humans are capable of exercising their freedom, and how, as Beauvoir will write three centuries later, biology is not destiny.
IV. Woman as Other, Woman as Subject

As is well-known, both Sartre and Beauvoir take the cogito as a starting point for interrogating the freedom of human being. In Beauvoir's words, the "Cartesian cogito expresses both the most individual experience and the most objective truth" insofar as it affirms that human freedom is the basis of all values (Ethics of Ambiguity 17). This much Sartre and Beauvoir have in common, but they diverge concerning how these values can be realized within the social life of the individual. Through the mid-1940s, Sartre remains focused on the problem of how an individual can act freely within a historical situation that is not of his or her own making, and many of his more hyperbolic comments imply that, as long as one is not in bad faith, all choices are equivalent as long as they are free.18 For Beauvoir it is necessary that practices of freedom and the situations that they transform be understood as historically differentiated, so neither situations nor choices are equivalent. This requires Beauvoir to move from an individualist ethics to conceptualizing these concerns from their bases in social perceptions and relations (see Simon 41-54). In The Second Sex, Beauvoir reconceptualizes the question of subjectivity as a political problem, not just in the sense that she examines how a subject assumes her freedom within a historical situation, but also insofar as this question turns on what we have called a wrong: she pursues the consequences of the fact that, despite being an "autonomous freedom," a woman "discovers and chooses herself in a world where men force her to assume herself as Other" (Second Sex 17). Like Poullain, Beauvoir rejects the thesis that there are biological data that necessarily determine, and form the "fixed destiny" of, the subjugation of women within the social hierarchy of the sexes (44). Instead, she argues that all situations are politically and historically conditioned, meaning that all possible biological data take on social values rather than intrinsically natural values that transcend a given situation.19

Beauvoir therefore turns to the investigation of the historical and political bases of the inequality of those who are able to assume their subjective freedom, and those who (depending on the situation, this could be women, African Americans, the colonized, and other groups) confront a historical situation in which they are considered pejoratively as "others." It is a fundamental supposition of existentialism that all human beings have the capacity to exercise their freedom, because freedom is the basis of all social values, and yet in each situation they cannot exercise the full extent of their freedom.
Beauvoir politicizes the existentialist account of subjectivity and freedom by conceptualizing how a wrong is introduced into social distinctions between subjectivity and alterity. This wrong occurs because women are constrained by a situation in which men are subjects, and women are others. The distinction between self (or subject) and other, she notes, is not necessarily the basis of a wrong. The category of the other "is as original as consciousness itself" (6). Following Lévi-Strauss, Beauvoir states that the distinction between self and other can designate a relationship of reciprocity (such as that between nature and culture) or opposition and antagonism (between, for instance, two different cultures). But it is quite possible that the well-traveled person can recognize the reciprocity of these two different cultures, which relativizes their concept of alterity, just as Descartes noted that visiting others can reveal how one's own customs are just as arbitrary and locally determined as another's. In such a situation, alterity is not a negative category, but one through which one's own values are questioned and reconsidered.

What is different about the situation of women is that the distinction between men and women carries with it a series of value-laden social judgments: a woman is defined against the standard of man and the man's attributes are given positive values, while women's attributes are considered negatively as flaws or insufficiencies. These values are reinforced because men arrogate to themselves the sole capacity to make such judgments. As Kail writes, women are interpellated in "a specific regime of alterity [that] shows that rulers control the meaning of the situation by setting the very conditions that make relationships possible" (157). Beauvoir produces numerous examples to show how, in such situations, "Humanity is male, and man defines woman, not in herself, but in relation to himself; she is not considered an autonomous being" (Second Sex 5). This situation is prevalent in both social and intellectual life. Take, for instance, Lévinas' account of one's responsibility toward a transcendent Other: Beauvoir sardonically notes that his claim that "alterity is accomplished in the feminine" forgets that a "woman also is consciousness for herself" (6, footnote).

Although much of The Second Sex is dedicated to diagnosing and cataloguing how a woman is defined against a masculine standard, Beauvoir also points toward the possibilities of women's emancipation. These possibilities can help us understand how the relationship of subject and other can give rise to a politics that challenges a wrong. First, a wrong occurs if the other's freedom is denied, such as when so-called feminine values, like the myth of the "Eternal Feminine," are created or upheld by men and utilized to police the "proper" places or practices for a woman. For Beauvoir, it is not enough to attempt to reverse the polarity of value-laden terms.20
Instead, reciprocity must be introduced into the free creation of social values.

Second, Beauvoir's politics are universal and egalitarian. She argues that the social hierarchization of recognizing some people as subjects and some people as others is the fundamental basis of inequality. Be it the distinction between men and women, Americans of European descent and African Americans, or the bourgeoisie and the proletariat, "whether it is race, caste, class, or sex reduced to an inferior condition, the justification process is the same" (12). This serves as a reminder that in each case those who rule attempt to demonstrate that there is some natural reason for inequality. Thus a crucial task for politics is to demonstrate that these so-called natural differences are based on social relations. But the dual lesson of Beauvoir's critique of Marxian economism must not be forgotten. First, all struggles emerge from, and are conditioned by, their local and historical situation, which means that they use varied approaches to emancipatory practices. Hence second, the emancipatory aspirations of a people cannot be subordinated to another group's aspirations. One cannot then argue that the historical mission of the proletariat requires that, for instance, women subordinate their demands and practices to those of the proletariat. (Despite the rejection of a teleological concept of historical struggle, one should nevertheless maintain, like Beauvoir, that women's liberation requires the end of their economic exploitation.) These various struggles can only be strengthened and reinforced by what Beauvoir calls "reciprocity": by practices of solidarity that do not reproduce the social hierarchies that these groups are combating. While Sartre's existential appropriation of the cogito marks him as a Cartesian, Beauvoir is a Cartesian egalitarian. Her conceptualization of political subjectivity follows Descartes and Poullain insofar as it affirms that reason or good sense is equally distributed to all people. Far from a world of sovereign subjects and their inferiors, she proposes a political program that instills the reciprocity of practices of freedom. Her existentialism begins with the individual, but it demands that the individual aim toward accomplishing practices of reciprocity and freedom that expand "toward an open future" (16).

V. Toward Collective Egalitarianism

This history or fable of the egalitarianism that leads from Descartes to Beauvoir shows that "Cartesianism" cannot be reduced to several of its more prominent conceptual commitments, such as mind-body dualism, the technicity of its mechanistic physics, or even, as we will see with Rancière's critique, the so-called rigors of the method. Cartesianism is also defined by its egalitarianism: the formation of a political subject from the supposition of the equality of intelligences. This subject is political insofar as its praxis turns on the processing of a wrong, an egalitarian challenge to the inequalities of any social order. As we have seen, Cartesianism from Poullain to Beauvoir constitutes a direct challenge to the claim that there is a hierarchy of intelligences.
If egalitarianism is a key component of this kind of Cartesianism, then it becomes possible to see why Rancière argues, in Disagreement, that "Descartes's ego sum, ego existo is the prototype of such indissoluble subjects of a series of operations implying the production of a new field of experience" (35). This new field of experience, what we could call, following Beauvoir, the collective reciprocity of equality, is opened when the part of those who have no part engage in political practice, challenging a social distribution that counts them as inferiors or subordinates, such as the count, challenged by Poullain and Beauvoir, that women are others and not subjects of freedom. Without this aspect, it is difficult to see why Rancière lays claim to the Cartesian legacy, for he directly challenges, through a discussion of Joseph Jacotot, many of Descartes' epistemological and metaphysical assumptions.

... struggles cannot be subordinated to one another, but must be reinforced through solidarity, by whatever they have in common. A Marxist analysis would certainly note that Cartesianism in general stakes its validity on an individual's perception of the social world and an individual's practice. Rancière (like Beauvoir before him) sees the limitations of individualism, and that is why he transforms the subjective ego sum, ego existo into the reciprocity of the nos sumus, nos existimus, with the latter designating a new field of collective political practice. But the Cartesian "prototype" has two positive aspects. First, like Marx, as I have argued above, Poullain and Beauvoir conceptualize politics as the processing of a wrong. But second, and in contrast to the teleological tendencies of Marxism, there is no privileged subject, no finality that drives history, but only the persistently renewed struggle against all forms of social inequality. Emancipation is not an end point of a historical continuum. Instead, emancipation is only possible through the efforts of those who combat inequality and oppression through practices of reciprocity and solidarity. In this sense, Rancière is a Cartesian egalitarian.

Notes

3 ... "knowledge a foundation that will be in accord with it" (What is a Thing? 103). This foundation, of course, is the cogito.

4 Badiou argues that a Marxism "sutured" to the scientific condition of philosophy (read: Althusser) dovetails theoretically with Heidegger when it reduces the subject to "a simple operator of bourgeois ideology." The scientific Marxist, then, would say: "for Heidegger, 'subject' is a secondary elaboration of the reign of technology, but we can see eye to eye if this reign is in fact also the bourgeoisie's" (Manifesto for Philosophy 92). It should be noted that Negri is more ambivalent than Badiou's typical "scientific Marxist"; he seems to admire the revolutionary character of Descartes' thought even if he reproaches what he sees as its fundamental compromise.

5 Later in the Discourse, Descartes writes: "I have recognized through my travels that those with views quite contrary to ours are not on that account barbarians or savages, but that many of them make use of reason as much or more than we do" (VI: 16).
6 Aimé Césaire, in his Discourse on Colonialism, invokes the principles of Cartesianism against the false universality of the colonial legacy (its science, politics, and sociology), which denigrates the non-European to the benefit and "glory" of Western bourgeois society. He argues that "the psychologists, sociologists et al., their views on 'primitivism,' their rigged investigations, their self-serving generalizations, their tendentious speculations, their insistence on the marginal, 'separate' character of non-whites," rest on "their barbaric repudiation, for the sake of the cause, of Descartes's statement, the charter of universalism, that 'reason … is found whole and entire in each man,' and that 'where individuals of the same species are concerned, there may be degrees in respect of their accidental qualities, but not in respect of their forms, or natures'" (56).

19 Interestingly, for Beauvoir childhood plays a crucial role in the habituation of social roles, values, and prejudices. In The Ethics of Ambiguity, she writes that "Man's unhappiness, says Descartes, is due to his having first been a child" (35), but her reading of Descartes in this respect bears more similarities to Poullain, who challenged the reification and "inevitability" of social convention, than to Descartes, who wavered between faulting as the origin of prejudice and habit either social reification or the natural composition of the body itself. Beauvoir criticizes both the Freudian determination of penis envy from an anatomical lack (Second Sex 287) and (implicitly) Sartre's bizarre claim that a woman's existence "in the form of a hole" is first grasped in the infant's "ontological presentiment" of sexuality (Being and Nothingness 782), because childhood must also be historically situated.
8,748.6
2012-05-26T00:00:00.000
[ "Philosophy" ]
Identification of alpha-spectrin domains susceptible to ubiquitination.

Previously, we demonstrated that α-spectrin is a substrate for the ubiquitin system and that this conjugation is a dynamic process (Corsi, D., Galluzzi, L., Crinelli, R., and Magnani, M. (1995) J. Biol. Chem. 270, 8928-8935). In this study, we mapped the sites of ubiquitination on erythrocyte α-spectrin. A peptide map of digested α-spectrin, previously submitted to in vitro 125I-ubiquitin conjugation, revealed the presence of four distinct labeled bands with Mr 40,000, 36,000, 29,000, and 25,500. Western blotting experiments using antibodies against each α-spectrin domain revealed that only IgG anti-αIII domain recognized the 125I-labeled ubiquitin peptide of 29 kDa, whereas the IgG anti-αV domain recognized the Mr 40,000 125I-ubiquitin-labeled peptide. The other two labeled bands of Mr 36,000 and Mr 25,500 were identified as tetra- and tri-multiubiquitin chains. Ubiquitination of the αIII and αV domains was further confirmed by anti-α-spectrin domain immunoaffinity chromatography. Endoprotease Lys C-digested spectrin conjugated previously to 125I-ubiquitin was incubated with antibodies against each trypsin-resistant domain of α-spectrin. Gamma counting of the radiolabeled antigen-antibody complexes purified by protein A chromatography showed labeling in the IgG anti-αIII and anti-αV complexes alone. Domain αIII is not associated with any known function, whereas domain αV contains the nucleation site for the association of the α and β chains. Ubiquitination of the latter domain suggests a role for ubiquitin in the modulation of the stability, deformability, and viscoelastic properties of the erythrocyte membrane.

Ubiquitin (Ub),1 a 76-amino acid protein, has been found both free and covalently bound to target proteins via an isopeptide linkage between the carboxyl group of the terminal glycine moiety of Ub and free ε-amino groups on the target protein (1). Rabbit reticulocyte fraction II (2) (protein adsorbed to DEAE 52-cellulose and eluted with 0.5 M KCl) contains the enzymatic system (E1, E2s, E3s) that is involved in ubiquitin conjugation. Usually, a protein can have one or more sites for ubiquitin, in which one ubiquitin or multiubiquitin chains can be linked. In the latter case, one Ub is linked to the lysine 48 of another ubiquitin bound to the substrate (3). Ub-conjugated proteins can either be degraded to small peptides by a large 26 S ATP-dependent protease complex, or the Ub moiety can be removed by Ub isopeptidases, releasing ubiquitin and the intact protein (4). Protein ubiquitination is a posttranslational process involved not only in protein degradation but also in other cellular functions. In fact, many studies have reported the in vivo presence of several stable Ub-protein conjugates that are not subject to degradation (5,6). Moreover, another linkage in which ubiquitin is linked to a previously bound Ub involves lysine 63 of Ub, and these chains serve nonproteolytic functions (7). In general, ubiquitin is involved in different cellular processes such as transcriptional regulation (8), cell cycle regulation (9), stress responses (10), and modulation of the immune response (11).
In particular, in the last decade Ub has been found to be bound to many specific substrates such as lysozyme (12), phytochrome (13), actin (14), histone (15), calmodulin (16), cAMP-dependent protein kinase (17), p53 (18), ABC-transporter Ste 6 protein (19), c-jun (20), transducin (21), T-cell antigen receptor (22), nuclear factor-κB1 precursor (23), inhibitor of κB-α (24), platelet-derived growth factor-β and -α receptors, and epidermal growth factor, colony-stimulating factor-I, and fibroblast growth factor (25), cystic fibrosis transmembrane conductance regulator (26), and estrogen receptors (27), as well as others. Recently, we demonstrated that red blood cell α-spectrin is also a specific substrate for ubiquitination (28). α- and β-spectrin are the major protein constituents of the red blood cell membrane skeleton and contribute to about 25% of total red cell membrane proteins. This membrane skeleton provides support for the overlying lipid bilayer and contributes to the viscoelastic properties and deformability of the membrane (29). The spectrin molecule is composed of two subunits with apparent molecular masses of 240 kDa (α-spectrin) and 220 kDa (β-spectrin), intertwined side-to-side to form a heterodimer. The two α and β subunits of spectrin are different not only in molecular mass but also in constitution. In fact, the α-spectrin chain consists of a series of 22 repeats of 106 residues, whereas the β chain consists of 17 repeats of 106 residues (30). The tryptic digestion of spectrin gave evidence of a linear disposition of trypsin-resistant domains (31). Five domains were defined on the α chain (αI to αV from the N terminus) and four on the β chain (βI to βIV from the C terminus). The heterodimer α-β assembly requires a specific nucleation site located in the αV and βIV domains (32). A self-association between the N-terminal region of the α chain and the C-terminal region of the β chain is involved in the formation of a tetramer, which is the predominant form in the erythrocyte (33). Moreover, spectrin forms noncovalent associations with other proteins of the cytoskeleton, such as band 2.1 (34), band 4.1, and actin (35). Other proteins such as adducin, tropomyosin, tropomodulin, and dematin function as accessory proteins of spectrin-actin junctions and are probably involved in the stabilization of spectrin-actin complexes (36). Furthermore, spectrin and protein 4.1 interact through phosphatidylserine with the inner leaflet of the lipid bilayer (37). In an attempt to gain insight into the potential biological role of α-spectrin ubiquitination, we searched for the site(s) of ubiquitination present on α-spectrin. The data reported in this study show that ubiquitination occurs on the αIII and αV domains of α-spectrin, suggesting that at least ubiquitination of the αV domain can play a role in cytoskeleton stability mediated by the α-β-spectrin nucleation site.

EXPERIMENTAL PROCEDURES

Materials: Ubiquitin, chloramine T, and many biochemical reagents were obtained from Sigma. Reticulocyte fractions were prepared as reported previously (28). Immobilized protein A was obtained from Pierce. The ECL Western blotting detection reagents, Hybond N nitrocellulose, and carrier-free Na125I were from Amersham Corp. Endoprotease Lys C was from Boehringer Mannheim.

Ubiquitin Labeling: Reductive methylation of ubiquitin was carried out as described by Hershko and Heller (38). Native ubiquitin and methylated ubiquitin (meUb) were radiolabeled with carrier-free Na125I (Amersham Corp.)
by the chloramine-T method (39). The specific activity obtained was 9400 cpm/pmol of ubiquitin for 125I-Ub and 9000 cpm/pmol of ubiquitin for 125I-meUb.

Electrophoresis and Western Blotting: SDS-polyacrylamide gel electrophoresis (PAGE) was carried out according to the method of Laemmli (40) as reported previously by Corsi et al. (28). The molecular mass standards used were 94, 66, 45, 31, 21, and 14 kDa (Pharmacia Biotech Inc.). Thirty-five μg of sample protein were loaded for each lane, unless otherwise indicated. The gels were electroblotted according to Towbin et al. (41) using Hybond N nitrocellulose. Blots involving 125I-Ub spectrin peptides were first dried and then exposed to obtain an autoradiographic film of the nitrocellulose. After membrane rehydration, the different lanes were cut and incubated with different polyclonal IgG against each α-spectrin domain (42). Goat anti-rabbit IgG horseradish peroxidase conjugate (Bio-Rad) was used at a 1:3000 dilution as a second antibody. Enhanced chemiluminescence (ECL; Amersham Corp.) was used as the detection system.

Assay of Ubiquitin Conjugation: Human red blood cell membranes were prepared from healthy volunteers according to Corsi et al. (28). The conjugation of 125I-Ub to red cell membrane proteins was assayed as described previously (28) using 5 μM 125I-Ub or 125I-meUb (final concentration) for each incubation mixture and 4-(2-aminoethyl)-benzenesulfonyl fluoride hydrochloride (AEBSF) as antiproteolytic agent. The conjugation of 125I-Ub to brain fodrin was assayed in a similar way.

Crude Spectrin Extraction: After 120 min of incubation at 37°C to permit 125I-Ub conjugation, the reaction mixture was centrifuged in an Eppendorf microcentrifuge at 16,000 × g for 15 min at 4°C. The supernatant was removed, and the pelleted membranes were washed twice with phosphate-buffered saline, pH 7.4, containing 1 mM PMSF. The membranes were then washed an additional two times with a low ionic strength buffer containing 0.3 mM Na2HPO4, pH 8.5, 0.1 mM EDTA, 1 mM PMSF, and 0.2 mM AEBSF (extraction buffer) (43). The pellet thus obtained was resuspended in a small volume of the extraction buffer (1 ml/5 mg of membrane protein) and incubated at 37°C for 30 min. The samples were then centrifuged at 200,000 × g in an SW 65 rotor for 30 min at 4°C, and the supernatant was collected. This fraction, primarily 125I-Ub-α-spectrin, α-spectrin and β-spectrin chains, actin, and traces of band 4.1, was referred to as "crude spectrin." Total protein was adjusted to 1 mg/ml with extraction buffer. Before protein determination, crude spectrin was stored on ice at 4°C overnight in 50 mM NaCl and 5 mM Na2HPO4, pH 8.0 (final concentration).

Isolation of Ubiquitinated α-Spectrin Peptides: Crude spectrin extracted as above was dialyzed overnight at 4°C using dialysis tubing with a molecular weight cut-off of Mr 3500 against 2000 volumes of 10 mM Tris-HCl, pH 8.5, and 1 mM EDTA to eliminate PMSF and AEBSF. Five μg of endoprotease Lys C were resuspended in 50 μl of H2O and added 1:10 (μl/μg of protein sample) to crude spectrin. The digestion was for 1.5 or 3 h at 28°C and was stopped by adding 0.2 mM AEBSF and sample buffer 1:1 (v/v), after which the sample was boiled for 5 min. Thirty-five μg of protein sample were used for each lane of SDS-PAGE, and the relative autoradiogram of the gel was obtained. Quantitative determinations of radioiodinated proteins were performed by direct counting of the excised bands using a Beckman 5500 gamma counter.
One hundred and seventy-five μg of spectrin digestion products (35 μg for each of five lanes) were used for Western blotting analysis. The nitrocellulose was used first to obtain the autoradiograms and subsequently was cut in five lanes and probed with IgG specific for each of the five different domains of α-spectrin.

Immunoprecipitation of 125I-Ub-α-Spectrin Peptides with Different IgG Anti-α-Spectrin Domains: 125I-Ub was conjugated to human red blood cell membranes for 2 h at 37°C in the presence of rabbit reticulocyte fraction II as described by Corsi et al. (28). Spectrin extraction and digestion were performed as described above. The digestion was stopped with 0.2 mM AEBSF, 1% (w/v) SDS, and boiling for 10 min. The sample was diluted 100-fold with 10 mM Tris-HCl, pH 7.5, divided into 5 aliquots, and then separately incubated with different antibodies against each domain of α-spectrin (αI to αV) previously buffered in 10 mM Tris-HCl, pH 7.5 (final concentration). A 5.5-fold excess of specific anti-α-spectrin-domain IgG (mol/mol) was used to form the antigen-antibody complex. The antigen-antibody mixtures were left for 2 h at room temperature and then overnight at 4°C with gentle agitation. Five ml of immobilized protein A were equilibrated with 10 volumes of 10 mM Tris-HCl, pH 7.5, then with 3 volumes of the same buffer plus 1 mg/ml bovine serum albumin, and finally with 10 volumes of 10 mM Tris-HCl, pH 7.5. The protein A suspension was divided into 1-ml aliquots, each of which was incubated overnight at 4°C with the different mixtures of antigen-antibody complexes formed as above. The 5 aliquots of protein A suspension with bound antigen-IgG complexes were packed in five different columns and washed with 10 volumes of 10 mM Tris-HCl, pH 7.5, until protein determination at 280 nm was zero. Five different eluates were obtained from the columns with 0.2 M glycine, pH 2, and then counted in a gamma counter.

Other Determinations: Protein concentrations were determined by the method of Bradford (45) using bovine serum albumin as standard or spectrophotometrically at 280 nm.

RESULTS

Isolation of Ubiquitinated α-Spectrin Peptides: α-Spectrin is one of the most abundant membrane proteins (12.5%) in the erythrocyte membrane (29). It is known that it can be digested with trypsin into five trypsin-resistant domains of different molecular weights known as αI, αII, αIII, αIV, and αV (31). In an attempt to identify the site(s) of ubiquitination on membrane α-spectrin, we first performed an in vitro ubiquitination assay using human erythrocyte membranes, fraction II as a source of ubiquitin-conjugating enzymes, and 125I-Ub. After two washing steps, crude spectrin was extracted as described under "Experimental Procedures." In a first approach, extracted crude spectrin was digested with trypsin at 1 mg/ml, 1:50 (μl/μg of crude spectrin). Unfortunately, trypsin digestion of 125I-ubiquitinated spectrin produces many bands with low labeling radioactivity. Furthermore, trypsin was also found to be able to digest 125I-Ub itself (46) and to cut polyubiquitin chains (data not shown). Thus, spectrin peptide patterns were produced using endoprotease Lys C instead of trypsin, and all of the data reported hereafter in this report were obtained with this proteolytic enzyme. The production of digested peptides was time-dependent. Endoprotease Lys C did not cut ubiquitin and produced four highly ubiquitinated bands of low molecular weight (Mr 40,000, 36,000, 29,000, and 25,500) as detected by autoradiography (Fig. 1B, lanes 1 and 2).
Quantitative determinations obtained by gamma counting of excised radiolabeled bands showed that the Mr 36,000 and 25,500 peptides had an associated radioactivity four and three times higher than the Mr 40,000 and 29,000 bands (Fig. 1C).

Identification of α-Spectrin Ubiquitin Binding Sites: To identify the site(s) of ubiquitination on erythrocyte α-spectrin, we performed the experiment represented in Fig. 2A. The conjugation of 125I-Ub to α-spectrin was obtained in a cell-free system using fraction II as a source of ubiquitin-conjugating enzymes, and spectrin extraction from the membrane was performed as described above. 125I-Ub-α-spectrin was submitted to endoprotease Lys C digestion. One fraction of the sample (35 μg) was analyzed by SDS-PAGE and autoradiographed, whereas another fraction of the sample was divided into five aliquots (each of 35 μg), processed for SDS-PAGE on five different lanes, and Western blotted. The nitrocellulose membrane was dried and autoradiographed (Fig. 2B, odd numbers). The five lanes were then cut and probed separately using five different antibodies against each domain of α-spectrin (Fig. 2B, even numbers). The autoradiograms of the nitrocellulose membranes and the films obtained by ECL were then overlapped to observe the relative positions of the antibody-recognized peptides compared to those of the radiolabeled peptides. The autoradiogram of the gel (Fig. 2B, lane B) shows four different radioiodinated bands with molecular weights of Mr 40,000, 36,000, 29,000, and 25,500, as shown previously in Fig. 1B, lanes 1 and 2. The radioiodinated peptide of Mr 40,000 (Fig. 2B, a) was specifically recognized by IgG anti-αV (Fig. 2B, lanes 9 and 10), whereas the second radioiodinated peptide of Mr 29,000 (Fig. 2B, c) was recognized by IgG anti-αIII (Fig. 2B, lanes 5 and 6). The two strongly radioiodinated peptides of Mr 36,000 and 25,500 present in the autoradiograms were not recognized by any IgG. According to the minimal stoichiometry, one ubiquitin molecule is bound to each of the two bands of Mr 40,000 and 29,000; therefore, the molecular weights of the unconjugated peptides recognized by IgG anti-αIII and anti-αV are most likely Mr 31,000 and 20,000, respectively (a short arithmetic check of this reasoning is sketched below).

Presence of Polyubiquitin Chains in α-Spectrin Ubiquitination: An experiment identical to that described above (Fig. 2A) was performed using 125I-meUb instead of 125I-Ub. This particular ubiquitin derivative, although a substrate for the ubiquitin-conjugating system, is unable to form multiubiquitin chains (38). After SDS-PAGE, the digested 125I-meUb-α-spectrin was processed for Western blotting analysis as in the experiment described above (Fig. 2A). In the autoradiogram (Fig. 3, lane B), the two bands of Mr 40,000 and 29,000 were present as in the experiment with 125I-Ub (Fig. 1B, lanes 1 and 2). An additional band of Mr 38,000 was also found. The IgG against the αV domain recognized the ubiquitinated peptide of Mr 40,000 (Fig. 3, a), whereas the IgG anti-αIII recognized the ubiquitinated peptides of Mr 29,000 (Fig. 3, c) and Mr 38,000 (Fig. 3, b), confirming the data obtained with 125I-ubiquitin. Moreover, the Mr 38,000 peptide plus the peptide of Mr 29,000 contained the same radioactivity found in the Mr 29,000 band of the previous experiment in which 125I-ubiquitin was used. Coomassie Blue staining of digested spectrin did not reveal any difference using either the 125I-Ub or 125I-meUb derivatives in the conjugation assay.
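The minimal-stoichiometry argument above is simple arithmetic, and a rough back-of-the-envelope check makes it concrete. The ubiquitin mass used below (roughly Mr 8,500 as an apparent mass on SDS-PAGE) is an assumption introduced here for illustration, not a figure stated in the paper, and gel-derived masses are only approximate.

```python
# Back-of-the-envelope check of the minimal-stoichiometry reasoning.
# Assumption (not from the paper): one ubiquitin adds roughly Mr 8,500
# to the apparent mass of a peptide on SDS-PAGE.
UB = 8_500

# Conjugate bands minus one ubiquitin -> estimated unmodified peptide mass.
for band in (40_000, 29_000):
    print(f"Mr {band:,} conjugate - 1 Ub -> unmodified peptide ~ Mr {band - UB:,}")
# ~31,500 and ~20,500, in line with the Mr 31,000 and 20,000 estimates above.

# Free multiubiquitin chains of n ubiquitins linked head-to-tail.
for n in (3, 4):
    print(f"{n} x Ub chain -> apparent Mr ~ {n * UB:,}")
# ~25,500 and ~34,000, in the range of the Mr 25,500 and 36,000 bands that are
# identified as tri- and tetra-multiubiquitin chains (and that are absent when
# methylated ubiquitin, which cannot form chains, is used).
```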
However, some differences in the Mr 40,000 range were evident in the films obtained by probing the membranes with IgG anti-αIII and anti-αIV (Fig. 2B, lanes 6-8; Fig. 3, lanes 6-8). These minor differences were not further investigated and could be due to different conformations of α-spectrin when Ub chains are bound, as well as to the relative proteolytic susceptibility of multiubiquitinated versus monoubiquitinated α-spectrin. Interestingly, when 125I-meUb was used, the two radioiodinated bands of Mr 36,000 and 25,500 were no longer present. Thus, it must be concluded that when using fraction II from rabbit reticulocytes, the cytoskeletal protein α-spectrin is multiubiquitinated, at least in vitro, and that the peptides of Mr 36,000 and 25,500 correspond to tetra- and tri-multiubiquitin chains.

Immunoprecipitation of Ubiquitinated α-Spectrin Peptides: To directly demonstrate that ubiquitin is bound to the αIII and αV domains of α-spectrin, we used a second approach, as described in the scheme of Fig. 4A. Endoprotease Lys C-digested spectrin was divided into five identical aliquots, each of which was incubated with IgG against each domain of α-spectrin. Five columns of immobilized protein A were used to retain IgG. Free ubiquitin and digested peptides not recognized by IgG were removed during the washing steps of the columns. The proteins retained and eluted from the protein A columns were collected and counted in a gamma counter. As shown in Fig. 4B, eluates from columns receiving IgG anti-αI and anti-αII domains showed a very low radioactivity, probably due to some nonspecific interaction between free 125I-Ub and the column. Eluate from the column with bound IgG anti-αIV domain showed a higher radioactivity than the latter eluates, whereas only the eluates from columns with bound IgG anti-αIII and anti-αV domains showed a strong radioactivity. This experiment provides direct proof that the ubiquitinated peptides of α-spectrin are specifically recognized by IgG anti-αIII domain and anti-αV domain of α-spectrin. Thus, it must be concluded that ubiquitin binding sites of α-spectrin are present in these two domains.

Ubiquitination of Non-Erythroid Spectrin-like Protein: Brain spectrin (fodrin) was used as substrate for an in vitro conjugation assay using 125I-Ub in the presence of rabbit reticulocyte fraction II. Conjugation was stopped at 120 min with sample buffer, and the sample was boiled for 5 min and electrophoresed in SDS-polyacrylamide gels, stained, dried, and autoradiographed. No radioactive bands were found at the expected molecular weight of fodrin (Mr 260,000 and 225,000), indicating that this protein is not ubiquitinated, at least in vitro.

FIG. 2. Identification of α-spectrin ubiquitin binding sites. A, scheme of the procedure used. Erythrocyte membranes (900 μg) were incubated with 125I-Ub (5 μM) in the presence of ATP (3.5 mM) and fraction II (800 μg) in a final volume of 3.1 ml of incubation mixture. The membranes were then centrifuged at 16,000 × g and washed twice with phosphate-buffered saline, pH 7.4, containing 1 mM PMSF and 0.2 mM AEBSF to eliminate fraction II proteins and unbound 125I-Ub. Crude spectrin was extracted with a low ionic strength buffer. Extracted crude spectrin was then dialyzed to eliminate antiproteolytic agents, and endoprotease Lys C was added. After 3 h of digestion, polypeptides of different molecular weights were obtained. A portion of the digestion products of crude spectrin was separated by SDS-PAGE and analyzed by autoradiography. Another part of the digestion products was divided into five aliquots and used for Western blotting analysis. An autoradiogram of the nitrocellulose membranes was obtained, after which each of the five nitrocellulose membrane lanes was probed with an antibody against each domain of α-spectrin. Goat anti-rabbit IgG horseradish peroxidase conjugate (Goat anti-rabbit HRP conjugate) was used as second antibody and ECL as the detection system. Overlapping of the autoradiograms and films obtained by ECL was used to determine which antibody was able to recognize the radiolabeled polypeptides. B, immunochemical identification of α-spectrin ubiquitin-binding sites. One fraction (35 μg) of 125I-ubiquitinated α-spectrin digest was processed for SDS-PAGE, stained with Coomassie Blue (lane A) and autoradiographed (lane B). The other fraction was divided into five aliquots (each of 35 μg), processed for SDS-PAGE, and electroblotted onto five different nitrocellulose membranes. Nitrocellulose membranes were dried and autoradiographed (odd numbers). Then each of them was probed with a specific antibody against each of the five α-spectrin domains (even numbers). Dilution of the first antibody was: IgG anti-αI domain, 1:7,500; IgG anti-αII domain, 1:10,000; IgG anti-αIII domain, 1:5,000; IgG anti-αIV domain, 1:40,000; and IgG anti-αV domain, 1:7,500. The autoradiograms and the films obtained by ECL were overlapped to identify the ubiquitinated peptides. a, radioiodinated peptide of Mr 40,000 specifically recognized by IgG anti-αV (lanes 9 and 10). c, radioiodinated peptide of Mr 29,000 recognized by IgG anti-αIII (lanes 5 and 6).

DISCUSSION

Spectrin is the principal component of the erythrocyte membrane skeleton and plays a dominant role in determining such mechanical properties of the erythrocyte as elasticity and deformability (47). Membrane equilibrium depends on the structural integrity of the skeletal proteins and on normal molecular interactions between the cytoskeletal proteins and membrane. Moreover, the binding of cytosolic components such as enzymes and hemoglobin to cytoskeletal proteins can play a role in membrane stability. Among the factors that may serve as regulators of cytoskeletal organization is protein phosphorylation-dephosphorylation (36). β-Spectrin has been reported to be a substrate for cytosol and membranous casein kinases. In particular, this chain contains a cluster of six phosphorylation sites. Phosphorylation of spectrin has been shown not to affect either dimer-dimer associations (48) or spectrin binding to ankyrin in vitro (49). However, phosphorylation affects spectrin inextractability from "inside-out" vesicles (50) and modulates the mechanical function and stability of the intact membrane structure (51). Other mediators, such as Ca2+ and calmodulin, can also regulate membrane stability (52). Interestingly, free calmodulin can be ubiquitinated in a Ca2+-dependent manner and subsequently degraded, a process which could act as a control mechanism for all free calmodulin in excess (53). Recently, we described and characterized a new posttranslational modification of erythrocyte α-spectrin in which ubiquitin binds covalently to the α-spectrin chain (28). In this report, using different approaches, we demonstrate the existence of two binding sites for ubiquitin on α-spectrin. Digestion with endoprotease Lys C of ubiquitinated spectrin revealed the presence of two 125I-Ub peptides of Mr 40,000 and 29,000.
As shown by Western blotting, these radiolabeled peptides were recognized by polyclonal IgG anti-␣V domain and anti-␣III domain, respectively, indicating that these two peptides are the sites of ubiquitination on red blood cell ␣-spectrin. It could be speculated that ␣-spectrin ubiquitination on domains III and V may play a role in membrane stability as found for other mediators of red blood cell membrane. The ␣III domain is not associated to any known function, whereas the ␣V domain contains the nucleation site for association with the ␤ chain (32) and is involved in Ca 2ϩ binding (54) and, with the ␤IV domain, participates in the interaction with actin and protein 4.1 (55). Interestingly, the ␣V domain is involved in the ubiquitination of ␣-spectrin and contains repeats 20 through 22, which exhibit atypical features. In fact, there is insertion of several amino acids into the repeats 20 and 21. Moreover, repeat 22 has a reduced homology to a typical spectrin repeat. Moreover, the nucleation site present in the ␣V domain is not only responsible for the initial ␣-␤ spectrin binding but also controls the side-to-side register of the many homologous repeats in both subunits. An unusual feature of the nucleation regions is that three of the repeats (two in the ␣ and one in the ␤ subunits) have an eight-residue insertion in the normal 106residue repeat unit (32). These eight-residue insertions, which contain a lysine residue, might confer unique conformational properties upon the nucleation site, and the ubiquitination of domain ␣V might play a role in this context. The erythroblastto-erythrocyte maturation process is accompanied by changes in the composition and properties of the plasma membrane. Furthermore, mature erythrocytes are incapable of ubiquitin/26 S proteosome-dependent degradation (56). Thus, ubiquitination of ␣-spectrin could play two different roles during erythrocyte maturation. In erythroblasts, the amount of ␣-spectrin synthesized exceeds by more than 3-fold the amount assembled on the membrane, and the excess unassembled peptides are rapidly degraded (57). It could be speculated that the binding of ubiquitin to ␣-spectrin in erythroblasts involves subsequent degradation. It is important to note that the sequence of ␣-spectrin contains a glutamic residue in the first position that may act as a secondary destabilizing residue according to the N-end rule (58) when spectrin is in the unassembled form. The second determinant, a specific internal lysine residue, could be the first lysine located at position 5 or 15 (domain I) of the ␣-spectrin sequence. Excess hemoglobin subunits are subject to an analogous targeted degradation in thalassemia (59). Because, as shown in this report, ubiquitination occurs in mature red blood cells on the ␣III and ␣V domains, and thus quite far from the ␣I domain (moreover, these cells are incapable of ubiquitin/26 S proteosome-dependent degradation), the ubiquitination process of assembled ␣-spectrin probably has a different role in these cells than in erythroblasts. Ubiquitin itself and/or multiubiquitin chains could have a potential function as conformation-perturbing devices when conjugated to cytoskeletal proteins, given their orientational flexibility and reversibility (60). Thus, we suggest that the ubiquitination of ␣-spectrin in mature erythrocytes should be considered a new posttranslational event with a regulatory role in spectrin function rather than a signal for ␣-spectrin degradation. 
In fact, ubiquitination is a dynamic process (28), the linkage is covalent, and a significant amount (3% of the total ␣-spectrin chain) is continuously ubiquitinated in erythrocytes. As reported previously for globin-spectrin complex formation during erythrocyte senescence (61), such an amount could account for a significant change in membrane deformability. We also investigated whether other proteins belonging FIG. 4. Immunoprecipitation of 125 I-Ub-␣-spectrin peptides. A, scheme of the procedure used to immunoprecipitate the 125 I-ubiquitinated ␣-spectrin digest. Conjugation of 125 I-Ub and human erythrocyte membrane spectrin extraction and digestion were performed as described under "Experimental Procedures." The digestion was stopped with 0.2 mM AEBSF, 1% (w/v) SDS and boiled for 10 min. The sample was then diluted 100-fold to lower the SDS concentration to 0.01%, divided into five aliquots of 35 g each, and incubated with IgG against each domain of ␣-spectrin. A 5-fold excess (mol/mol) of specific IgG with respect to each trypsin-resistant domain was used. Each sample was incubated overnight with protein A, before loading antigen-IgG-protein A complexes into five different columns and washed with 10 mM Tris-HCl, pH 7.5. Five different eluates were obtained using 0.2 M glycine, pH 2.0, and counted in a gamma counter. B, the histogram shows the result obtained by gamma counting of eluates from the five columns of immobilized protein A. The x axis indicates which domain was immunoprecipitated using specific polyclonal IgG. The values are the mean of three experiments; bars, S.D. to the spectrin superfamily are ubiquitinated. Brain ␣-fodrin was not found to be a substrate for ubiquitin in an in vitro assay. This non-erythroid spectrin and erythroid ␣-spectrin have very similar sequences (54% identity) throughout their entire length (62), but interestingly, fodrin at its C-terminal has an atypical sequence of 150 residues in repeat 22 (␣V domain), and the identity of the 37 residues at the very Cterminal is less than 10%. Moreover, fodrin differs considerably from erythrocyte ␣-spectrin in repeats 11 and 12 (␣III domain) and possesses a calmodulin-binding site in the latter repeat that is absent in ␣-spectrin (63). Thus, erythrocyte ␣-spectrin and brain fodrin differ mainly in the domains found to be susceptible to ubiquitination. It would be interesting to examine whether ␣-actinin, another protein of the spectrin superfamily, is ubiquitinated. In particular, repeats 20 through 22, together with the nonrepeat C terminus of ␣-spectrin, are highly homologous with the C terminus of ␣-actinin (64). Preliminary studies now in progress in our laboratory indicate that ␣-actinin is ubiquitinated, at least in vivo. To date, no information is available on the function of the ␣III domain of ␣-spectrin, but because the carboxyl terminus of the ␣-spectrin subunits (␣V domain) is involved in the binding of the ␤-spectrin chain to form the ␣-␤ dimer and is an important site for many mediators in red blood cells, the ubiquitination of this cytoskeletal protein may be of physiological significance.
Pollution prevention via recovery of cerium (IV) oxide in optics company Pollution prevention methods were applied at an optics manufacturer in an effort to improve recovery of a valuable polishing component, cerium oxide (ceria), 77% of which was lost to dragout and sewer discharge. Centrifugation and microfiltratiion were evaluated to develop a process that would increase recovery of used ceria, which would then be sent back to the ceria supplier for reclamation and reuse. Full-scale implementation included a high-speed centrifuge that operates continuously with a microfiltration system through recirculation in a single process tank. Sydor Optics has improved ceria recovery from 23% to 48%, saving thousands of dollars annually. Introduction Founded in 1964, Stefan Sydor Optics (Sydor) is a flat optics manufacturer located in Rochester, NY. Optical lenses are fabricated using a variety of techniques, some of which involve the use of a rare earth compound, cerium (IV) oxide (ceria). Cerium oxide powders are used to polish a variety of glass surfaces such as those found on lenses, laser optics, light filters, mirrors, hard disk drives, photomasks, cell phone displays, and semiconductors. It is important to the military and defense industries in the development of new products. The products are used on windows for air, land, and sea vehicles, as well as night vison scopes and eye wear. Other industries where cerium oxide is used are life sciences, entertainment, high-powered laser systems, and telecommunications. Per marketing studies conducted by a ceria reclamation facility, Flint Creek Resources, the annual worldwide consumption of cerium oxide powders in 2011 was estimated to be 9,440 metric tons. As ceria slurry is recirculated in the polishing machines at Sydor, contaminants build in concentration, eventually requiring the removal of the slurry from the machines. Due to the high cost of ceria, spent slurry solution is sent to Flint Creek Resources, where ceria is recovered for reuse at a lower purchase cost for Sydor via a patented process [1]. The particle size distribution as analyzed by Flint Creek of new ceria and reclaimed ceria are shown in Fig. 1. The tightly controlled reclamation process provides for a more uniform particle solids make-up around 1 μm. After the ceria process solution is dumped, the polishing machines are washed with water to remove residual slurry. Until 2017, the wastewater containing spent slurry from the machine washdowns was collected and filtered to remove large contaminants, centrifuged in a small, single-pass, basket centrifuge operating at 750 G's and 2.0 L/min, and the remaining liquid discharged to sewer (Fig. 2). Sydor realized that not all of the ceria was being recovered and a significant amount was being lost, worth approximately $15,000-$20,000/yr, also contributing to the level of total suspended solids (TSS) in the wastewater (~1,000 mg/l for the entire facility). While the sewer discharge was in compliance with local authorities, the company acknowledged that reducing solids loading to the sewer would not only benefit the environment but also avoid potential future surcharges. In 2015, the New York State Pollution Prevention Institute (NYSP2I) began working with Sydor to focus on the recovery of the spent ceria material and reduction of solids loading into the sewer system. 
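The headline loss and recovery fractions quoted above follow directly from the daily usage figures reported in the baseline-determination section that follows. A minimal arithmetic check (illustrative only, not part of the original study):

```python
# Daily figures from the baseline determination (pre-2017 operation)
ceria_used_kg_per_day = 31.0      # new + reclaimed ceria fed to the polishing machines
ceria_lost_kg_per_day = 23.8      # lost to sewer discharge, dragout and spills

lost_fraction = ceria_lost_kg_per_day / ceria_used_kg_per_day
print(f"Lost: {lost_fraction:.0%}, recoverable for reclamation: {1 - lost_fraction:.0%}")
# -> Lost: 77%, recoverable for reclamation: 23%
```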
This paper describes the work performed to evaluate technologies, design, and implement a system to recover more ceria and reduce TSS discharge from the ceria washwater operation. Materials and methods For the process flows described in Fig. 2, mass balances were performed to understand water use, wastewater generated, ceria use, and ceria losses. Data was collected using water meters and company records of ceria use which involved taking monthly data and dividing by 20 working days to obtain daily averages. Process ceria consisted of two types: new ceria and reclaimed ceria. All spent ceria that could be collected was sent to Flint Creek Resources for reclamation. Ceria losses could be attributed to unrecovered solids in the centrifuge discharge and operational losses such as spills and dragout. After implementation, the same mass balance analysis was performed to determine the extent of improvement. Total solids in sludge and water samples were measured by taking the average of three separate aliquots for each sample. Particle size analysis of the waste washwater was performed using Zetasizer light scattering technology. To observe the effect of G-force on separation efficiency, a variable speed lab centrifuge was used on samples taken from the holding tank. To determine membrane flux performance, batch microfiltration tests were conducted using a lab-scale system assembled with a single 12.7 Based on data collected in the lab, a production-scale system was designed that included use of a high-speed centrifuge and multi-module microfiltration system. After implementation of the larger system, the system was monitored for membrane flux performance. Final mass balance calculations were performed to quantify improvements in ceria recovery. Baseline determination In order to determine how much ceria was lost prior to 2017, a mass balance was performed only on water and ceria for the operation. Company records and sample analyses were used to generate values. Based on the total amount of new and reclaimed ceria used (31 kg/day), calculations indicate that 23.8 kg/day of ceria was lost to sewer discharge, dragout, and spills. 77% of process ceria was lost, leaving 23% of total ceria used recoverable for reclamation with the old system. In order to help determine potential separation technologies, particle size analysis was performed on grab samples from the holding tank shown in Fig. 2, and the average size distribution can be seen in Fig. 3. Most solid particles were observed to be in the 1 μm range, with no particles smaller than 0.1 μm detected. The majority of the sub-micron particles can be attributed to glass fines from the polished lenses [2]. Evaluation of improvement opportunities through pilot-testing While Sydor was able to recover 23% of the ceria used, opportunities to increase recovery rates were tenable. Chemical reclamation of ceria through acid leaching or flocculation, while proven to be capable of achieving up to 94% recovery of ceria [3,4], does not align with pollution prevention goals to avoid use of toxic/hazardous process chemicals. In order to ensure as much recovery of ceria as possible while minimizing environmental burden, mechanical separation technologies were prioritized. All of Sydor's abrasives were ceria formed at high temperature. During normal operations, the ceria is added to water and forms cerium (IV) hydroxide which is insoluble in the abrasive solutions as operations fall well within a pH range of 3-12 [5]. 
Process temperatures stay well below 60 C which keeps the insolubility of the cerium (IV) hydroxide relatively stable as well. As particulate was not expected to be dissolved in solution, gravimetric and filtration technologies were investigated for potential applicability. Centrifugation Qualitative observations were made from bench-top tests and the results indicated that higher G forces produced less opaque supernatant, with a threshold above 1000 G's. Hence, the conclusion was made that a centrifuge operating under higher G-forces should separate and recover more ceria. Membrane filtration Membranes are a form of mechanical filtration that can separate constituents ranging from micron size microfiltration down to molecular level reverse osmosis [6]. Based on the information in Fig. 3, pilot studies on the mixing tank solution were conducted using a single 12.7 mm diameter, 0.05 μm microfiltration tube produced by Porex (TMF 1.05, PVDF [polyvinylidene difluoride] membrane/substrate, 0.07 m 2 active area). Tubular membranes are most suitable when filtering solutions with higher levels of suspended solids to minimize costly prefiltration requirements [6]. In order to maintain sufficient turbulence through the membrane tube and maximize the mass transfer coefficient, a minimum fluid velocity of 4 m/sec would be needed to help avoid fouling and maintain consistent flux [7]. Fluid velocity has been demonstrated to be a critical parameter in many applications, such as oil/water separation [8]. The testing design included varying feed flow rates and transmembrane pressure. The results can be seen in Fig. 4 where flux is provided in liter/m 2 /hr (LMH). The information obtained through the pilot studies was used to design the production-scale system (design flux of 250 LMH). Membrane cleaning was performed using 1% by volume hydrogen peroxide in water. Hydrogen peroxide was selected as the final chemistry after several trial cleaning cycles. The mechanism by which the membrane is cleaned is potentially related to the acidic environment that the hydrogen peroxide solution generates. In low pH solutions, cerium (IV) hydroxide has the potential to be reduced to ionic cerium (III) and water [9]. In addition to being the result of a potential chemical reaction, Ce 3þ can also dissolve in low pH solutions [5]. Therefore, the hydrogen peroxide may act as an effective chemical cleaner through redox reactions and dissolution of any cerium fouling the membranes. Production system design A production-scale process was designed that would improve recovery of ceria and reduce TSS loadings to the sewer. A high speed Microseparator centrifuge rated at 2100 Gs was procured. Eight, 25.4 mm diameter Porex modules (TMF 1.1) were used for the membrane system which has an effective area of 1.13 m 2 . The process schematic including centrifugation, microfiltration, and controls can be seen in Fig. 5. A single 1,500 L tank was used for both the centrifuge and microfiltration process to simplify the overall operation. Due to the tendency of cerium oxide particles to agglomerate and harden [10], the system had to be designed to prevent maintenance problems and equipment failure. As a result, both the centrifuge and microfiltration systems constantly recirculate fluid through the process tank while the mixer is also left on continuously in the process tank. Permeate from the system is used to flush the system every hour and each time the system is shut down. 3.4. 
Implementation The entire system including process tank was installed in 2017. Results from grab samples indicate that the steady-state solids content in the holding tank range from 0.2% to 0.5%. The high speed centrifuge is removing solids effectively and allowing the membrane system to operate under ideal conditions. Analysis indicated that average solids content of the new, higher G centrifuge sludge is 77.7% compared to 71.8% with the old centrifuge, confirming increased solids removal efficiency. Periodic cleaning of the membranes is required to maintain acceptable flux. A 1% by volume hydrogen peroxide solution is used to clean the system on a weekly basis. Between cleanings, permeate flux typically starts at 400 LMH and decreases to 140 LMH. The average flow equates to a flux which is close to the design flux of 250 LMH. The original set of membranes were still working after 2 years of use. In total, the system including the centrifuge, process tank, and all ancillary items cost~$30,000. Approximate savings based on increased ceria recovery rates are nominal and explained in more detail later. Metrics The post-implementation mass balance for ceria and water can be seen in Fig. 6. When compared to pre-implementation a higher percent of ceria is now being reclaimed. It should be noted that Sydor modified polishing operations to use less ceria plant-wide during the implementation phase. Prior to the installation of the upgraded recovery system, Sydor was recovering 23% of their ceria. After implementation, approximately 18 kg/day of dry ceria powder was used leading to an updated percent recovery of 48%. More than double the amount of ceria was recovered for reclamation due to the efforts of the project. Recovery of ceria should be much higher than observed since all washwater should be contained within the process tank and only clean water discharged to sewer. Two reasons for the lower than expected percent recovery include unavoidable operational losses like dragout and spills and less than ideal equipment cleaning procedures which result in bypass of the recovery system. Sydor is aware of the need to improve cleaning protocol and as part of a continuous improvement program, the company is adopting more stringent practices that include better housekeeping and fluid handling. While more ceria is being recovered, the TSS levels of total facility water entering the sewer still remains largely unchanged due to the aforementioned housekeeping issues and presence of other non-ceria polishing operations in the facility that are discharging solids-laden water. Sydor plans to evaluate wastewater management approaches for the other operations, also as part of its continuous improvement program. Based on current recovery rates, the company is saving approximately $8,000/year. If maximum recovery is achieved (up to 95%), approximately $15,000/year in savings would be realized. Capital investment for this project was approximately $30,000 while supplemental engineering design and implementation assistance was provided by NYSP2I. For another company that is seeking to implement the same equipment, capital costs would be approximately $60,000. From a return on investment perspective using Net Present Value calculations with a 2% inflation rate, a company would expect to recover the $60,000 investment costs after 4 years. Additional benefits of reducing solids loading to sewer would also be realized and are not accounted for in this economic analysis. 
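The return-on-investment figure quoted above can be reproduced with a short sketch. This is one possible reading of the calculation, not the authors' own spreadsheet: the annual saving (taken as the ~$15,000/yr upper estimate) is assumed to escalate with the 2% inflation rate, and the payback year is the first year in which cumulative savings exceed the ~$60,000 capital cost.

```python
def payback_years(capital, first_year_saving, inflation=0.02, max_years=20):
    """First year in which cumulative savings exceed the capital cost, letting the
    annual saving escalate with inflation. One possible reading of the NPV-style
    estimate quoted in the text; the paper does not spell out its exact method."""
    cumulative, saving = 0.0, first_year_saving
    for year in range(1, max_years + 1):
        saving *= 1.0 + inflation
        cumulative += saving
        if cumulative >= capital:
            return year
    return None

# ~$60,000 capital cost against the upper savings estimate of ~$15,000/yr quoted above
print(payback_years(capital=60_000, first_year_saving=15_000))   # -> 4
```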
Conclusion As part of a business technical assistance project sponsored by the NY State Pollution Prevention Institute and in collaboration with Stefan Sydor Optics and Flint Creek Resources, an innovative process to increase recovery of valuable cerium oxide was developed and implemented at an optics manufacturer. Through pilot testing of centrifugation and microfiltration, a sustainable approach was validated that will allow optics manufacturers to recover more ceria and discharge cleaner water.
Quench dynamics in the Jaynes-Cummings-Hubbard and Dicke models Both the Jaynes-Cummings-Hubbard (JCH) and Dicke models can be thought of as idealised models of a quantum battery. In this paper we numerically investigate the charging properties of both of these models. The two models differ in how the two-level systems are contained in cavities. In the Dicke model, the N two-level systems are contained in a single cavity, while in the JCH model the two-level systems each have their own cavity and are able to pass photons between them. In each of these models we consider a scenario where the two-level systems start in the ground state and the coupling parameter between the photon and the two-level systems is quenched. Each of these models display a maximum charging power that scales with the size of the battery N and no super charging was found. Charging power also scales with the square root of the average number of photons per two-level system m for both models. Finally, in the JCH model, the power was found to charge inversely with the photon-cavity coupling κ. I. INTRODUCTION Energy storage capabilities and efficiency by electrochemical batteries have rapidly improved in recent times, pushed by the need to robustly deal with the ever increasing energy demands of daily life.As we advance technologically in the search for faster charging batteries, recently the idea of a quantum battery has become a more heavily researched topic .The goal underpinning the exploration of a battery made of single quantum bits each with a single excited state is to use quantum phenomena to engineer a greatly improved energy storage device.Some limiting factors for classical electrochemical batteries are their thermodynamic energy loss due to heat and their increasing charging times for scaled up batteries [25][26][27][28].Investigating ways that a quantum battery can deal with these issues has lead to the desire to understand how quantum states might be utilised to produce a battery with minimal energy loss and how the system can be built to minimise its charging time [29][30][31][32][33][34][35][36][37][38][39][40]. Previous theoretical work [29] found that quantum batteries can display a super-charging characteristic.They found that as the number of two-level systems (N ) in the battery increased, the speed with which the battery charged increased at a rate of N √ N .This result has ignited significant interest in quantum batteries and inspired us to explore quantum batteries in the context of the Dicke model [41] and the Jaynes-Cummings-Hubbard (JCH) model [42]. Functionally, a quantum battery can be thought of as idealised two-level system inside a cavity whose mode is able to excite the two-level system.For such a system the battery can be thought of as being charged (uncharged) when the two-level system is in the excited (ground) state.Figure 1 schematically describes the two systems we will consider in this work.Specifically the JCH model, Fig. 1(a) and the Dicke model, Fig. 
1(b), under the charging protocol shown in Fig. 1(c). In each model we consider a scenario where we have N elements in the quantum battery. The system is initialised such that the two-level systems are in the ground state. At t = 0 the coupling between the two-level systems and the photons is quenched from 0 to β. We will first consider the charging in the JCH model in sections II & III. For the JCH system we find that the maximum charging power, P max, is proportional to the number of cavities in the JCH system. Additionally, we find that the maximum charging power is proportional to the square root of the number of photons initially in each cavity, and inversely proportional to the photon coupling between individual cavities. The result that the maximum charging power is proportional to the number of two-level systems in the JCH model then prompts us to revisit, in sections IV & V, results for the Dicke model, where we construct the Dicke Hamiltonian to ensure that the thermodynamic limit is bounded. For such a regime we regain a scaling for P max proportional to the number of two-level systems in the Dicke cavity.

The JCH model can be thought of as representing an atom with a single excited state in the presence of n photons inside a cavity. The two-level atomic system is coupled to the photons in the cavity via β, and the photons with frequency ω c are coupled between the N identical cavities via κ. Specifically, the JCH Hamiltonian [43] is (ℏ = 1)

H_{\text{JCH}} = \sum_{i=1}^{N} \left[\omega_c\, a_i^{\dagger} a_i + \omega_a\, \sigma_i^{+} \sigma_i^{-} + \beta \left(a_i^{\dagger} \sigma_i^{-} + a_i \sigma_i^{+}\right)\right] - \kappa \sum_{\langle i,j \rangle} \left(a_i^{\dagger} a_j + a_j^{\dagger} a_i\right),

where ω a is the energy of separation between the energy levels of the TLS, a† and a are the photonic raising and lowering operators, and σ+ and σ− are the spin raising and lowering operators.
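As a minimal numerical illustration of this quench protocol (not the authors' code), the sketch below builds the resonant single-cavity Jaynes-Cummings Hamiltonian, i.e. the uncoupled κ = 0 limit discussed below, evolves the initial state |m photons, g⟩ after the quench, and extracts the maximum charging power. It assumes only NumPy; the Fock-space truncation and time grid are illustrative choices.

```python
import numpy as np

# Illustrative parameters in natural units (hbar = 1), matching the regime quoted in the text
omega_c, omega_a = 1.0, 1.0    # resonant case, Delta = omega_a - omega_c = 0
beta = 0.05                    # atom-photon coupling after the quench
m = 1                          # photons initially in the cavity
n_max = 10                     # Fock-space truncation (illustrative choice)

# Operators on the photon (dimension n_max + 1) and atom (dimension 2) spaces
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # photon annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])              # sigma^- with |g> = index 0, |e> = index 1
If, Ia = np.eye(n_max + 1), np.eye(2)

# Single-cavity Jaynes-Cummings Hamiltonian with the quenched coupling (the kappa = 0 limit)
H = (omega_c * np.kron(a.T @ a, Ia)
     + omega_a * np.kron(If, sm.T @ sm)
     + beta * (np.kron(a.T, sm) + np.kron(a, sm.T)))

# Initial state |m photons, g>; stored energy E(t) = omega_a <sigma^+ sigma^-> (zero at t = 0)
psi0 = np.zeros((n_max + 1) * 2)
psi0[2 * m] = 1.0
E_op = omega_a * np.kron(If, sm.T @ sm)

# Diagonalise once, then evolve under the quenched Hamiltonian
evals, evecs = np.linalg.eigh(H)
c0 = evecs.conj().T @ psi0
times = np.linspace(0.0, 80.0, 4000)
energy = []
for t in times:
    psi_t = evecs @ (np.exp(-1j * evals * t) * c0)
    energy.append(np.real(np.vdot(psi_t, E_op @ psi_t)))
energy = np.array(energy)

# Maximum charging power P_max = max_t dE/dt (expected ~ omega_a * beta * sqrt(m) on resonance)
P_max = np.max(np.gradient(energy, times))
print(f"P_max ≈ {P_max:.4f}")
```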
Diagonalising the JCH Hamiltonian allows the Time Dependent Schrodinger Equation (TISE) to be solved and the dynamics analysed.Starting with the system in the lowest energy eigenstate, the atom-photon coupling is quenched from β = 0 to β > 0 at time t = 0.In doing so, the two-level systems are taken from a parameter space where they cannot charge, and instantaneously quenched to one where they are able to begin charging.In order to quantify the charging rate we define that the energy of the system is the difference between the energy of the time varying energy and that of the initial state, where the energy operator for the atomic spin is With the time varying energy we find the maximum charging power of the battery by taking the maximum rate of change of the energy with respect to time, which has a charging time to reach P max of τ .This definition of power has been used to make a direct comparison with with existing literature [29].Alternatively, the time to charge the battery to its maximum energy was explored, with both methods returning results with the same scaling factors.The two ways to analyse the power of the quantum battery are to consider how long it takes to fully charge the battery, which has a strong analogous relationship between classical batteries, or to consider the best possible charging power and consider how that scales.In the rest of this paper we will use the later definition, as in equation ( 4). The limit κ = 0 represents the case where individual elements of the cavities are not coupled to each other, and there is no photon transfer between them.We will use this as the baseline by which we analyse how different parameters may change the charging rate of the battery, with express interest in whether increasing the size of the battery improves the charging power.With a system of isolated (κ = 0) JCH two-level systems, the behaviour reduces to that of individual Rabi two-level systems with Hamiltonian, where ∆ = ω a − ω c and the average number of photons per two-level system is m.In this regime the JCH model can be solved analytically and has its first maximum energy at time where the Rabi frequency is It can be seen from equation ( 6) that when the energy separation between the two energy levels and the photon mode energy is zero (∆ = 0), the charging time will scale with the number of photons according to τ ∝ 1/ √ m, and E scales proportional to N .It follows that P max ∝ N and P max ∝ √ m.It is therefore of interest to explore how this relationship changes when the two-level systems are able to interact.Allowing the cavities in the quantum battery to interact via photon coupling (κ > 0) makes it possible to analyse how κ, N and m effect it's charging power. III. JCH RESULTS In this paper we present results in natural units where = 1, and for a resonant regime where the dimensionless photon mode energy and the dimensionless atomic energy separation are both 1, and hence ∆ = 0.In Fig. 
2, the effect of increasing battery size is shown for different values of the photon mode coupling parameter κ.When κ = 0, the JCH model has an analytical solution.The present simulation results overlap exactly with the analytical results obtained from the Rabi matrix of equation (6).This serves as the starting point for the comparison of the power for larger JCH systems.It can be seen in this figure, that the charging power of the quantum battery for any value of κ never exceeds that of the completely uncoupled κ = 0 case.The quantum battery has the largest maximal charging power when it acts as if it was N independent single atom batteries.With the initial state taken as the lowest energy eigenstate, increasing κ moves the state to higher energy eigenstates faster.This appears to decrease the charging power of the quantum battery.The maximum charging power of FIG. 2. Charging power of the JCH quantum battery.Power is scaled with a factor of 1/N and plotted as a function N .Each line corresponds to a decreasing value of κ, with the uncoupled cavities having the largest values for the scaled power.Here β = 0.05 so that κ varies from below its energy scale to larger by an order of magnitude.The average number of photons per cavity is set to m = 1.The initial state is set as the state in which each cavity has m photons and is in the ground state. the JCH quantum battery was scaled by a factor of 1/N and for each value of κ, the data tends towards a constant.This strongly implies that P max ∝ N , the result obtained for uncoupled (κ = 0) JCH two-level systems.There is a notable difference between the P max of odd and even numbers of cavities, as κ increases to values larger than β.The data points show this alternating behaviour for κ = 0.5, but it can be seen that this has no effect on the large N behaviour of the quantum battery. Delving into the JCH quantum battery further, looking for other methods of improving their power scaling, Fig. 3 highlights the effect that increasing the average number of photons per cavity has on P max , scaled by 1 √ m .In Fig. 3 (upper), there are 2 cavities with a varying number of photons m, at t = 0, and it can be seen that the scaled power tends towards a constant as m increases, strongly implying that the power scales as This relationship can also be seen in with 4 cavities (Fig. 3 (lower)).The same relationship is observed for up to 6 cavities, with the limiting factor being that as the number of cavities increases the size of the Hilbert space quickly makes the diagonalisation of the Hamiltonian computationally intensive.As a result, it can be seen for N = 2 there are solutions for relatively large m, while m has to be limited for N > 2. Another factor in the power scaling of the JCH quantum battery is the strength of the photon coupling between adjacent cavities, κ. 
Figure 4 demonstrates that κ has an inverse scaling relationship with the maximum power, by showing that the power, scaled by κ becomes a constant for as κ increases, highlighting that, Most interesting about this finding, is that the closest relationship to be drawn between the JCH and Dicke models are for high values of κ.This is due to the Dicke model considering each two-level system as indistinguishable from one another and are all sharing the same photons inside the one cavity.One would expect that for values of κ where it is the dominant effect in the system, where β = 0.05 and κ > 0.5 the type of super scaling that was seen in [29] would start to show an effect.In this paper we consider photon coupling between nearest neighbours in a line configuration.As an aside, the hopping of photons between any other cavity (hyper hopping) in the system is also explored for mathematical interest as well as closer comparison to the Dicke model, where each of the two-level systems are indistinguishable in location.It was found that in both nearest neighbour hopping and hyper-hopping systems, the scaling factor for the charging power as a function of κ, N and m were consistent. It is clear that there is a disparity between the present results for the JCH quantum battery and that what was FIG. 5.The power scaling of a Dicke quantum battery for an increasing battery size.The number of two-level systems ranges from 2 to 20, while the photon-atom coupling parameter β varies from low coupling to where it dominates the dynamics of the system (0 to 2).Asymptotic behaviour is seen for the scaled power towards a constant value for increasing N . V. RESULTS: DICKE MODEL Initially importance was placed on the model being able to replicate the previous results of [29], which was possible by using the Hamiltonian referenced in their paper, without the factor of 1/ √ N .However, when using the Dicke Hamiltonian, of equation (10), with the 1/ √ N term in the photon to two-level system coupling terms, the super-charging is not present.This strong agreement between the JCH and the Dicke models, confirms that while the charging power of quantum batteries does increase as the size of the battery increases, it does so by a factor of N , i.e.P max ∝ N, as demonstrated in Fig. 5. Here, β takes on the same values (β = 0, 0.05, 0.5, 2).While the starting number of photons in the system is taken to be N , the Dicke model allows for behaviour that does not conserve the particle number.As a result, to compute the Dicke model limitations on the maximum number of considered photons needs to be placed.In this work we considered systems of a range of photons from 1 to 5N for each data point.5N was taken to be the maximum because good convergence was already found for 4N . Exploring the effect that the average number of photons per two-level system has on P max , Fig. 
6 displays that P max/√m converges to a constant value as m is increased. This is the same result found in section III for the JCH battery. The strong agreement between the Dicke and JCH models for a quantum battery is interesting because of the different ways each model allows the two-level systems to interact with the photon fields. In the Dicke model all of the two-level systems are able to interact with all of the photons at all times, because they exist within the same cavity, while in the JCH model the two-level systems can only interact with the photons in their own cavity. This result implies that for large enough m the power scales in the same way whether or not the two-level systems are localised, without ever considering the strength of the entanglement of states. As m gets very large we enter a regime currently accessible to experiments, e.g. [1], and for m = 200 we found the same scaling relation, implying that the charging power of the Dicke quantum battery is limited only by the number of photons input into the cavity.

VI. CONCLUSIONS

In this paper we have considered the charging quench dynamics of both the JCH and Dicke models. For the JCH model we have found that P max scales linearly with N, i.e. there is no quantum advantage in such a system. More generally, we also find that as the coupling κ between the cavities in the JCH model is increased, P max is reduced. However, there is an increase in P max when the number of photons m in each cavity at t = 0 is increased; in this case P max ∝ √m. This investigation into the JCH system led us to revisit the charging quench dynamics of the Dicke model. Starting from a form of the Dicke Hamiltonian which ensures a consistent thermodynamic limit, we found again that P max scales linearly with the number of two-level systems in the Dicke cavity. Additionally, we recovered a scaling of P max ∝ √m, where m is the number of photons per two-level system at t = 0 in the Dicke cavity.
FIG. 1. (a) Schematic for the JCH model. N identical two-level systems each occupying their own cavity, with photons coupling between cavities with strength κ. (b) Schematic for the Dicke model. Two-level systems as above, except that they are all in the one cavity. (c) Representation of the charging sequence of the quantum battery. Initially the photon coupling to the two-level system β is zero; then it is quenched to a value β > 0, where charging begins.
FIG. 3. Number of photons per two-level system at t = 0 vs the scaled power of the quantum battery for 2 (a) and 4 (b) two-level systems. The maximum power is scaled by a factor of 1/√m to highlight that it tends towards a constant value as m increases, implying that the power scales with √m. Larger values of κ show decreased power and appear to require larger values of m to approach a constant value.
FIG. 6. The maximum power as a function of the average number of photons m, scaled by 1/√m, for N = 10 two-level systems. The legend displays the corresponding values of β for each plot, with lines drawn between data points to help visualise the data.
JeLLyFysh-Version1.0 -- a Python application for all-atom event-chain Monte Carlo We present JeLLyFysh-Version1.0, an open-source Python application for event-chain Monte Carlo (ECMC), an event-driven irreversible Markov-chain Monte Carlo algorithm for classical N-body simulations in statistical mechanics, biophysics and electrochemistry. The application's architecture closely mirrors the mathematical formulation of ECMC. Local potentials, long-ranged Coulomb interactions and multi-body bending potentials are covered, as well as bounding potentials and cell systems including the cell-veto algorithm. Configuration files illustrate a number of specific implementations for interacting atoms, dipoles, and water molecules. Introduction Event-chain Monte Carlo (ECMC) is an irreversible continuous-time Markovchain algorithm [5,28] that often equilibrates faster than its reversible counterparts [30,19,22,23,24]. ECMC has been successfully applied to the classic Nbody all-atom problem in statistical physics [4,17]. The algorithm implements the time evolution of a piecewise non-interacting, deterministic, system [6]. Each straight-line, non-interacting leg of this time evolution terminates in an event, defined through the event time at which it takes place and through the out-state, the updated starting configuration for the ensuing leg. An event is chosen as the earliest of a set of candidate events, each of which is sampled using information contained in a so-called factor. The entire trajectory samples the equilibrium probability distribution. . . ), each factor must at all times independently accept the continued non-interacting evolution, and must determine a candidate event time at which this is no longer the case. The earliest candidate event time (which determines the veto) and its out-state yield the next event (the event E b is triggered by a 2 ). In JF-V1.0, after committing an event to the global state, candidate events with certain tags are trashed (tags t 1, t 3 at E b ) or maintained active (tags t 2, t 4 at E b ), and others are newly activated. JF introduces non-confirmed events and also pseudo-factors, which complement the factors of ECMC, and which may also trigger events. ECMC departs from virtually all Monte Carlo methods in that it does not evaluate the equilibrium probability density (or its ratios). In statistical physics, ECMC thus computes neither the total potential (or its changes) nor the total force on individual point masses. Rather, the decision to continue on the current leg of the non-interacting time evolution builds on a consensus which is established through the factorized Metropolis algorithm [28]. A veto puts an end to the consensus, triggers the event, and terminates the leg (see Fig. 1). In the continuous problems for which ECMC has been conceived, the veto is caused by a single factor. The resulting event-driven ECMC algorithm is reminiscent of molecular dynamics, and in particular of event-driven molecular dynamics [1,2,3], in that there are velocity vectors (which appear as lifting variables). These velocities do not correspond to the physical (Newtonian) dynamics of the system. ECMC differs from molecular dynamics in three respects: First, ECMC is event-driven, and it remains approximation-free, for any interaction potential [32], whereas event-driven molecular dynamics is restricted to hard-sphere or piecewise constant potentials. 
(Interaction potentials in biophysical simulation codes have been coarsely discretized [8] in order to fit into the event-driven framework [36,33,34].) Second, in ECMC, most point masses are at rest at any time, whereas in molecular dynamics, all point masses typically have nonzero velocities. In ECMC, an arbitrary fixed number of (independently) active point masses (with non-zero velocities) and identical velocity vectors for all of them may be chosen. In JF-V1.0, as in most previous applications of ECMC, only a single independent point mass is active. The ECMC dynamics is thus very simple, yet it mixes and relaxes at a rate at least as fast as in molecular dynamics [19,23,24]. Third, ECMC by construction exactly samples the Boltzmann (canonical) distribution, whereas molecular dynamics is in principle micro-canonical, that is, energy-conserving. Molecular dynamics is thus generally coupled to a thermostat in order to yield the Boltzmann distribution. The thermostat there also eliminates drift in physical observables due to integration errors. ECMC is free from truncation and discretization errors. ECMC samples the equilibrium Boltzmann distribution without being itself in equilibrium, as it violates the detailed-balance condition. Remarkably, it establishes the aforementioned consensus and proceeds from one event to the next with O(1) computational effort even for long-range potentials, as was demonstrated for soft-sphere models, the Coulomb plasma [18,19], and for the simple point-charge with flexible water molecules (SPC/Fw) model [39,9]. JeLLyFysh (JF) is a general-purpose Python application that implements ECMC for a wide range of physical systems, from point masses interacting with central potentials to composite point objects such as finite-size dipoles, water molecules, and eventually peptides and polymers. The application's architecture closely mirrors the mathematical formulation that was presented previously (see [9,Sect II]). The application can run on virtually any computer, but it also allows for multiprocessing and, in the future, for parallel implementations. It is being developed as an open-source project on GitHub. Source code may be forked, modified, and then merged back into the project (see Section 6 for access information and licence issues). Contributions to the application are encouraged. The present paper introduces the general architecture and the key features of JF. It accompanies the first public release of the application, JeL-LyFysh-Version1.0 (JF-V1.0). JF-V1.0 implements ECMC for homogeneous, translation-invariant N -body systems in a regularly shaped periodic simulation box and with interactions that can be long-ranged. In addition, the present paper presents a cookbook that illustrates the application for simplified core examples that can be run from configuration files and validated against published data [9]. A full-scale simulation benchmark against the Lammps application is published elsewhere [35]. The JF application presented in this paper is intended to grow into a basis code that will foster the development of irreversible Markov-chain algorithms and will apply to a wide range of computational problems, from statistical physics to field theory [13]. It may prove useful in domains that have traditionally been reserved to molecular dynamics, and in particular in the all-atom Coulomb problem in biophysics and electrochemistry. 
The content of the present paper is as follows: The remainder of Section 1 discusses the general setting of JF as it implements ECMC. Section 2 describes its mediator-based architecture [10]. Section 3 discusses how the eponymous events of ECMC are determined in the event handlers of JF. Section 4 presents system definitions and tools, such as the user interface realized through configuration files, the simulation box, the cell systems, and the interaction potentials. Section 5, the cookbook, discusses a number of worked-out examples for previously presented systems of atoms, dipoles or water molecules with Coulomb interactions [9]. Section 6 discusses licence issues, code availability and code specifications. Section 7 presents an outlook on essential challenges and a preview of future releases of the application.

Configurations, factors, pseudo-factors, events, event handlers

In ECMC, configurations c = {s_1, . . . , s_i, . . . , s_N} are described by continuous time-dependent variables s_i(t), where s_i(t) represents the position of the ith of N point masses (although it may also stand for the continuous angle of a spin on a lattice [30]). JF is an event-driven implementation of ECMC, and it treats point masses and certain collective variables (such as the barycenter of a composite point object) on an equal footing. Rather than the time-dependent variables s_i(t), its fundamental particles (Particle objects) are individually time-sliced positions (of the point masses or composite point objects). Non-zero velocities and time stamps are also recorded, when applicable. The full information can be packed into units (Unit objects) that are moved around the application (see Section 1.2). Each configuration c has a total potential U({s_1, . . . , s_N}), and its equilibrium probability density π is given by the Boltzmann weight

\pi(c) \propto \exp\left[-\beta U(c)\right],   (1)

that is sampled by ECMC (see [9]). The total potential U is decomposed as

U(c) = \sum_{M \in \mathcal{M}} U_M(c_M),   (2)

and the Boltzmann weight of eq. (1) is written as a product over terms that depend on factors M, with their corresponding factor potentials U_M. A factor M = (I_M, T_M) consists of an index set I_M and of a factor type T_M, and \mathcal{M} is the set of factors that have a non-zero contribution to eq. (2) for some configuration c. In the SPC/Fw water model, for example, one factor M with factor type T_M = Coulomb might describe all the Coulomb potentials between two given water molecules, and the factor index set I_M would contain the identifiers (indices) of the involved four hydrogens and two oxygens (see Section 5.3). ECMC relies on the factorized Metropolis algorithm [28], where the move from a configuration c to another one, c′, is accepted with probability

p^{\text{Fact}}(c \to c') = \prod_{M \in \mathcal{M}} \min\left[1, \mathrm{e}^{-\beta \Delta U_M}\right], \qquad \Delta U_M = U_M(c'_M) - U_M(c_M).   (3)

Rather than to evaluate the right-hand side of eq. (3), the product over the factors is interpreted as corresponding to a conjunction of independent Boolean random variables

X^{\text{Fact}}(c \to c') = \bigwedge_{M \in \mathcal{M}} X_M(c_M \to c'_M).   (4)

In this equation, X^Fact(c → c′) is "True" (the proposed Monte Carlo move is accepted) if the independently sampled factorwise Booleans X_M are all "True". Equivalently, the move c → c′ is accepted if it is independently accepted by all factors. This realizes the aforementioned consensus decision (see Fig. 1). For an infinitesimal displacement, the random variable X_M of only a single factor M can be "False", and the factor M vetoes the consensus, creates an event, and starts a new leg.
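As a toy illustration of the consensus decision of eqs. (3)-(4), the following sketch (plain Python, not part of JF, with a made-up list of factor potential changes) accepts a finite move only if every factor accepts it independently:

```python
import math
import random

def factor_accepts(delta_U_M, beta=1.0):
    """Sample the Boolean X_M for one factor: accept with probability min[1, exp(-beta * dU_M)]."""
    return random.random() < min(1.0, math.exp(-beta * delta_U_M))

def factorized_metropolis(factor_potential_changes, beta=1.0):
    """Accept the move c -> c' only if all factors accept independently (eq. (4)).

    factor_potential_changes: iterable of Delta U_M = U_M(c'_M) - U_M(c_M),
    one entry per factor M that changes under the move.
    """
    return all(factor_accepts(dU, beta) for dU in factor_potential_changes)

# Example: three hypothetical factors with given potential changes under a proposed move
random.seed(1)
delta_Us = [0.2, -0.5, 0.05]
print(factorized_metropolis(delta_Us, beta=1.0))
```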
In this process, M requires only the knowledge of the factor in-state (based on the configuration c M , and the information on the move), and the factor out-state (based on c M ) provides all information on the evolution of the system after the event. The event is needed in order to enforce the global-balance condition (see Fig. 2a). In this process, lifting variables [7], corresponding to generalized velocities, allow one to repeat moves of the same type (same particle, same displacement), as long as they are accepted by consensus. 1 Physical and lifting variables build the overcomplete description of the Boltzmann distribution at the base of ECMC, and they correspond to the global physical and global lifting states of JF, its global state. JF, the computer application, is entirely formulated in terms of events, beyond the requirements of the implemented event-driven ECMC algorithm. The application relies on the concept of pseudo-factors, which complement the factors in eq. (2), but are independent of potentials and without incidence on the global-balance condition (see Fig. 2b). In JF, the sampling of configuration space, for example, is expressed through events triggered by pseudo-factors. Pseudo-factors also trigger events that interrupt one continuous motion (one "event chain" [5]) and start a new one. Even the start and the end of each run of the application are formulated as events triggered by pseudo-factors. In ECMC, among all factors M in eq. (2), only those for which U M changes along one leg can trigger events. In JF, these factors are identified in a separate element of the application, the activator (see Section 2.4), and they are realized in yet other elements, the event handlers. An event handlers may require an in-state. It then computes the candidate event time and its out-state (from the in-state, from the factor potential, and from random elements). The complex operation of the activator and the event handlers is organized in JF-V1.0 with the help of a tag activator, with tags essentially providing finer distinction than the factor types T M . A tagger identifies a certain pool of factors, and also singles out factors that are to be activated for each tag. The triggering of an event associated with a given tag entails the trashing of candidate events with certain tags, while other candidate events are maintained (see Fig. 1). Also, new candidate events have to be computed by event handlers with given tags. This entire process is managed by the tag activator. Global state, internal state In the event-driven formulation of ECMC, a point mass with identifier σ and with zero velocity is simply represented through its position, while an active point mass (with non-zero velocity) is represented through a time-sliced position s σ (t σ ), a time stamp σ(t σ ) and a velocity v σ : An active point mass thus requires storing of a velocity v i and of a time stamp t σ , in addition to the time-sliced position s σ (t σ ). In JF, the global state traces all the information in eq. (5). It is broken up into the global physical state, for the time-sliced positions s σ , and the global lifting state, for the non-zero velocities v σ and the time stamps t σ . JF represents composite point objects as trees described by nodes. Leaf nodes correspond to the individual point masses. A tree's inner nodes may represent, for example, the barycenters of parts of a molecule, and the root node that of the entire molecule (see Fig. 3a-b). 
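To make the tree picture concrete, the following dataclass sketch mirrors the description of particles, units and nodes given here and in the Fig. 3 caption. It is an illustration only: the class layout, attribute names and the example molecule are simplified stand-ins, not JF's actual Particle, Unit and Node interfaces.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Sequence

@dataclass
class Particle:
    """Time-sliced physical information kept in the global physical state."""
    position: Sequence[float]
    charge: Optional[dict] = None          # e.g. {"electric_charge": -0.8}

@dataclass
class Unit:
    """Copy of a node's global-state information that is passed around the application."""
    identifier: tuple                       # tree identifier, e.g. (3,) for a root, (3, 1) for a leaf
    position: Sequence[float]
    velocity: Optional[Sequence[float]] = None   # None for non-active units
    time_stamp: Optional[float] = None           # None for non-active units

@dataclass
class Node:
    """Tree node: leaves are point masses, inner/root nodes are barycenters."""
    particle: Particle
    children: List["Node"] = field(default_factory=list)

# A hypothetical three-atom molecule: root node (barycenter) with three leaf nodes
atom_a = Node(Particle(position=[0.0, 0.0, 0.0]))
atom_b = Node(Particle(position=[1.0, 0.0, 0.0]))
atom_c = Node(Particle(position=[0.0, 1.0, 0.0]))
molecule = Node(Particle(position=[0.33, 0.33, 0.0]), children=[atom_a, atom_b, atom_c])
```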
The velocities inside a composite point object are kept consistent, which means that the global lifting state includes non-zero velocities and time stamps of inner and root nodes. The storing element of the global state in JF is the state handler (see Section 2.3). The global state is not directly accessed by other elements of the application, but branches of the tree can be extracted (copied) temporarily, together with their unit information. Independent and induced units differentiate between the units that appear in ECMC and those that are carried along in order to assure consistency (see Fig. 3).

For internal computations, the global state may be supplemented by an internal state that is kept, not in the state handler, but in the activator part of the application (see Section 2.4). In JF-V1.0, the internal state consists in cell-occupancy systems, which associate identifiers of composite point objects or point masses to cells. (An identifier is a generalized particle index with, in the case of a tree, a number of elements that correspond to the level of the corresponding node.) In JF, cell-occupancy systems are used for book-keeping, and also for cell-based bounding potentials.

Figure 3: Tree representation, with leaf nodes for the individual atoms and higher-level nodes for barycenters. Nodes each have a particle (a Particle object) containing a position vector and charge values. A unit (a Unit object), associated with a node, copies out the particle's identifier and its complete global-state information. (c): Internal representation of composite point objects with separate cell systems for particle identifiers on different levels. On the leaf level, only one kind of particles is tracked.

JF-V1.0 requires consistency between the time-sliced particle information and the units. This means that the time-sliced position s_σ(t_σ) and the time-dependent position s_σ(t) in eq. (5) belong to the same cell (see Fig. 2b). Several cell-occupancy systems may coexist within the internal state (possibly on different tree-levels and with different cell systems, see Fig. 3c and Section 5.3.4). ECMC requires time-slicing only for units whose velocities are modified. Beyond the consistency requirements, JF-V1.0 performs time-slicing also for unconfirmed events, that is, for triggered events for which, after all, the out-state continues the straight-line motion of the in-state (see Section 3.1.2).

Lifting schemes

In its lifted representation of the Boltzmann distribution, ECMC introduces velocities for which there are many choices, that is, lifting schemes. The number of independent active units can in particular be set to any value n_ac > 1 and then held fixed throughout a given run. This generalizes easily from the known n_ac = 1 case [12]. A simple n_ac-conserving lifting scheme uses a factor-derivative table (see [9, Fig. 2]), but confirms the active out-state unit only if the corresponding unit is not active in the in-state (its velocity is None). For |I_M| ≥ 3, the lifting scheme (the way of determining the out-state given the in-state) is not unique, and its choice influences the ECMC dynamics [9]. In JF-V1.0, different lifting-scheme classes are provided in the JF lifting package. They all construct out-state velocities for independent units that equal the in-state velocities. This appears as the most natural choice in spatially homogeneous systems [5].

Multiprocessing

In ECMC, factors are statistically independent.
In JF, therefore, the event handlers that realize these factors can be run independently on a multiprocessor machine. With multiprocessor support enabled, candidate events are concurrently determined by event handlers on separate processes, using the Python multiprocessing module. Candidate event times are then first requested in parallel from active event handlers, and then the out-state for the selected event. Given a sufficient number of available processors, out-states may be computed for candidate events in advance, before they are requested (see Section 2.1). The event handlers themselves correspond to processes that usually last for the entire duration of one ECMC run. When not computing, event handlers are either in idle stage waiting to compute a candidate event time or in suspended stage waiting to compute an out-state. Using multiple processes instead of threads circumvents the Python global interpreter lock, but the incompressible time lag due to data exchange slows down the multiprocessor implementation of the mediator with respect to the single-processor implementation.

Parallelization

ECMC generalizes to more than one independent active unit, and a sequential, single-process ECMC computation remains trivially correct for arbitrary n_ac (although JF-V1.0 only fully implements the n_ac = 1 case). The relative independence of a small number of independent active units in a large system, for 1 ≤ n_ac ≪ N, allows one to consider the simultaneous committing in different processes of n_pr events to the shared global state. (A conflict arises if this disagrees with what would result by committing them in a single process.) If n_pr ≪ N, conflicts between processes disappear (for short-range interacting systems) if nearby active units are treated in a single process (see Fig. 4a). The parallel implementation of ECMC, for short-range interactions, is conceptually much simpler than that of event-driven molecular dynamics [29,14,20], and it may well extend to long-range interacting systems.

An alternative type of parallel ECMC, domain decomposition into n_ac stripes, was demonstrated for two-dimensional hard-sphere systems, and considerable speed-up was reached [16]. Here, stripes are oriented parallel to the velocities, with one active unit per stripe. Stripes are isolated from each other by immobile layers of spheres [16], which however cause rejections (or reversals of one or more components of the velocity). The stripe decomposition eliminates all scheduling conflicts. Like any domain decomposition [29], it is restricted to physical models with short-range interactions. It is not implemented in JF-V1.0 (see Fig. 4b).

Figure 4: The separation region (of width ∆) is wider than d, so that all conflict between stripes is avoided (see [16]).

2. JF architecture

JF adopts the design pattern based on a mediator [10], which serves as the central hub for the other elements that do not directly connect to each other. In this way, interfaces and data exchange are particularly simple. The mediator design maximizes modularity in view of future extensions of the application.

Mediator

The mediator is doubled up into two modules (with SingleProcessMediator and MultiProcessMediator classes). The run method of either class is called by the executable run.py script of the application, and it loops over the legs of the continuous-time evolution. The loop is interrupted when an EndOfRun exception is raised, and a post run method is invoked.
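The run method and one leg of this loop can be sketched schematically as follows (illustrative Python, not the JF implementation; the numbered steps of a leg are described in detail below, and the optional output step is omitted):

class EndOfRun(Exception):
    """Raised by the end-of-run event handler to leave the run loop."""

class SchematicMediator:
    def __init__(self, state_handler, activator, scheduler):
        self.state_handler = state_handler
        self.activator = activator
        self.scheduler = scheduler
        self.preceding_event_handler = None

    def run(self):
        # Loop over the legs of the continuous-time evolution until EndOfRun is raised.
        try:
            while True:
                self.preceding_event_handler = self.one_leg()
        except EndOfRun:
            self.post_run()

    def one_leg(self):
        active_state = self.state_handler.extract_active_global_state()          # step 1
        to_run = self.activator.get_event_handlers_to_run(
            self.preceding_event_handler, active_state)                           # step 2
        for event_handler, identifier in to_run.items():
            in_state = self.state_handler.extract_from_global_state(identifier)   # step 3
            self.scheduler.push_event(
                event_handler.send_event_time(in_state), event_handler)           # step 4
        event_handler = self.scheduler.get_succeeding_event()                     # step 5
        out_state = event_handler.send_out_state()                                # step 6
        self.state_handler.insert_into_global_state(out_state)                    # step 7
        for trashed in self.activator.get_trashable_events(event_handler):        # step 8
            self.scheduler.trash_event(trashed)
        return event_handler

    def post_run(self):
        pass  # close output handlers, write final output, etc.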
For the single-process mediator, all the other elements are instances of classes that provide public methods. In particular, the mediator interacts with event handlers. For the multi-process mediator, each event handler has its own autonomous iteration loop and runs in a separate process. It exchanges data with the mediator through a two-way pipe. Receiving ends on both sides detect when data is available using the pipe's recv methods.

In JF-V1.0, the same event-handler classes are used for the single-process and multi-process mediator classes. The multi-process mediator achieves this through a monkey-patching technique. It dynamically adds a run in process method to each created instance of an event handler, which then runs as an autonomous iteration loop in a process and reacts to shared flags set by the mediator. The multi-process mediator in addition decorates the event handler's send event time and send out state methods so that output is not simply returned (as it is in the single-process mediator) but rather transmitted through a pipe. Only the mediator accesses the event handlers, and these re-definitions of methods and classes (which abolish the need for two versions of each event-handler class) are certain not to produce undesired side effects.

On one leg of the continuous-time evolution, the mediator goes through nine steps (see Fig. 5). In step 1, the active global state (that part of the global state that appears in the global lifting state) is obtained from the state handler. (In the tree state handler of JF-V1.0, branches of independent units are created for all identifiers that appear in the lifting state.) Knowing the preceding event handler (which initially is None) and the active global state, it then obtains from the activator, in step 2, the event handlers to activate together with their in-state identifiers. For this, the activator may rely on its internal state, but not on the global state, to which it has no access. In step 3, the corresponding in-states are extracted (that is, copied) from the state handler. In step 4, candidate event times are requested from the appropriate event handlers and pushed into the scheduler's push event method. In step 5, the mediator obtains the earliest candidate event time from the scheduler's get succeeding event method and asks its event handler for the event out-state (step 6) to be committed to the global state (step 7). The activator, in step 8, determines which candidate events are to be trashed (in JF-V1.0: based on their tags), that is, which candidate event times are to be eliminated from the scheduler. Also, the activator collects the corresponding event handlers, as they become available to determine new candidate events. In the optional final step 9, the mediator may connect (via the input-output handler) to an output handler, depending on the preceding event handler. A mediating method defines the arguments sent to the output handler (for example the extracted global state), and considerable computations may take place there.

Figure 5: JF architecture, built on the mediator design pattern. The iteration loop takes the system from one event to the next (for example from E_a to E_b in Fig. 1). All elements of JF interact with the mediator, but not with each other. The multi-process mediator interacts with event handlers running on separate processes, and exchanges data via pipes.

The multi-process mediator uses a single pipe to receive the candidate event time and the out-state from an event handler.
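A stripped-down illustration of such a two-way pipe (using the multiprocessing module directly; this is not JF code, and the transmitted objects are placeholders):

import multiprocessing

def event_handler_process(connection):
    # Receives an in-state, answers with a candidate event time and, on request, an out-state.
    in_state = connection.recv()
    connection.send(("candidate_event_time", 0.7342))
    if connection.recv() == "send_out_state":
        connection.send(("out_state", in_state))

if __name__ == "__main__":
    mediator_end, handler_end = multiprocessing.Pipe()
    process = multiprocessing.Process(target=event_handler_process, args=(handler_end,))
    process.start()
    mediator_end.send({"identifier": (0,), "position": (0.1, 0.2, 0.3)})  # in-state
    print(mediator_end.recv())   # ('candidate_event_time', 0.7342)
    mediator_end.send("send_out_state")
    print(mediator_end.recv())   # ('out_state', {...})
    process.join()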
In order to distinguish the received object, the mediator assigns four different stages to the event handlers (the idle, event time started, suspended, and out state started stages). The assigned stage determines which flags can be set to start the send event time or send out state methods. It also determines the nature of the data contained in the pipe. In the idle stage, the mediator can set the starting flag, after which the event handler will wait to receive the in-state through the pipe. This starts the event time started stage, during which the event handler determines the next candidate event time and places it into the pipe. After the mediator has recovered the data from the pipe, it places the event handler into the suspended stage. If requested (by flags), the event handler can then either compute the out-state (out state started stage), or else revert to the event time started stage. The strategy for suspending an event handler or for having it start an out-state computation (before the request) can be adjusted to the availability of physical processors on the multi-processor machine. However, in JF-V1.0, the communication via pipes presents a computational bottleneck.

Figure 6: Basic stages of event handlers for factors and pseudo-factors (stages 1 and 3 relevant for the multi-process mediator only). In the idle and suspended stages, the event handler is halted (via flags controlled by the multi-process mediator), thus liberating resources for other candidate-event-time requests. With the multi-process mediator, candidate out-states may be computed before the out-state request arrives.

Event handlers

Event handlers (instances of a number of classes that inherit from the abstract EventHandler class) provide the send event time and send out state methods that return candidate events. These candidate events either become events of a factor or pseudo-factor or they will be trashed.

When realizing a factor or a pseudo-factor, event handlers receive the in-state as an argument of the send event time method. The send out state method then takes no argument. In contrast, event handlers that realize a set of factors or pseudo-factors request candidate event times without first specifying the complete in-state, because the element of the set that triggers the event is yet unknown at the event-time request (see Section 3.2.2 for examples of event handlers that realize sets of factors). The send event time method then takes the part of the in-state which is necessary to calculate the candidate event time. Also, it may return supplementary arguments together with the candidate event time, which are used by the mediator to construct the full in-state. The in-state is then an argument of the send out state method, as it was not sent earlier.

In JF-V1.0, each run requires a start-of-run event handler (an instance of a class that inherits from the abstract StartOfRunEventHandler class), and it cannot terminate properly without an end-of-run event handler. Section 3 discusses several event-handler classes that are provided.

State handler

The state handler (an instance of a class that inherits from the abstract StateHandler class) is the sole separate element of JF to access the global state. In JF-V1.0, the global physical state (all positions of point masses and composite point objects) is contained in an instance of the TreePhysicalState class, represented as a tree consisting of nodes (each node corresponds to a Node object).
Each node contains a particle (a Particle object) which holds a time-sliced position. In JF-V1.0, each leaf node may in addition have charges as a Python dictionary mapping the name of the charge onto its value. Each tree is specified through its root node. Root nodes can be iterated over (in JF-V1.0, they are members of a list). Each node is connected to its parent and its children, which can also be iterated over. In JF-V1.0, the children are again members of a list. These lists imply unique identifiers of nodes and their particles as tuples. The first entry of the tuple gives a node's root node list index, followed by the indices on lower levels down to the node itself (see Fig. 3).

The global lifting state is stored in JF-V1.0 in a Python dictionary mapping the implicit particle identifier onto its time stamp and its velocity vector. This information is contained in an instance of the TreeLiftingState class. Both the physical and lifting states are combined in the TreeStateHandler, which implements all methods of a state handler.

To communicate with other elements of the JF application (such as the event handlers and the activator) via the mediator, the state handler combines the information of the global physical and the global lifting state into units (that is, temporary Unit objects, see Fig. 7). A given physical-state and lifting-state information for a node in the state handler is mirrored (that is, copied) to a unit containing its identifier, position, charge, velocity and time stamp. All other elements can access, modify, and return units. This provides a common packaging format across JF. The explicit identifier of a unit allows the program to integrate changed units into the state handler's global state. In the tree state handler of JF-V1.0, the local tree structure of nodes can be extracted into a branch of cnodes, that is, nodes containing units.

Each event handler only requires the global state reduced to a single factor in order to determine candidate event times and out-states. As a design principle in JF-V1.0, the event handlers keep the time-slicing of composite point objects and their point masses consistent. Information sent to event handlers via the mediator is therefore structured as branches, that is, the information of a node with its ancestors and descendants. The state handler's extract from global state method creates a branch for a given identifier of a particle by constructing a temporary copy of the immutable node structure of the state handler using cnodes. Out-states of events in the form of branches can be committed to the global state using the insert into global state method.

The extract active global state method, the first of two additional methods provided by the state handler, extracts the part of the global state which appears in the global lifting state. The tree state handler constructs the minimal number of branches, where each node contains an active unit, so that all implicit identifiers appearing in the global lifting state are represented. The activator may then determine the factors which are to be activated. The method is also used to time-slice the entire global state (see Section 3.2.2). Second, the extract global state method extracts the full global state. (For the tree state handler of JF-V1.0, this corresponds to a branch for each root node.) This method does not copy the positions and velocities. In JF-V1.0, the global physical state is initialized via the input handler within the input-output handler (see Section 2.6).
The initial lifting state, however, is set via the out-state of the start-of-run event handler, which is committed to the global state at the beginning of the program (see Section 3.2.2). This means that, in JF-V1.0, the lifting state cannot be initialized from a file.

Activator

The activator, a separate element of the JF application, is an instance of a class that inherits from the abstract Activator class. At the beginning of each leg, the activator provides to the mediator the new event handlers which are to be run, using the get event handlers to run method. (As required by the mediator design pattern, no data flows directly between the activator and the event handlers, although it initially obtains their references, and subsequently manages them.) The activator also returns associated in-state identifiers of particles within the global state. The extracted parts of the global state of these are needed by the event handlers to compute their candidate event time (the identifier may be None if no information is needed). Finally, it readies for the mediator a list of trashable candidate events at the end of each leg in the get trashable events method, once the mediator has committed the preceding event to the global state via the state handler.

Figure 8: Tag activator, and its complex interaction with the mediator. It readies event handlers and in-state identifiers, provides internal-state information for an out-state request, and identifies the trashable candidate events, as a function of the preceding event.

In JF-V1.0, the activator is an instance of the TagActivator class (that inherits from the Activator class). The tag activator's operations depend on the interdependence of tags of event handlers and their events. Event handlers receive their tags from instances of classes that are located in the activator, derive from the abstract Tagger class, and are called "taggers". A tagger centralizes common operations for identically tagged event handlers (see Fig. 8). On initialization, the tagger receives its tag (a string-valued tag attribute) and an event handler (that is, a single instance), of which it creates as many identical event-handler copies as needed (using the Python deepcopy method). Each tagger provides a yield identifiers send event time method which generates in-state identifiers based on the branches containing independent active units. (This means that the taggers are implemented especially for the TreeStateHandler; the TagActivator, however, is not restricted to this, since it just transmits the extracted active global state.) These in-states are passed (after extracting the part of the global state related to the identifiers from the state handler) to the send event time method of the tagger's event handlers. The number of event handlers inside a tagger should match the maximum number of candidate events with the given tag that are simultaneously present in the scheduler. In this paper, event handlers (and their candidate events) are referred to by tags, although in JF they do not have the tag attribute of their taggers.

On initialization, a tagger also receives a list of tags for event handlers that it creates, as well as a list of tags for event handlers that need to be trashed. The tag activator converts this information of all taggers into its internal create taggers and trash taggers dictionaries. Additionally, the tag activator creates an internal dictionary mapping from an event handler onto the corresponding tagger (the event handler tagger dictionary).
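A heavily simplified tagger, in the spirit of this description (illustrative only; attribute and method names follow the text, but the logic is reduced to its bare bones):

from copy import deepcopy

class SketchTagger:
    def __init__(self, tag, event_handler, number_event_handlers, creates, trashes):
        self.tag = tag
        self.creates = creates          # tags activated after an event with this tag
        self.trashes = trashes          # tags whose candidate events are then trashed
        # Identical copies of a single prototype event handler, as described above.
        self.event_handlers = [deepcopy(event_handler)
                               for _ in range(number_event_handlers)]

    def yield_identifiers_send_event_time(self, active_branches):
        # Toy stand-in: one in-state identifier per branch with an independent active unit.
        for branch in active_branches:
            yield branch["identifier"]

coulomb_tagger = SketchTagger("coulomb", event_handler=object(),
                              number_event_handlers=1,
                              creates=["coulomb"], trashes=["coulomb"])
print(list(coulomb_tagger.yield_identifiers_send_event_time([{"identifier": (0,)}])))
# -> [(0,)]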
A call of the get event handlers to run method is accompanied by the event handler which created the preceding event and by the extracted active global state. The event handler is first mapped onto its tagger. The taggers returned by the create taggers dictionary then generate the in-state identifiers, which are returned together with the corresponding event handlers (in a dictionary). For the initial call of the get event handlers to run method, no information on the preceding event handler can be provided. This is solved by initially returning the start-of-run event handler. Similarly, the trash taggers dictionary is used on each call of get trashable events. The corresponding event handlers are then also liberated, meaning that the activator can return them in the next call of the get event handlers to run method. For this, the activator internally splits the pool of all event handlers of a given tag into those with a scheduled candidate event and the ones that are available to take on new candidate events.

The activator also maintains the internal state. In JF-V1.0, the internal state consists in cell-occupancy systems. Therefore, the internal state is an instance of a class that inherits from the CellOccupancy class, which itself inherits from the abstract InternalState class. Taggers may refer to internal-state information to determine the in-states of their event handlers. The cell-occupancy system does not double up on the information available in the state handler. It keeps track of the identifier of a particle (which may correspond to a point mass or a composite point object), but does not store or copy the particle itself (see Section 4.3). The mediator can access the internal state via the get info internal state method (see Fig. 8). To maintain consistency between the global state and the internal state (and between a particle and its associated unit), a pseudo-factor triggers an event for each active unit tracked by the cell-occupancy system that crosses a cell boundary (see Fig. 2b). The internal state is updated in each call of the get event handlers to run method.

Scheduler

The scheduler is an instance of a class inheriting from the abstract Scheduler class. It keeps track of the candidate events and their associated event-handler references. Its get succeeding event method selects among the candidate events the one with the smallest candidate event time, and it returns the reference of the corresponding event handler. Its push event method receives a new candidate event time and event-handler reference. Its trash event method eliminates a candidate event, based on the reference of its event handler. In JF-V1.0, the scheduler is an instance of the HeapScheduler class. It implements a priority queue through the Python heapq module.

Input-output handler

The input-output handler is an instance of the InputOutputHandler class. The input-output handler connects the JF application to the outside world, and it is accessible by the mediator. The input-output handler breaks up into one input handler (an instance of a class that inherits from the abstract InputHandler class) and a possibly empty list of output handlers (instances of classes that inherit from the abstract OutputHandler class). These are accessed by the mediator only via the input-output handler. Output handlers can also perform significant calculations. The input handler enters the initial global physical state into the application.
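A minimal heap-based scheduler along the lines of the scheduler described above could look as follows (a sketch with the heapq module, not the JF class; trashed candidate events are removed lazily, assuming that each event handler holds at most one pending candidate event):

import heapq
from itertools import count

class SketchHeapScheduler:
    def __init__(self):
        self._heap = []
        self._counter = count()   # tie-breaker, so that event handlers are never compared
        self._trashed = set()

    def push_event(self, candidate_event_time, event_handler):
        heapq.heappush(self._heap,
                       (candidate_event_time, next(self._counter), event_handler))

    def trash_event(self, event_handler):
        self._trashed.add(id(event_handler))

    def get_succeeding_event(self):
        # Return the event handler with the smallest candidate event time.
        while self._heap:
            _, _, event_handler = heapq.heappop(self._heap)
            if id(event_handler) in self._trashed:
                self._trashed.discard(id(event_handler))   # lazy deletion
                continue
            return event_handler
        raise RuntimeError("no candidate events left")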
JF-V1.0 provides an input handler that enters protein-data-bank formatted data (.pdb files) as well as an input handler which samples a random initial state. The initial state (constructed as a tree for the case of the tree state handler) is returned when calling the read method of the input-output handler, which calls the read method of the input handler. The output handlers serve many purposes, from the output in .pdb files to the sampling of correlation functions and other observables, to a dump of the entire run. They obtain their arguments (for example the entire global state) via their write methods. The write method of the input-output handler receives the desired output handler as an additional argument through the mediating methods of specific event handlers. These are triggered for example after a sampling or an end-of-run event. The corresponding event handlers are initialized with the name of their output handlers.

JF event-handler classes

Event-handler classes differ in how they provide the send event time and send out state methods. Event handlers split into those that realize factors and sets of factors and those that realize pseudo-factors and sets of pseudo-factors. The first are required by ECMC, while the second permit JF to represent the entire run in terms of events.

Event handlers for factors or sets of factors

Event handlers that realize a factor M, or a set of factors, are implemented in different ways depending on the analytic properties of the factor potential U_M and on the number of independent active units.

Invertible-potential event handlers

In JF, an invertible factor potential U_M (an instance of a class that inherits from the abstract InvertiblePotential class) has its event rate integrated in closed form along a straight-line trajectory (see Fig. 1). The sampled cumulative event rate (U_M^+ in [9, eq. (45)]) provides the displacement method. Together with the time stamp and the velocity of the active unit, this determines the candidate event time. In JF-V1.0, the two-leaf-unit event handler (an instance of the TwoLeafUnitEventHandler class) is characterized by two independent units at the leaf level. It realizes a two-particle factor with an invertible factor potential. The in-state (an argument of the send event time method) is stored internally, and it remains available for the subsequent call of the send out state method. Because of the two independent units, the lifting simply consists in these two units switching their velocities (using the internal exchange velocity method) and keeping the velocities of all induced units consistent.

Event handlers for factors with bounding potential

For a factor potential U_M that is not inverted (by choice, or by necessity because it is non-invertible), the cumulative event rate U_M^+ is unavailable (or not used) and so is its displacement method. Only the derivative method is used. To realize such a factor without an inverted factor potential, an event handler then uses the displacement method of an associated bounding potential whose event rate at least equals that of U_M and that is itself invertible. A non-inverted U_M may be associated with more than one bounding potential, each corresponding to a different event handler (the molecular Coulomb factors in Section 5.2 associate the Coulomb factor potential in the same run with different bounding potentials).
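In either case, the displacement sampled from an invertible (factor or bounding) potential is converted into a candidate event time. A minimal sketch (illustrative names; the potential change is drawn as an exponential random number in units of 1/β, as in standard ECMC):

import random

def candidate_event_time(time_stamp, speed, displacement_method, beta=1.0):
    # The cumulative uphill potential change at the event is exponentially distributed.
    potential_change = random.expovariate(beta)
    # The potential's displacement method inverts the cumulative event rate and
    # yields a spatial displacement of the active unit.
    displacement = displacement_method(potential_change)
    # Time stamp and velocity (here its non-zero component, 'speed') give the event time.
    return time_stamp + displacement / speed

# Toy displacement method of a potential with a constant event rate q = 2.0:
print(candidate_event_time(0.0, 1.0, lambda delta_u: delta_u / 2.0))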
In JF-V1.0, a number of event handlers are instances of classes that inherit from the EventHandlerWithBoundingPotential class, and that realize factors with bounding potentials. Each of these event handlers translates the sampled displacement of the bounding potential into a candidate event time. On an out-state request (via the send out state method), the event handler confirms the event with a probability that is given by the ratio of the event rates of the factor potential and the bounding potential. The out-state consists of independent units together with their branches of induced units. For two independent units, the lifting limits itself to the application of a local exchange velocity method, which exchanges independent-unit velocities and enforces velocities for the induced units. For more independent units, the out-state calculation requires a lifting scheme. For an unconfirmed event, no lifting takes place. In JF-V1.0, confirmed and unconfirmed events have time-sliced out-states. Inefficient treatment of unconfirmed events is the main limitation of this version of the application.

A special case of a bounding potential is the cell-based bounding potential, which features piecewise cell-bounded event rates. The two independent units are localized within their respective cells, and the bounding potential's rate is, for all positions of the units, larger than the factor-potential event rate. In JF-V1.0, the constant cell-bounded event rate is determined for all pairs of cells on initialization (see Section 4.4.4). The resulting displacement may move the independent active unit outside its cell. The proposed candidate event will then however be preempted by a cell-boundary event and then trashed (see Section 3.2.1).

Cell-veto event handlers

Cell-veto event handlers (instances of a number of classes that inherit from the abstract CellVetoEventHandler class) realize sets of factors, rather than a single factor. The factor in-states (for each element of the set) are not transmitted with the candidate-event-time request. Instead, the branch of the independent active unit is an argument of the send event time method. The sampled factor in-state is transmitted with the out-state request. The cell-veto event handler implements Walker's algorithm [37] in order to sample one element in the set of factors in O(1) operations.

Cell-veto event handlers are instantiated with an estimator (see Section 4.6). In addition, they obtain a cell system which is read in through its initialize method (see Section 4.2). The estimator provides upper limits for the event rate (in the given direction of motion) for the independent active unit anywhere in one specific cell (called "zero-cell", see Section 4.3), and for a target unit in any other cell, except for a list of excluded cells. These upper limits can be translated from the zero-cell to any other active-unit cell, because of the homogeneity of the simulation box. In JF-V1.0, the cell systems for the cell-veto event handler can be on any level of the particles' tree representation (see Section 5.3.4, where a molecule-cell system tracks individual water molecules on the root level, while an oxygen-cell system tracks only the leaf nodes corresponding to oxygens). A Walker sampler is an instance of the Walker class in the event handler package. It provides the total event rate (total rate), which for a homogeneous periodic system is constant throughout a run.
On a candidate-event-time request, a cell-veto event handler computes a displacement, but no longer through the displacement method of a factor potential or a bounding potential, but simply as an exponential random number divided by the total event rate (the particularly simple send event time method of a cell-veto event handler is implemented in the abstract CellVetoEventHandler class; see [9] for a full description). The Walker sampler's sample cell method samples the cell of the target unit in O(1). It is returned, together with the candidate event time, as an argument of the send event time method. The out-state request is accompanied by the branch of the independent unit in the target cell, if it exists. Confirmation of events and, possibly, lifting are handled as in Section 3.1.2.

Event handlers for pseudo-factors or sets of pseudo-factors

The pseudo-factors of JF unify the description of the ECMC time evolution entirely in terms of events. The distinction between event handlers that realize pseudo-factors and those that realize sets of pseudo-factors remains crucial. In the former, the factor in-state is known at the candidate-event-time request. It is transmitted at this moment and kept in the memory of the event handler for use at the out-state request. For a set of pseudo-factors, the factor in-state can either not be specified at the candidate-event-time request, or would require transmitting too much data (one in-state per element of the set). It is therefore transmitted later, with the out-state request (see Fig. 9).

Cell-boundary event handler

In the presence of a cell-occupancy system, JF-V1.0 preserves consistency between the tracked particles of the global physical state and the corresponding units (which must belong to the same cell). This is enforced by a cell-boundary event handler, an instance of the CellBoundaryEventHandler class. This event handler has a single independent unit and realizes a pseudo-factor with a single identifier. A cell-boundary event leads to the internal state being updated (see Section 2.4). On instantiation, a cell-boundary event handler receives a cell system. (Each cell-occupancy system requires one independent cell-boundary event handler.) A candidate-event-time request by the mediator is accompanied by the in-state contained in a single branch and a single unit on the level tracked by the cell-occupancy system. An out-state request is met with the cell-level unit's position corresponding to the minimal position in the new cell.

Event handlers for sampling, end-of-chain, start-of-run, end-of-run

Sampling event handlers are instances of classes that inherit from the abstract SamplingEventHandler class. Sampling event handlers are expected to produce output (they inherit from the EventHandlerWithOutputHandler class and are connected, on instantiation, with their own output handler, which is used in the mediating method of this event handler). Several sampling event handlers may coexist in one run. Their output handler is responsible for computing physical observables at the sampling event time (see Section 2.6). JF-V1.0 implements sampling events as the time-slicing of all the active units. A sampling event handler thus realizes a set of single-unit pseudo-factors, and the in-state is not specified at the candidate-event-time request. In JF-V1.0, the candidate event times of the sampling event handler are equally spaced.
The out-state request is accompanied by branches of all independent active units, which are then all time-sliced simultaneously. Sampling candidate events are normally trashed only by themselves and by an end-of-run event.

End-of-chain event handlers are instances of classes that inherit from the abstract EndOfChainEventHandler class. They effectively stop one event chain and reinitialize a new one. This is often required for the entire run to be irreducible (see [9]). The end-of-chain event handler clearly realizes a set of pseudo-factors, rather than a single pseudo-factor (see Fig. 9a). An end-of-chain event handler implements a method to sample a new direction of motion. In addition, it implements a method to determine a new chain length (that gives the time of the next end-of-chain event) and, finally, the identifiers of the next independent active cnodes. For this, the end-of-chain event handler is aware of all the possible cnode identifiers (see Section 4.2). On an event-time request, the end-of-chain event handler returns the next candidate event time (computed from the new chain length) and the identifier of the next independent active cnode. The out-state request is accompanied by the current and the succeeding independent active units and their associated branches (see Fig. 9b). For the out-state, the event handler determines the next direction of motion (see Fig. 9c).

A start-of-run event handler (an instance of a class that inherits from the abstract StartOfRunEventHandler class) is the sole event handler whose presence is required. The start-of-run event is the first one to be committed to the global state, because its candidate event time is set equal to the initial time of the run (usually zero) and because the activator will initially only activate the start-of-run event handler. The start-of-run event handler serves two purposes. First, it sets the initial lifting state. Second, the activator uses the start-of-run event handler as an entry point. Its tag (the start of run tag in the configuration files of Section 5) is then used to determine the events that should be activated and created thereafter.

The end-of-run event handler (an instance of a class that inherits from the abstract EndOfRunEventHandler class) terminates a run by raising an end-of-run exception and thus ends the mediator loop. An end-of-run event handler is usually connected, on instantiation, with its own output handler. In JF-V1.0, its send event time method returns the total run time, which is taken from the configuration file. On the send out state request, all active units are time-sliced. The end-of-run output handler may further process the global state, which it receives via the mediating method of the end-of-run event handler.

Event handlers for rigid motion of composite point objects, mode switching

The event handlers of JF-V1.0 are generally suited for the rigid motion of composite point objects (root mode), that is, for independent non-leaf-node units (as implemented in Section 5.2.4). This is possible because all event handlers keep the branches of independent units consistent. As the subtree-node units of an independent-unit node move rigidly, the displacement alone is not irreducible. Mode switching into leaf mode (with single active leaf units) then becomes a necessity in order to have all factors be considered during one run and to assure the irreducibility of the implemented algorithm. In JF-V1.0, the corresponding event handlers are instances of the RootLeafUnitActiveSwitcher class.
On instantiation, they are specified to switch either from leaf mode to root mode or vice versa. These event handlers resemble the end-of-chain event handler, but only one of them is active at any given time. They provide a method to sample the new candidate event time based on the time stamp of the active independent unit at the time of its activation. An out-state request from one of these event handlers is accompanied by the entire tree of the current independent active unit of one mode and met with the tree of the independent active unit in the alternate mode.

JF run specifications and tools

The JF application relies on a user interface to select the physical system that is considered, and to fully specify the algorithm used to simulate it. Inside the application, some of these choices are made available to all modules (rather than having to be communicated repeatedly by the mediator). The application also relies on a number of tools that provide key features to many of its parts.

Configuration files, logging

The user interface for each run of the JF application consists in a configuration file that is an argument of the executable run.py script. It specifies the physical and algorithmic parameters (temperature, system shape and size, dimension, type of point masses and composite point objects, and also factors, factor potentials, lifting schemes, total run time, sampling frequency, etc.). A configuration file is composed of sections that each correspond to a class requiring input parameters. The [Run] section specifies the mediator and the setting. The ensuing sections choose the parameters in the init methods of the mediator and of the setting. Each section contains pairs of properties and values. The property corresponds to the name of the argument in the init method of the given class, and its value provides the argument (see Fig. 12). The content of the configuration file is parsed by the configparser module and passed to the JF factory (located in the base.factory module) in run.py. Standard Python naming conventions are respected in the classes built by the JF factory, which implies the naming conventions in the configuration file (see Section 6.3 for details). Within the configuration file, sections can be written in any order, but their explicit nesting is not allowed. The nesting is however implicit in the structure of the configuration file.

The JF application returns all output via files under the control of output handlers. Run-time information is logged (the Python logging module is used). Logged information can range from the identification of CPUs to the initialization information of classes, run-time information, etc. Logging output (to standard output or to a file) can take place on a variety of levels, from DEBUG to INFO to WARNING, that are controlled through arguments of run.py. An identification hash of the run is part of the logging output. It also tags all the output files, so that input, output and log files are uniquely linked (the Python uuid module is used).

Globally used modules

JF-V1.0 requires that all trees representing composite point objects are identical and of height at most two. Furthermore, in the NVT physical ensemble, the particle number, system size and temperature remain unchanged throughout each run. After initialization, as specified in the configuration file, these parameters are stored in the JF setting package and the modules therein. The setting package may be imported by all other modules, which can then autonomously construct identifiers.
Helper functions for periodic boundary conditions (if available) and for the sampling of random positions are also accessible. JF-V1.0 implements hypercubic and hypercuboid setting modules. Both settings define the inverse temperature and also the attributes of all possible particle identifiers, which are broadcast directly by the setting package. In contrast, the parameters of the physical system are accessed only using the modules of the specific setting (for example the setting.hypercubic setting module). The setting package and its modules are initialized by classes which inherit from the abstract Setting class. The HypercuboidSetting class defines only the hypercuboid setting; the HypercubicSetting class, however, sets up both the hypercubic and the hypercuboid setting modules, together with the setting package. This allows modules that are specifically implemented for a hypercuboid setting to be used with the hypercubic setting. Each setting can implement periodic boundaries, by inheriting from the abstract PeriodicBoundaries class and by implementing its methods. Since many modules of JF only rely on periodic boundaries but not on the specific setting, the setting package also gives access to the initialized periodic boundary conditions. Similarly, a function to create a random position is broadcast by the setting package. All the configuration files in Section 5 are for a three-dimensional cubic simulation box, that is, they use the hypercubic setting with dimension = 3.

Additional useful modules are located in the JF base package. The abstract Initializer class located in the initializer module enforces the implementation of an initialize method. This method must be called before other public methods of the inheriting class. The strings module provides functions to translate strings from snake to camel case and vice versa, as well as to translate a package path into a directory path. Helper functions for vectors, such as calculating the norm or the dot product, are located in the vectors module.

Cell systems and cell-occupancy systems

A cell-occupancy system is an instance of a class that inherits from the abstract CellOccupancy class, located in the activator. Any cell-occupancy system is associated with a cell system, itself an instance of a class that inherits from the abstract Cells class. In JF-V1.0, the cell system consists in a regular grid of cells that are referred to through their indices. Cells can be iterated over with the yield cells method. For a given cell, the excluded cells are accessed by the excluded cells method, the successor cell in a suitably defined direction by the successor method, and the lower and upper bound positions in each direction through the cell min and cell max methods (see Fig. 10a). Finally, the position to cell method returns the cell for a given position. Cell systems with periodic boundary conditions are described as periodic cell systems (instances of classes that inherit from the abstract PeriodicCells class, which itself inherits from the Cells class). Their zero cell property corresponds to the cell located at the origin. Their relative cell method receives a cell and a reference cell, and maps the cell, as seen from the reference cell, onto the equivalent cell as seen from the zero-cell. The inverse of this mapping is the translate method (see Fig. 10b). A cell-occupancy system (which is located in the activator) associates the identifiers of cell-based particles and of surplus particles with a cell.
It also stores active cells, that is, cells that contain an active unit (see Fig. 11). Cell-based and surplus particles in the state handler correspond to units with zero velocity, so that there is no real distinction between units and particles for them. The cell-occupancy system inherits from the abstract InternalState class and therefore provides getitem and update methods. The former returns a particle identifier based on a cell, whereas the latter updates the cell occupancies based on the currently active units. This keeps the internal state consistent with the global state. Moreover, the cell-occupancy system may iterate over surplus-particle identifiers via the yield surplus method. The active cells and the corresponding identifiers of the active units are generated using the yield active cells method (see Fig. 11).

Figure 11: Cell-occupancy system, an internal state of the activator, with active units accounted for differently from surplus and cell-based particles. Only a fixed number of cell-based particle identifiers are allowed per cell (here one per cell). Surplus-particle identifiers may be iterated over from the outside of the cell-occupancy system with a yield surplus method. In JF-V1.0, surplus particles form an internal dictionary mapping the cell onto the particle identifier.

JF-V1.0 implements the SingleActiveCellOccupancy class, which features only a single active cell and which keeps the active-unit identifier among its private attributes. The cell-based particle identifiers are stored in an internal occupant list, and surplus-particle identifiers are stored in an internal surplus dictionary mapping the cell indices onto the surplus-particle identifiers. A cell-occupancy system can address different levels of composite particles: one cell-occupancy system may track particles (and units) associated to root nodes, and another one particles that go with leaf nodes. This is set on initialization via the cell level property, which equals the length of the particle-identifier tuple. The concerned cell system is itself set on initialization. An indicator charge allows one to select specific particles on a given level for tracking. A single run can feature several internal states stored within the activator. These instances may rely on different cell-occupancy systems and cell systems. For consistency between internal states and the global state, each cell-occupancy system requires its own cell-boundary event handler.

Inter-particle potentials and bounding potentials

In JF, potentials play a dual role: they serve as factor potentials U_M for event handlers, but also as bounding potentials for other factor potentials U_M. Potentials are located in the JF potential package. They inherit from the abstract Potential class and provide a derivative method. They may in addition inherit from the abstract InvertiblePotential class, and must then provide a displacement method. In JF-V1.0, derivatives and displacements are with respect to the positive change of the active unit along one of the coordinates (indicated through the direction). For a potential U(r_ij) and direction = 0, the derivative is for example given by ∂U(r_ij)/∂x_i.
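This dual role can be sketched through the following minimal interface (illustrative only; the real classes carry more structure, and the toy potential below is not part of JF):

from abc import ABC, abstractmethod

class Potential(ABC):
    @abstractmethod
    def derivative(self, direction, separation, *charges):
        """Space derivative of the potential along the direction of motion of the active unit."""

class InvertiblePotential(Potential):
    @abstractmethod
    def displacement(self, direction, separation, *charges, potential_change):
        """Displacement of the active unit at which the cumulative (uphill) potential
        change reaches the sampled value."""

class ConstantSlopeToyPotential(InvertiblePotential):
    # Toy potential that grows linearly with the displacement of the active unit.
    def __init__(self, prefactor):
        self._k = prefactor

    def derivative(self, direction, separation, *charges):
        return self._k

    def displacement(self, direction, separation, *charges, potential_change=0.0):
        # Constant event rate: the displacement is the potential change divided by the slope.
        return potential_change / self._k

toy = ConstantSlopeToyPotential(2.0)
print(toy.displacement(0, (0.5, 0.0, 0.0), potential_change=1.0))  # -> 0.5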
Inverse-power-law potential, Lennard-Jones potential

The inverse-power-law potential (an instance of the InversePowerPotential class that inherits from the abstract InvertiblePotential class) concerns the separation vector r_ij = r_j − r_i (without periodic boundary conditions, in d-dimensional space) between a unit j and an active unit i, as

    U(c_i, c_j, r_ij) = c_i c_j k / |r_ij|^p.    (6)

Here, k and p > 0 correspond to the prefactor and power parameters set on initialization. The charges c_i and c_j are entered into the methods of the potential as parameters charge one and charge two. This allows one instance of the InversePowerPotential class to be used for different charges. The derivative method is straightforward, while the displacement method distinguishes the repulsive (c_i c_j k > 0) and the attractive (c_i c_j k < 0) cases.

The Lennard-Jones potential (an instance of the LennardJonesPotential class) implements

    U(r_ij) = k_LJ [ (σ/|r_ij|)^12 − (σ/|r_ij|)^6 ],

where r_ij = r_j − r_i is the separation vector (without periodic boundary conditions, in d-dimensional space) between a unit j and an active unit i, and where k_LJ and σ correspond to the parameters prefactor and characteristic length set on instantiation. This Lennard-Jones potential provides a straightforward derivative method. Its displacement method relies on an algebraic inversion.

Displaced-even-power-law potential

An instance of the DisplacedEvenPowerPotential class that inherits from the abstract InvertiblePotential class, the displaced-even-power-law potential concerns an even power of the displaced absolute separation |r_ij| − r_0, where r_ij = r_j − r_i is the separation vector (without periodic boundary conditions, in d-dimensional space) between a unit j and an active unit i. Here, k_depp > 0, p ∈ {2, 4, 6, ...}, and r_0, respectively, are the prefactor, power, and equilibrium-separation parameters set on instantiation. The derivative and displacement methods are provided analytically.

Merged-image Coulomb potential and bounding potential

An instance of the MergedImageCoulombPotential class that inherits from the abstract Potential class, the merged-image Coulomb potential is defined for a separation vector r_ij = r_j − r_i (with periodic boundary conditions, in three-dimensional space) between a unit j and an active unit i as

    U(c_i, c_j, r_ij) = c_i c_j Σ_{n ∈ Z³} 1 / |r_ij + (n_x L_x, n_y L_y, n_z L_z)|,    (9)

where L = (L_x, L_y, L_z) are the sides of the three-dimensional simulation box with periodic boundary conditions. The conditionally convergent sum in eq. (9) can be consistently defined in terms of "tin-foil" boundary conditions [21]. It then yields an absolutely convergent sum, partly in real space and partly in Fourier space (see [9, Sect. IIIA]), with α a tuning parameter and q = 2πm/L, m ∈ Z³. JF-V1.0 provides this class for a cubic simulation box, with parameters that are optimized to reach machine precision for its derivative method. Summations over n and m are taken within spherical cutoffs, namely for all |n| ≤ position cutoff and |m| ≤ fourier cutoff, with the term m = (0, 0, 0) excluded. (The potential in eq. (10) differs from the tin-foil Coulomb potential in a constant self-energy term that does not influence the derivatives.)

The merged-image Coulomb potential is not invertible. When it serves as a factor potential, bounding potentials provide the required displacement method. JF-V1.0 provides a merged-image Coulomb bounding potential as an instance of the InversePowerCoulombBoundingPotential class, with

    U_b(c_i, c_j, r_ij) = c_i c_j k_b / |r_ij,0|.

Here, r_ij,0 is the minimum separation vector, that is, the vector between r_i and the closest image of r_j under the periodic boundary conditions.
(The merged-image Coulomb bounding potential thus involves no sum over periodic images.) The constant k_b is chosen so that the factor-potential event rate is bounded. A constant k_b of 1.5836 (the parameter prefactor) is appropriate for a cubic simulation box. The merged-image Coulomb bounding potential is closely related to the inverse-power-law potential of eq. (6) with p = 1, although the restriction to the minimum separation vector means that the latter cannot be used directly.

Cell-based bounding potential

A cell-based bounding potential is an instance of a class that inherits from the abstract InvertiblePotential class. It bounds the derivative of the factor potential inside certain cell regions by constants. These constants can be computed analytically on demand or even sampled using a separate Monte Carlo algorithm. On initialization, a cell-based bounding potential receives an estimator (see Section 4.6). The information about the cell system is also transmitted. Then, the cell-based bounding potential iterates over all pairs of cells (making use of periodic boundary conditions) and determines, using the estimator, upper and lower bounds on the derivative for factor units located in those cells, for each possible direction of motion. The cell-based bounding potential is not applied to excluded cells, where the cell-bounded event rate diverges, is simply too large, or is otherwise inappropriate. The constant-derivative bound leads to a piecewise linear invertible bounding potential. The call of the displacement method is accompanied by the direction of motion, the charge product, the sampled potential change and the cell separation. In JF-V1.0, any cell-based bounding potential requires a cell-boundary event handler, which detects when the displacement proposed by the displacement method in fact takes place outside the cell for which it is computed.

Three-body bending potential

The SPC/Fw water model of Section 5.3 includes a bending potential (an instance of the BendingPotential class), which describes the fluctuations in the bond angle within each molecule. For the three units i, j, and k within such a molecule in three-dimensional space (with j being the oxygen), it is given by the harmonic form

    U(r_ij, r_jk) = (k_b / 2) [φ_{i,j,k}(r_ij, r_jk) − φ_0]².

Here, φ_{i,j,k}(r_ij, r_jk) denotes the internal angle between the two hydrogen-oxygen legs. The constants k_b and φ_0 are set on initialization of the potential (see [9]). The derivative method is provided explicitly for this potential, which is however not invertible. In JF-V1.0, the associated bounding potential is constructed dynamically by an event handler, which builds a piecewise linear bounding potential on the fly. Here, the event handler speculates on a constant bounding event rate between two subsequent time-sliced positions of the active unit:

    q_bounding = max{q(r), q(r + vΔt)} + const,

where q(r) is the potential derivative at r. The interval length |vΔt| and the constant offset are input from the configuration file. Fine-tuning provides an efficient bounding potential that does not underestimate the event rate, yet limits the fraction of unconfirmed events.

Lifting schemes

Event handlers with more than two independent units require a lifting scheme (an instance of a class that inherits from the Lifting class). The event handler calls a method of the lifting scheme to compute its out-state. At first, the event handler prepares factor derivatives of relevant time-sliced units.
The derivative table (see [9, Figs 2 and 10]) is filled with unit identifiers, factor derivatives and activity information through its insert method. Finally, the event handler calls the get active identifier method, which returns the identifier of the next independent active unit. The lifting scheme's reset method deletes the derivative table. It is called before the first derivative is inserted. JF-V1.0 implements the ratio, inside-first and outside-first lifting schemes for a single independent active unit (see [9, Sect. IV]).

Estimator

Estimators (instances of a class that inherits from the abstract Estimator class) determine upper and lower bounds on the factor derivative in a single direction, between a minimum and a maximum corner of a hypercuboid of possible separations. For this, they provide the derivative bound method. Both upper and lower bounds are useful when the potential can have either positive or negative charge products (as happens, for example, for the merged-image Coulomb potential as a function of the two charges). In general, an estimator compares the factor derivatives for different separations in the hypercuboid to obtain the bounds. These are corrected by a prefactor and optionally by an empirical bound, which are set on instantiation (together with the factor potential). JF-V1.0 provides estimators which consider either regularly or randomly sampled separations within the hypercuboid. The inner-point and boundary-point estimators vary the separation evenly within the hypercuboid or on the edge of the hypercuboid, respectively. For these separations, the factor-potential derivatives (optionally including charges) are compared. Two more estimators consider the interaction between a charged active unit and two oppositely charged target units within a dipole. Here, the factor derivative is summed for the two possible active-target pairs. A Monte Carlo estimator distributes both the separation and the dipole orientation randomly. The dipole-inner-point estimator varies the separations evenly but aligns the dipole orientation along the direction of the gradient of the factor derivative. The implemented estimators are appropriate for the cookbook examples of Section 5, where the upper and lower bounds on the factor derivatives (and equivalently on the event rates) must be computed for a small number of cell pairs only.

JF Cookbook

The configuration files in JF-V1.0 introduce the key features of the application by constructing runs for two charged point masses, for two interacting dipoles of charges, and for two interacting water molecules (using the SPC/Fw model). All configuration files are for a three-dimensional cubic simulation box with periodic boundary conditions, and they reproduce published data [9]. As specified in their [Run] sections, the configuration files use a single-process mediator (an instance of the SingleProcessMediator class), and the setting package is initialized by an instance of the HypercubicSetting class (see for example Fig. 12). All configuration files in the directory use a heap scheduler (an instance of the HeapScheduler class), a tree state handler (an instance of the TreeStateHandler class), as well as a tag activator (an instance of the TagActivator class) in order to activate event handlers, trash candidate events and prepare in-states.
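A schematic fragment, in the spirit of Fig. 12, illustrates this common structure; the section layout follows the classes named above, but the property names and values are placeholders rather than verbatim cookbook content. Such a fragment can be parsed with the configparser module:

import configparser

example = """
[Run]
mediator = single_process_mediator
setting = hypercubic_setting

[HypercubicSetting]
beta = 1.0
dimension = 3

[SingleProcessMediator]
state_handler = tree_state_handler
scheduler = heap_scheduler
activator = tag_activator
"""

parser = configparser.ConfigParser()
parser.read_string(example)
print(parser["Run"]["setting"])                        # -> hypercubic_setting
print(parser["HypercubicSetting"].getfloat("beta"))    # -> 1.0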
JF Cookbook The configuration files in JF-V1.0 introduce the key features of the application by constructing runs for two charged point masses, for two interacting dipoles of charges, and for two interacting water molecules (using the SPC/Fw model). All configuration files are for a three-dimensional cubic simulation box with periodic boundary conditions, and they reproduce published data [9]. As specified in their [Run] sections, the configuration files use a single-process mediator (an instance of the SingleProcessMediator class), and the setting package is initialized by an instance of the HypercubicSetting class (see for example Fig. 12). All configuration files in the directory use a heap scheduler (an instance of the HeapScheduler class), a tree state handler (an instance of the TreeStateHandler class), as well as a tag activator (an instance of the TagActivator class) in order to activate event handlers, trash candidate events and prepare in-states. The start of run, end of run, end of chain, and sampling event handlers (which realize common pseudo-factors) are implemented in largely analogous sections across all the configuration files, although their parent sections (which define the corresponding taggers) provide different tag lists for the trashing and activation of event handlers. The corresponding tagger sections are presented in detail in Section 5.1.1, and only briefly summarized thereafter. Interacting atoms The configuration files in the coulomb atoms directory of JF-V1.0 implement the ECMC sampling of the Boltzmann distribution for two identical charged point masses. They interact with the merged-image Coulomb pair potential and are described by a Coulomb pair factor. One of the two point masses is active, and it moves either in +x, in +y, or in +z direction. Statistically equivalent output is obtained for the merged-image Coulomb pair potential (the factor potential) associated with the inverse-power bounding potential (Section 5.1.1), or else with a cell-based bounding potential, either realized directly (Section 5.1.2) or through a cell-veto event handler (Section 5.1.3). Although the configuration files use the language of Section 1.2 for the representation of particles, all trees and branches are trivial, and each root node is also a leaf node. Atomic factors, inverse-power Coulomb bounding potential The configuration file coulomb atoms/power bounded.ini implements a single Coulomb pair factor with the merged-image Coulomb factor potential that is associated with its inverse-power Coulomb bounding potential. The same event handler realizes this factor for any separation of the point masses. The activator requires no internal state. Although it would be feasible to directly implement (that is, hard-wire) all event handlers for this simple system, the tag activator is used. All event handlers are thus accessed via taggers that are listed, together with their tags, in the [TagActivator] section (see Fig. 13 for a tree representation of the sections). The coulomb tagger is an instance of the FactorTypeMapInStateTagger class, indicating that its event handlers require a specific in-state created from a pattern stored in a file indicated in the [FactorTypeMaps] section. This pattern mirrors the factor index sets and factor types for a system with two root nodes (see eq. (2)). The entry [0, 1], Coulomb in this file indicates that, for two point masses, a Coulomb potential acts between particles 0 and 1. From this information, the tagger's yield identifiers send event time method generates all the in-state identifiers for any number of point masses. The [Coulomb] section specifies the input for the coulomb tagger's tag lists (the creates list and the trashes list). Here, a coulomb event creates and trashes only coulomb candidate events (see the configuration file of Section 5.3.1 for different tag lists for the same coulomb event handlers). The [Coulomb] section further specifies that the coulomb event handler is an instance of the TwoLeafUnitBoundingPotentialEventHandler class and that, for two point masses, only one coulomb event handler is needed. The corresponding section specifies the factor potential to be an instance of the MergedImageCoulombPotential class, and the bounding potential to be an instance of the InversePowerCoulombBoundingPotential class.
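The interplay of factor and bounding potential in such an event handler follows the standard ECMC thinning pattern, sketched here under simplifying assumptions (one-dimensional motion along the direction of travel, hypothetical callables, strictly positive bounding derivative; this is not JF's actual control flow):

```python
import math
import random

def displacement_until_confirmed_event(factor_derivative, bounding_derivative,
                                       bounding_displacement, position, beta=1.0):
    """Propose candidate events from the invertible bounding potential and
    confirm each one with probability q_factor/q_bounding; the returned
    displacement of the active unit marks the confirmed event."""
    total_displacement = 0.0
    while True:
        # Inversion: displacement after which the bounding potential has
        # increased by the sampled potential change -ln(u)/beta.
        sampled_change = -math.log(1.0 - random.random()) / beta
        delta = bounding_displacement(position, sampled_change)
        position += delta
        total_displacement += delta
        q_factor = max(0.0, factor_derivative(position))
        q_bounding = bounding_derivative(position)  # >= q_factor by construction
        if random.random() * q_bounding < q_factor:
            return total_displacement  # confirmed event: trigger the lifting
        # unconfirmed event: continue the search from the new position
```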
The sampling, end of chain, start of run and end of run taggers are all instances of the NoInStateTagger class (their event handlers require no in-state); they also provide their event handlers and their tag lists, which are then transmitted to the tag activator. Each of these taggers' yield identifiers send event time methods yields the in-state identifiers needed by the tagger's event handlers in order to realize the corresponding factors or pseudo-factors. The configuration file's [InputOutputHandler] section specifies the input-output handler. It consists of the separation-output handler (an instance of the SeparationOutputHandler class), which is connected to the sampling event handler. In the present example, it samples the nearest-image separation (under periodic boundary conditions) of any two point masses. The initial global physical state is created randomly by the random-input handler (an instance of the RandomInputHandler class). The configuration file coulomb atoms/power bounded.ini reproduces published data (see Fig. 14, 2). The configuration file coulomb atoms/power bounded.ini can be modified for N point masses. In the [RandomInputHandler] section, the number of root nodes must then equal N. In the [Coulomb] section, the number of event handlers must be set to at least N − 1 (this instructs the coulomb tagger to deepcopy the required number of event handlers). Without changing the factor-type map with respect to the N = 2 case, each event handler will be presented with the correct in-state corresponding to a pair of units, one of them being the active unit. The complexity of the implemented algorithm is O(N) per event.
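How the unchanged factor-type map yields O(N) in-states per event can be illustrated with a hypothetical generator (not the actual yield identifiers send event time method):

```python
def yield_coulomb_in_states(active_identifier, root_node_identifiers):
    """For the factor-type-map entry '[0, 1], Coulomb', every pairing of a
    target unit with the active unit forms an in-state, so each event
    requires O(N) candidate-event computations."""
    for identifier in root_node_identifiers:
        if identifier != active_identifier:
            yield (active_identifier, identifier)

# For three point masses with active unit (0,):
# list(yield_coulomb_in_states((0,), [(0,), (1,), (2,)]))
# == [((0,), (1,)), ((0,), (2,))]
```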
Atomic factors, cell-based bounding potential The configuration file coulomb atoms/cell bounded.ini implements a single Coulomb pair factor with the merged-image Coulomb potential, just as the configuration file of Section 5.1.1. However, a cell-occupancy internal state associates the factor potential with a cell-based bounding potential. The target (non-active) unit may be cell-based or surplus (see Fig. 11). The target unit may also be in an excluded nearby cell of the active cell (see Fig. 10), for which the cell-based bounding potential cannot be used. In consequence, three taggers correspond to distinct event handlers that together realize the Coulomb pair factor. The consistency requirement of JF-V1.0 assures that particles and units are always associated with the same cell. Taggers and their tags are listed in the [TagActivator] section. The coulomb cell bounding tagger, for example, appears as an instance of the CellBoundingPotentialTagger class. The coulomb cell bounding event handler realizes the Coulomb factor unless the cell of the target particle is excluded with respect to the active cell or the target is a surplus particle (in these cases, the tagger does not generate any in-state for its event handler). Otherwise, the Coulomb pair factor is realized by a coulomb surplus event handler or by a coulomb nearby event handler. (For two units, as the active unit is taken out of the cell-occupancy system, no surplus candidate events are ever created.) The cell-occupancy system (an instance of the SingleActiveCellOccupancy class) is also declared in the [TagActivator] section and further specified in the [SingleActiveCellOccupancy] section. The associated cell system is described in the [CuboidPeriodicCells] section. The internal state, set in the [SingleActiveCellOccupancy] section, has no charge value. This indicates that the identifiers of all particles at the cell level (here cell level = 1) are tracked (see Section 5.3.2 for an example where this is handled differently). The coulomb nearby tagger, an instance of the ExcludedCellsTagger class, yields the identifiers of particles in excluded cells of the active cell, by iterating over these cells and checking whether they contain appropriate identifiers. The coulomb surplus tagger similarly relies on the yield surplus method of the cell-occupancy system to generate in-states. To keep the internal state consistent with the global state, a cell-boundary event handler is used in the CellBoundaryTagger class (together, this builds cell-boundary candidate events). The cell-boundary tagger just yields the active-unit identifier as the in-state used in the corresponding event handler. The configuration file coulomb atoms/cell bounded.ini reproduces published data (see Fig. 14, 3). To adapt the configuration file for N > 2 point masses (from the N = 2 case that is provided), the number of root nodes must be set to N in the [RandomInputHandler] section. The numbers of coulomb cell bounding, coulomb nearby, and coulomb surplus event handlers must be increased. Surplus particles can now exist. The number of event handlers to allow for depends on the cell system, whose parameters must be adapted in order to limit the number of surplus particles, and also to retain useful cell-based bounds for the Coulomb event rates. Atomic factors, cell-veto The configuration file coulomb atoms/cell veto.ini implements a Coulomb pair factor together with the merged-image Coulomb potential. A cell-occupancy internal state is used. The Coulomb pair factor is then realized, among others, by a cell-veto event handler, which associates the merged-image Coulomb potential with a cell-based bounding potential. All the Coulomb pair factors of the active particle with target particles that are neither excluded nor surplus are taken together in a set of Coulomb factors, and realized by a single coulomb cell veto event handler. The candidate event time can be calculated with the branch of the active unit as the in-state, which is implemented in the CellVetoTagger class. (The cell-veto tagger returns the identifier of the active unit.) The event handler returns the target cell (in which the candidate unit is to be localized) together with the candidate event time. The out-state request is accompanied by the branch of the target unit (if it exists), and the out-state computation is in analogy with the case studied in Section 5.1.2. The configuration file features the coulomb cell veto tag together with the coulomb nearby, coulomb surplus, cell boundary, sampling, end of chain, start of run, and end of run tags. The configuration file reproduces published data (see Fig. 14, 4). To adapt the configuration file for N point masses, the number of root nodes must be set to N in the [RandomInputHandler] section. The numbers of event handlers for the coulomb nearby and coulomb surplus events might have to be increased. However, a single cell-veto event handler realizes any number of factors with cell-based target particles, whereas in Section 5.1.2 each of them required its own event handler.
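The essence of the cell-veto sampling can be sketched as follows (hypothetical names; the cell-veto algorithm of [9] samples the triggering cell with Walker's alias method in O(1), whereas the cumulative-table search below is logarithmic but shorter to write):

```python
import bisect
import itertools
import math
import random

class CellVetoSampler:
    """Sample candidate event times from the summed cell-bounded event
    rates, together with the target cell that triggers the event."""

    def __init__(self, cell_rates):
        # cell_rates maps every non-excluded cell to its bounded event rate.
        self.cells = list(cell_rates)
        self.cumulative = list(itertools.accumulate(cell_rates[cell]
                                                    for cell in self.cells))
        self.total_rate = self.cumulative[-1]

    def candidate(self):
        # Candidate event time from the total bounded rate ...
        time_increment = -math.log(1.0 - random.random()) / self.total_rate
        # ... and the triggering target cell, chosen proportionally to its rate.
        position = random.random() * self.total_rate
        cell = self.cells[bisect.bisect_left(self.cumulative, position)]
        return time_increment, cell
```

A confirmed event then only occurs with a probability given by the ratio of the actual factor event rate (for the concrete target unit in the sampled cell) to the cell-bounded rate, exactly as for the other bounding potentials.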
Interacting dipoles The configuration files in the dipoles directory of JF-V1.0 implement the ECMC sampling of the Boltzmann distribution for two identical finite-size dipoles, for a model that was introduced previously [9]. Point masses in different dipoles interact via the merged-image Coulomb potential (pairs 1-3, 1-4, 2-3, 2-4 in Fig. 15). Point masses within each dipole interact with a short-ranged potential (pairs 1-2 and 3-4). A repulsive short-range potential between oppositely charged atoms in different dipoles counterbalances the attractive Coulomb potential at small distances (pairs 1-4 and 2-3). Each dipole is a composite point object made up of two oppositely charged point masses. It is represented as a tree with one root node that has two children. The number of root nodes in the system is set in the [RandomInputHandler] section of the configuration file, where the dipoles are created randomly through the fill root node method in the DipoleRandomNodeCreator class. In the setting package, the input handler specifies that there are two root nodes (number of root nodes = 2). Each of them contains two nodes (which is coded as number of nodes per root node = 2), and the number of node levels is two (number of node levels = 2). As these numbers are set in the setting package, all the JF modules can autonomously construct all possible particle identifiers. Statistically equivalent output is obtained for pair factors for all interactions (Section 5.2.1), for dipole-dipole Coulomb factors and their factor potential associated with a cell-based bounding potential (Section 5.2.2), for dipole-dipole Coulomb factors with the cell-veto algorithm (Section 5.2.3), and by alternating between concerted moves of entire dipoles and moves of the individual point masses (Section 5.2.4). The latter example showcases the collective-motion possibilities of ECMC integrated into JF. All configuration files here implement the short-ranged potential as an instance of the DisplacedEvenPowerPotential class with power = 2, and the repulsive short-range potential as an instance of the InversePowerPotential class with power = 6. Atomic Coulomb factors The configuration file dipoles/atom factors.ini implements, for each concerned pair of point masses, a Coulomb pair factor with the merged-image Coulomb potential associated with the inverse-power Coulomb bounding potential. Several event handlers that are instances of the same class realize these factors, and the number of event handlers must scale with their number. No internal state is declared. Pair factors are implemented for each pair of point masses that interact with a harmonic or a repulsive potential. One of the four point masses is active at each time, and it moves either in +x, in +y, or in +z direction. The configuration file represents composite point objects as trees with two levels (see Section 1.2). Positions and velocities are kept consistent on both levels, although the root-unit properties are not made use of. The tree structure only serves to identify leaf units on the same dipole. In the configuration file, taggers and tags are listed in the [TagActivator] section. The coulomb, harmonic, and repulsive taggers are separate instances of the same FactorTypeMapInStateTagger class, and the corresponding sections set up the corresponding event handlers. Both the harmonic and the repulsive event handlers are instances of the TwoLeafUnitEventHandler class. Aliasing nevertheless assures a tree-structured configuration file (the harmonic tagger is, for example, declared with a HarmonicEventHandler class, which is an alias for the TwoLeafUnitEventHandler class). The coulomb tagger and its event handlers are treated as in Section 5.1.1.
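The tree representation of such a composite point object can be sketched with a hypothetical Node type (JF's actual node and unit classes are richer):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical stand-in for JF's tree nodes: an identifier, a charge
    for leaf units, and children for composite point objects."""
    identifier: tuple
    charge: float = 0.0
    children: list = field(default_factory=list)

def create_dipole(root_index, elementary_charge=1.0):
    """One root node with two oppositely charged leaf children, matching
    number of node levels = 2 and number of nodes per root node = 2."""
    return Node(identifier=(root_index,), children=[
        Node(identifier=(root_index, 0), charge=+elementary_charge),
        Node(identifier=(root_index, 1), charge=-elementary_charge)])

# Two dipoles (number of root nodes = 2) with root identifiers (0,) and (1,);
# their leaf identifiers are (0, 0), (0, 1), (1, 0) and (1, 1).
system = [create_dipole(0), create_dipole(1)]
```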
The sampling, start-of-run, end-of-run and end-of-chain pseudo-factors are realized by event handlers that are set up in the same way as in all other configuration files. However, the parent sections differ: the parent of the [InitialChainStartOfRunEventHandler] section sets the start of run tagger, which specifies that after the start of run event, new coulomb, harmonic, repulsive, sampling, end of chain, and end of run event handlers must be activated. The tag lists thus differ from those of the [StartOfRun] section in other configuration files. The configuration file dipoles/atom factors.ini reproduces published data (see Fig. 15, 2). Molecular Coulomb factors, cell-based bounding potential The configuration file dipoles/cell bounded.ini implements a Coulomb four-body factor for each pair of dipoles. (The sum of the merged-image Coulomb potentials for the pairs 1-3, 1-4, 2-3, 2-4 in Fig. 15 constitutes the Coulomb factor potential.) The event rates for such factors decay much faster with distance than for Coulomb pair factors, and the chosen lifting scheme considerably influences the dynamics (see [9, Sect. IV]). The configuration file installs a cell-occupancy internal state on the dipole level (rather than for the point masses). A cell-bounded event handler then realizes a Coulomb four-body factor, with its factor potential associated with an orientation-independent cell-based bounding potential, for dipole pairs that are not in excluded cells relative to each other. The configuration file furthermore implements pair factors for the harmonic and the repulsive interactions. One of the four point masses is active at each time, and it moves either in +x, in +y, or in +z direction. The configuration file's [TagActivator] section defines all taggers and their corresponding tags. Among the taggers for event handlers realizing the Coulomb four-body factor, the coulomb cell bounding tagger differs markedly from the set-up in Section 5.1.2, as the event handler is for a pair of composite point objects. The lifting scheme is set to inside first lifting. The bounding potential is defined in the [CellBoundingPotential] section. A dipole Monte Carlo estimator is used for simplicity (see Section 4.6). As it obtains an upper bound for the event rate from random trials for each relative cell orientation, its use is restricted to cell systems with only a small number of cells. The coulomb nearby and coulomb surplus taggers are for event handlers realizing the Coulomb four-body factor when the bounding potential cannot be used. In this case, the merged-image Coulomb potential is summed up for the factor potential, but also for the bounding potential. The standard sampling, end of chain, end of run, and start of run taggers, as well as the ones responsible for the harmonic and repulsive potentials, are set up in a similar way as in Section 5.2.1. The [TagActivator] section defines the internal state that is used by the coulomb cell bounding, coulomb nearby, and coulomb surplus taggers. The [SingleActiveCellOccupancy] section specifies the cell level (cell level = 1 indicates that the tracked particle identifiers have length one, corresponding to root nodes, rather than length two, which would correspond to the dipoles' leaf nodes). Positions and velocities must thus be kept consistent on both levels. The cell-occupancy system requires the presence of a cell-boundary event handler, again on the level of the root nodes.
This event handler is aware of the cell level, and it ensures consistency of the events triggered by the cell-based bounding potential with the underlying cell system. The configuration file dipoles/cell bounded.ini reproduces published data (see Fig. 15, 3). Molecular Coulomb factors, cell-veto The configuration file dipoles/cell veto.ini implements the same factors and pseudo-factors and the same internal state as the configuration file of Section 5.2.2. A single cell-veto event handler then realizes the set of factors relating to non-excluded cells for any number of cell-based particles, whereas in the earlier implementation, the number of cell-bounded event handlers must exceed the possible number of particles in non-excluded cells of the active cell. This is what allows ECMC to be implemented with a complexity of O(1) per event. The configuration file resembles that of Section 5.2.2. It mainly replaces the latter file's coulomb cell bounding event handlers with a coulomb cell veto event handler. Slight differences reflect the fact that a cell-veto event handler uses no displacement method of the bounding potential but obtains the displacement from the total event rate (see the discussion in Section 3.1.3). The configuration file dipoles/cell veto.ini reproduces published data (see Fig. 15, 4). Atomic Coulomb factors, alternating root mode and leaf mode The configuration file dipoles/dipole motion.ini implements two different modes. In leaf mode, at each time one of the four point masses is active, and it moves either in +x, in +y, or in +z direction (see Fig. 16a). In root mode, at each time the point masses of one dipole move as a rigid block, in the same direction (see Fig. 16b). (The root mode, by itself, does not assure irreducibility of the Markov-chain algorithm, as the orientation and shape of any dipole molecule would remain unchanged throughout the run.) JF-V1.0 represents the dipoles as trees, and both modes are easily implemented. In leaf mode, the Coulomb factors are realized by coulomb leaf event handlers that are instances of the same class as the coulomb nearby event handlers in Sections 5.2.2 and 5.2.3. The root mode, in turn, is patterned after the simulation of two point masses (as in Section 5.1.1): all inner-dipole potentials are constant. The inter-dipole Coulomb potentials sum up to an effective two-body potential, the factor potential of a two-body factor realized in a Coulomb-dipole event handler. The repulsive short-range potential between oppositely charged atoms in different dipoles likewise translates into a potential between the dipoles in rigid motion, and serves as the factor potential of a two-body factor, realized in a specific event handler.
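In root mode, since all point masses of the active dipole move with the same velocity, the derivative of the effective two-body potential is simply the sum of the leaf-pair derivatives; a hedged sketch with hypothetical helper names:

```python
def root_mode_derivative(pair_derivative, active_leaf_positions,
                         target_leaf_positions, velocity):
    """Derivative of the effective inter-dipole potential along the common
    velocity of the rigidly moving active dipole: the sum of the pair
    derivatives over all leaf units of the two composite point objects."""
    return sum(pair_derivative(active, target, velocity)
               for active in active_leaf_positions
               for target in target_leaf_positions)
```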
Taggers and their tags are listed in the [TagActivator] section. The harmonic leaf and repulsive leaf (leaf-mode) taggers, as well as all those related to event handlers that realize pseudo-factors, are as in Section 5.2.1. The coulomb leaf tagger corresponds to the coulomb nearby tagger in Section 5.2.2. The coulomb root and repulsive root taggers are analogous to those in Section 5.1.1 for the two-atom case. Like all other operations that take place in JF, the switches between leaf mode and root mode are also formulated as events. They are related to two pseudo-factors and realized by a leaf to root event handler and by a root to leaf event handler, respectively. (These two event handlers are aliases for instances of the RootLeafUnitActiveSwitcher class.) The root to leaf and leaf to root taggers, in addition to the create and trash lists, set up separate activate and deactivate lists (see Section 2.4). The configuration file reproduces published data (see Fig. 15, 5). Of particular interest is that the tree representation of composite point objects keeps leaf-node units and root-node units consistent: the event handlers return branches of cnodes for all independent units (see Fig. 7), whose unit information can be integrated into the global state. Interacting water molecules (SPC/Fw model) The configuration files in the water directory implement the ECMC sampling of the Boltzmann distribution for two water molecules, using the SPC/Fw model that was previously studied with ECMC [9]. Molecules are represented as composite point objects with three charged point masses, one of which is negatively charged (representing the oxygen) while the two others are positively charged (representing the hydrogens). Point masses in different water molecules interact via the merged-image Coulomb potential. In addition, point masses within each molecule interact through a three-body bending interaction and a harmonic oxygen-hydrogen potential. Finally, any two oxygens interact through a Lennard-Jones potential [9]. In the tree state handler (defined in the [TreeStateHandler] section, a child of the [SingleProcessMediator] section), water molecules are represented as trees with a root node and three children (the leaf nodes of the tree). The total number of water molecules (that is, of root nodes) is set in the [RandomInputHandler] section of each configuration file. The molecules are created through the fill root node method in the WaterRandomNodeCreator class. There are two node levels (number of node levels = 2) and three nodes per root node (number of nodes per root node = 3). The charges of a molecule are set in the [ElectricChargeValues] section (a descendant of the [WaterRandomNodeCreator] section). All the configuration files in the water directory of JF-V1.0 implement the pair harmonic factors that are realized through harmonic event handlers. The corresponding taggers are defined in the [Harmonic] sections, with the displaced even-power potential and its parameters set in the [HarmonicEventHandler] and [HarmonicPotential] sections. The configuration files furthermore implement the taggers corresponding to the three-body bending factors in their [Bending] sections. The bending event handler has three independent units (attached to branches). It thus requires a lifting scheme (which is chosen in the [BendingEventHandler] section), which is however unique (see [9, Fig. 2]). In all these configuration files, one of the six point masses is active, and it moves either in +x, in +y, or in +z direction (the optional rigid displacement of the entire water molecule could be set up as in Section 5.2.4). Statistically equivalent output is obtained for a simple set-up featuring pair factors for the Coulomb potential and an inverted Lennard-Jones interaction (Section 5.3.1), or for a molecular-factor Coulomb potential associated with a power-law bounding potential and a cell-based Lennard-Jones bounding potential (Section 5.3.2). In addition, the cell-veto algorithm for the Coulomb potential coupled to an inverted Lennard-Jones potential (Section 5.3.3) is also provided. Finally, cell-veto event handlers take part in the realization of complex molecular Coulomb factors and also realize Lennard-Jones factors between oxygens (Section 5.3.4).
This illustrates how multiple independent cell-occupancy systems may coexist within the same run. Atomic Coulomb factors, Lennard-Jones inverted The configuration file water/coulomb power bounded lj inverted.ini implements pair Lennard-Jones, harmonic and Coulomb factors. The Coulomb factors are realized for any separation of the point masses by event handlers that associate the merged-image Coulomb potential with its inverse-power Coulomb bounding potential. The Lennard-Jones potential is inverted. This configuration file needs no internal state. In the configuration file, the [TagActivator] section lists all the taggers together with their tags, which, in addition to the taggers related to pseudo-factors, are reduced to coulomb, harmonic, bending, and lennard jones. The merged-image Coulomb potential with its associated power-law bounding potential (both for attractive and repulsive charge products) is specified in the [Coulomb] section of the configuration file. The Lennard-Jones potential is invertible, and its own displacement method is used rather than that of a bounding potential. The output handler is defined in the [OxygenOxygenSeparationOutputHandler] section, a child of the [InputOutputHandler] section. It obtains all the units, extracts the oxygens through their unit identifiers, and records the oxygen-oxygen separation distance. This reproduces published data (see Fig. 17, 2). Molecular Coulomb factors, Lennard-Jones cell-bounded The configuration file water/coulomb power bounded lj cell bounded.ini for the water system corresponds to pair factors for the Lennard-Jones and the harmonic potentials and to molecular factors for the Coulomb interaction. The Coulomb factor potential is the sum of the merged-image Coulomb potentials for the nine relevant pairs of point masses (pairs across two molecules). It is realized in a particular event handler, analogously to how this is done for the Coulomb interaction in Sections 5.2.2 and 5.2.3. The associated bounding potential (both for attractive and repulsive charge combinations) is given by the sum over all the individual pairs. Although the Lennard-Jones interaction can be inverted, the configuration file sets up a cell-occupancy internal state that tracks the identifiers of the oxygens. As in previous cases, this leads to three types of events, corresponding to the nearby, surplus, and cell-based particles, in addition to cell-boundary events. Taggers and their tags are listed in the [TagActivator] section, and they are generally utilized as in other configuration files. The internal state is specified in the [TagActivator] section. As set up in the [SingleActiveCellOccupancy] section, it features an oxygen indicator charge (set in the [OxygenIndicator] section). The oxygen-indicator charge is non-zero only for the oxygens. In consequence, the oxygen cell system (defined in the [OxygenCell] section) tracks only oxygens. This reproduces published data (see Fig. 17, 3). Nevertheless, this configuration file does not scale up easily with system size. Molecular Coulomb cell-veto, Lennard-Jones inverted The configuration file water/coulomb cell veto lj inverted.ini for the water system corresponds to the same factors as in Section 5.3.2. As a preliminary step towards the treatment of all long-range interactions with the cell-veto algorithm in Section 5.3.4, molecular Coulomb factors are realized here (for non-excluded cells of the active cell) with a cell-veto event handler.
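The invertibility of the Lennard-Jones potential that these configuration files rely on follows from its being quadratic in (σ/r)^6; a minimal sketch of the inversion that a displacement method can build on (hypothetical names, with no claim to match JF's implementation):

```python
import math

def lennard_jones(r, epsilon, sigma):
    """Standard Lennard-Jones pair potential 4*eps*((s/r)**12 - (s/r)**6)."""
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x * x - x)

def lennard_jones_inverse(u, epsilon, sigma, branch):
    """Separation r with lennard_jones(r) == u. The 'repulsive' branch lies
    left of the minimum at r = 2**(1/6)*sigma and exists for u >= -epsilon;
    the 'attractive' branch lies right of it and requires -epsilon <= u < 0."""
    root = math.sqrt(1.0 + u / epsilon)
    x = 0.5 * (1.0 + root) if branch == "repulsive" else 0.5 * (1.0 - root)
    return sigma * x ** (-1.0 / 6.0)
```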
Taggers and their tags are listed in the [TagActivator] section, and they are generally similar to those of other configuration files. In addition, the internal state for the Coulomb system is defined in the [TagActivator] section and further described in the [SingleActiveCellOccupancy] section. The latter sets the cell level (which serves for the water molecules) to the root-node level (cell level = 1); the tracked position is the barycenter of the leaf-node positions of each water molecule. (Root-node and leaf-node positions are set in the random input handler, which itself uses a water random node creator.) The event handlers consistently update all leaf-node positions and root-node positions from a valid initial configuration obtained in an instance of the WaterRandomNodeCreator class. Consistency will deteriorate over long runs, but this is of little importance for the simple example case presented here. The configuration file reproduces published data (see Fig. 17, 4). Molecular Coulomb cell-veto, Lennard-Jones cell-veto The configuration file water/coulomb cell veto lj cell veto.ini offers no new factors compared to Sections 5.3.2 and 5.3.3, but it uses, for illustration purposes, two cell-occupancy systems and two cell-veto event handlers. As nearby and surplus particles are excluded from the cell-veto treatment, this implies two sets of cell-based, nearby, and surplus event handlers, in addition to two cell-boundary event handlers. For the molecular Coulomb factors, the cell-veto event handler receives as its factor potential the sum of the pairwise merged-image Coulomb potentials with attractive and repulsive charge combinations. Cells track the barycenter of individual water molecules, and consistency between root-node units and leaf-node units is of importance. Although the Lennard-Jones potential can be inverted, the configuration file sets up a second cell-occupancy system for the Lennard-Jones potential. This cell-occupancy system tracks only leaf-node particles that correspond to oxygen atoms. Taggers and their tags are listed in the [TagActivator] section. This section is of interest as it sets up the internal state as two cell-occupancy systems, both instances of the same SingleActiveCellOccupancy class. They require different parameters and are therefore presented under aliases, in the [OxygenCell] and [MoleculeCell] sections. Each of these cell-occupancy systems uses a separate cell-system instance of the same class. As the two cell systems have the same parameters, they do not need to be aliased in the configuration file. The configuration file reproduces published data (see Fig. 17, 5). Licence, GitHub repository, Python version JF, the Python application described in this paper, is an open-source software project that grants users the rights to study, execute, modify and distribute the code. Modifications can be fed back into the project. Conclusions, outlook As presented in this paper, JF is a computer application for ECMC simulations that is intended to be useful for researchers in different fields of computational science. JF-V1.0 constitutes its first development milestone: built on the mediator design pattern, it systematically formulates the entire ECMC time evolution in terms of events, from the start-of-run to the end-of-run, including sampling, restarts (that is, end-of-chain), and the factor events. A number of configuration files validate JF-V1.0 against published test cases for long-range interacting systems [9].
For JF-V1.0, consistency has been the main concern, and the code has not yet been optimized. Also, the handling of exceptions remains rudimentary, although this is not a problem for the cookbook examples of Section 5. All the methods are written in Python. Considerable speed-up can certainly be obtained by rewriting time-consuming parts of the application in compiled languages, in particular in the potential package. One of the principal limitations of JF-V1.0 is that pseudo-factor-related and unconfirmed events are time-sliced, leading to superfluous trashing and re-activation of candidate events. Optimized bounding potentials for many-particle factor potentials also appear as a priority. The consistent implementation of an arbitrary number n_ac of simultaneously active particles is conceptually straightforward, although it has not been implemented fully in JF-V1.0. (As mentioned, this is planned for JF (Version 2.0).) This will enable full parallel implementations on multiprocessor machines. Simplified parallel implementations for one-dimensional systems and for hard-disk models in two dimensions are currently being prototyped. The parallel computation of candidate events (using the MultiProcessMediator class implemented in JF-V1.0) is at present rather slow. Bringing the full power of parallelization and of multi-process ECMC to real-world applications appears as the outstanding challenge for JF.
22,874.6
2019-07-29T00:00:00.000
[ "Computer Science", "Physics", "Chemistry" ]
Competing risk and heterogeneity of treatment effect in clinical trials It has been demonstrated that patients enrolled in clinical trials frequently have a large degree of variation in their baseline risk for the outcome of interest. Thus, some have suggested that clinical trial results should routinely be stratified by outcome risk using risk models, since the summary results may otherwise be misleading. However, variation in competing risk is another dimension of risk heterogeneity that may also underlie treatment effect heterogeneity. Understanding the effects of competing risk heterogeneity may be especially important for pragmatic comparative effectiveness trials, which seek to include traditionally excluded patients, such as the elderly or complex patients with multiple comorbidities. Indeed, the observed effect of an intervention depends on the ratio of outcome risk to competing risk, and these risks (which may or may not be correlated) may vary considerably among patients enrolled in a trial. Further, the effects of competing risk on treatment effect heterogeneity can be amplified by even a small degree of treatment-related harm. Stratification of trial results along both the competing-risk and the outcome-risk dimensions may be necessary if pragmatic comparative effectiveness trials are to provide the clinically useful information their advocates intend. Introduction Recent commentaries have highlighted several fundamental limitations of clinical trials in providing an evidence base for medical practice. It has been pointed out that many patients seen in routine clinical practice, particularly older and complex patients with multiple comorbid conditions, are excluded from clinical trials [1,2]. To address this, there has been a call for pragmatic comparative effectiveness trials with broader inclusion criteria, with the goal of enrolling a diverse patient population more representative of patients seen in routine clinical practice [3]. However, other commentaries have highlighted another limitation of clinical trials: substantial treatment-effect heterogeneity within trials often makes the overall summary result difficult to interpret and apply [4-7]. Enrolling a greater diversity of patients will increase this within-trial heterogeneity. Thus, while some argue for broader inclusion criteria to make results more "generalizable", increasing patient heterogeneity yields overall results that are more likely to be uninformative or even misleading. While a consensus has yet to fully emerge on how best to deal with treatment-effect heterogeneity, the limitations of conventional subgroup analyses are well appreciated. Since patients have multiple attributes that might affect the risks and benefits of an intervention (they are male or female, young or old, with or without diabetes, and have a high or low blood pressure, blood count, cholesterol, urinary protein excretion, ejection fraction, etc.), it is statistically impractical to consider each potentially important attribute in a one-at-a-time manner [6,7]. It has therefore been suggested that patient characteristics be combined using risk models that describe fundamental dimensions of risk likely to underpin treatment-effect heterogeneity [6,9-11]. Prior work has demonstrated that variation in outcome-risk (i.e. a patient's baseline risk of having the outcome of interest) is a fundamental determinant of the opportunity for treatment benefit, and of the risk-benefit trade-offs when there is any treatment-related harm [6,9-11].
Because variation in outcome-risk among patients enrolled in clinical trials is ubiquitous, frequently large and typically skewed, a relatively small subgroup of high-risk patients often accounts for most trial outcomes and has a disproportionate influence on overall trial results [12]. Indeed, the summary result of a clinical trial might not even accurately reflect the tested intervention's treatment-effect in a typical patient within the trial [6,12]. Because of this, and because outcome-risk variation can often be well described with a simple multivariate risk model, routine stratification of trial results by outcome-risk has been recommended [6,9-11,13]. In addition to outcome-risk, it is also recognized that, for treatments with serious and non-rare adverse effects (e.g. surgery or fibrinolytic therapy), individual patient variation in vulnerability to treatment-related harm can give rise to important treatment-effect heterogeneity; thus, it may in some circumstances be appropriate to stratify patients based on their risk of treatment-related harm [6,9,14,15]. However, another dimension of risk heterogeneity from which clinically significant differences in treatment-effect may emerge is relatively neglected and may be of particular importance for comparative effectiveness trials: variation in competing risk. Competing risk is the risk of an event that interferes with the probability of experiencing the disease-specific outcome of interest [16]. It is not merely a statistical issue affecting Kaplan-Meier [16] or sample size [17] estimates, but a clinical issue that is especially important when considering treatments for older or complex patients with multiple comorbidities, for whom competing events may limit the likelihood of treatment benefit. This paper considers how, even for treatments with uniform treatment efficacy, understanding the complex interplay between baseline risk, treatment-related harm and competing risk is important in making good individual-patient recommendations and decisions, and how analyzing the effects of competing and outcome risks in clinical trials (normally obscured by overall trial results) may better inform clinical decision-making. When Competing Risk is Uncorrelated with Outcome-Risk To illustrate the interplay between outcome-risk and competing risk, consider the use of adjuvant chemotherapy for breast cancer. Adjuvant chemotherapy reduces the risk of breast cancer death in both node-positive and node-negative cancers [18]. Since the treatment carries a non-trivial risk of serious complications, evidence-based guidelines often strongly recommend that the patient's risk of cancer recurrence and death (based upon cancer grade and stage) play an important role in determining who should receive chemotherapy [19]. However, breast cancer frequently behaves like a chronic disease, with events occurring over a decade or more, and there is tremendous variation in the risk of both breast cancer and non-breast-cancer death across the breast cancer population. Table 1 shows how the benefit of chemotherapy will vary according to both the baseline risk from the cancer itself and the degree of competing risk for mortality, even assuming a constant relative effect of treatment for all patients (relative risk reduction = 15%) and a constant absolute rate of serious treatment-related harm (15 events over 10 years per 1000 people treated).
Based on these assumptions and on estimated survival rates for stages 1, 2 and 3 breast cancer [20], and consistent with our previous work [6,9,10], those who are at very high risk of dying from the disease usually benefit substantially despite the risks of treatment-related harm, not surprisingly even when there is substantial competing risk. This is because when the baseline breast cancer risk is high, the disease-specific risk overwhelms competing risks from comorbid illnesses, and an efficacious treatment produces a large amount of absolute benefit, far outweighing the small risk of treatment-related harm (see Table 1). For patients with a more favorable prognosis, however, the absolute benefit of the same treatment is much more modest, such that treatment-related harm and competing risk can greatly attenuate or reverse the net benefits of treatment. For example, a patient with a non-trivial 10% breast cancer mortality risk would appear to be a good candidate in the absence of other risks (number needed to treat [NNT] = 67); however, a small treatment-related risk would nullify their potential benefit, and the presence of relatively modest competing risks would cause the treatment to result in net harm. Even for patients with a substantial 25% risk of breast cancer death in the absence of competing risks, a high rate of competing risk results in a greatly attenuated treatment-effect; the NNT increases from 44 (with no competing risks) to 267 (with a 50% 10-year competing risk of mortality). If patients with high competing risk also had twice the normal risk of treatment-related harm (and the risk of treatment-related harm is often influenced by comorbid illness), then chemotherapy would result in net harm (number needed to harm [NNH] = 91). (A minimal sketch of this arithmetic is given below.) Note that a 10-year competing risk of mortality of 50% is not extreme for breast cancer. Approximately a third of breast cancer patients are over the age of 70. The 10-year risk of competing mortality would be approximately 50% for a 70-year-old at only slightly higher than average risk [21]. Indeed, a median-age breast cancer patient (age 61) with asymptomatic class I CHF would also have approximately a 50% 10-year risk of competing mortality [22]. Table 1 demonstrates that the overall measured effectiveness of adjuvant chemotherapy in a clinical trial depends on the distribution of the competing and outcome risks of trial enrollees. By excluding older patients or those with comorbidities [23,24], oncology trials enhance their likelihood of detecting treatment benefit, but their results are directly applicable only to patients with low competing risks. While enrolling older patients and patients with multiple comorbidities would attenuate the treatment-effect in the summary results, it would still not provide clinically useful knowledge about whom to treat unless the analyses included risk-based stratification incorporating both competing and outcome risk.
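The arithmetic behind these figures can be approximated by discounting the absolute risk reduction with the competing mortality risk and subtracting treatment-related harm. The sketch below is an assumed simplification, not the authors' exact survival model; it reproduces the quoted NNTs of 67, 44 and 267, and yields an NNH of 89, close to the quoted 91 (the small difference presumably reflects the underlying survival calculations):

```python
def net_benefit(outcome_risk, competing_risk, harm, rrr=0.15):
    """10-year net benefit per person treated: absolute risk reduction for
    breast cancer death, discounted by the competing risk of dying of
    other causes, minus absolute treatment-related harm."""
    return outcome_risk * rrr * (1.0 - competing_risk) - harm

HARM = 0.015  # 15 serious harm events over 10 years per 1000 treated

for outcome_risk, competing_risk, harm in [(0.10, 0.0, 0.0),
                                           (0.25, 0.0, HARM),
                                           (0.25, 0.5, HARM),
                                           (0.25, 0.5, 2 * HARM)]:
    benefit = net_benefit(outcome_risk, competing_risk, harm)
    label = "NNT" if benefit > 0 else "NNH"
    print(f"outcome {outcome_risk:.0%}, competing {competing_risk:.0%}, "
          f"harm {harm:.1%}: {label} = {abs(round(1.0 / benefit))}")
```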
When Competing Risk is Correlated with Outcome-Risk While the presence or absence of comorbidities should not substantially alter the likelihood of a more aggressive versus a more indolent form of cancer, in many circumstances competing risk can be highly correlated with the disease-specific outcome-risk. For example, an implantable cardiac defibrillator (ICD) would be of most benefit in patients with a high risk of sudden cardiac death (SCD) but little risk of death from other causes [25], since implanting these devices (costly and not risk-free) in patients destined to die from non-arrhythmic causes is highly undesirable. However, the criteria used to identify eligible patients at high risk for SCD, namely a left ventricular ejection fraction of 35% or less, also identify patients at risk of cardiac death from pump failure [26,27]. Separating these risks has proven difficult, as factors that predict mortality from SCD usually also predict non-SCD mortality [28]. The Seattle Heart Failure Model (SHFM), developed on a database of pooled clinical trials consisting of 10,538 ambulatory patients with heart failure [29], predicts total mortality in patients with congestive heart failure based on easily obtainable clinical variables. Both SCD and pump-failure death substantially increase across risk strata [30]. However, the ratio between SCD and pump-failure death dramatically decreases in higher-risk compared to lower-risk patients; low-risk patients with risk scores of zero have roughly a 7-to-1 ratio of SCD to pump-failure death, while this ratio is roughly 1 to 2 in patients with risk scores of 4 [30]. As Figure 1 demonstrates, these different ratios across risk strata can have important effects on the measured effectiveness of ICDs. As mortality risk increases, the relative risk reduction of ICDs dramatically decreases. This is because the relative risk reduction is inversely related to the ratio of preventable disease-specific (SCD) mortality to non-preventable competing mortality. More surprisingly, the absolute risk reduction across risk strata is described by a non-linear, inverted U-shaped function (Figure 1b). Intermediate-risk patients are most likely to benefit, as low-risk patients are unlikely to have an arrhythmic death even in the absence of treatment, while benefit in the highest-risk group is limited by the high rate of non-arrhythmic death. Folding in treatment-related harm could further amplify this treatment-effect heterogeneity, particularly if sicker patients were more prone to ICD-induced adverse events. Indeed, the phenomenon depicted in Figure 1 is consistent with a risk-stratified analysis of the Multicenter Automatic Defibrillator Implantation Trial (MADIT)-II [31,32]. While overall MADIT-II found a 31% reduction in all-cause two-year mortality associated with the ICD [31], the ICD did not appear to reduce all-cause mortality in either a very low-risk group (defined as no more than 1 of multiple risk factors) or a very high-risk group of patients (defined as those with blood urea nitrogen ≥ 50 mg/dL or serum creatinine ≥ 2.5 mg/dL) [32]. While the intermediate mortality-risk group may represent the "sweet spot" for ICD therapy, more efficient targeting of these devices could presumably be achieved if risk factors for SCD that do not predict pump failure could be identified. Again, summary trial results would be quite dependent on the risk profile of the enrolled patients, and may be misleading in some circumstances.
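Figure 1's construction (devices assumed 75% effective against SCD and ineffective against pump-failure death) can be sketched as follows. The risk strata and SCD-to-pump-failure ratios below are illustrative values chosen only to display the shape of the curves, not the empirical SHFM numbers:

```python
def icd_benefit(total_mortality_risk, scd_to_pump_ratio, efficacy=0.75):
    """Relative and absolute all-cause mortality reduction for an ICD that
    prevents a fraction `efficacy` of SCD and no pump-failure deaths."""
    scd_risk = total_mortality_risk * scd_to_pump_ratio / (1.0 + scd_to_pump_ratio)
    absolute_reduction = efficacy * scd_risk
    return absolute_reduction / total_mortality_risk, absolute_reduction

# (total mortality risk, SCD : pump-failure ratio) per stratum -- illustrative:
for risk, ratio in [(0.05, 7.0), (0.15, 2.0), (0.35, 1.0), (0.70, 0.25)]:
    rrr, arr = icd_benefit(risk, ratio)
    print(f"total risk {risk:.0%}: RRR = {rrr:.2f}, ARR = {arr:.3f}")
```

With these inputs, the relative reduction falls monotonically as the SCD share shrinks, while the absolute reduction rises and then falls, matching the inverted U-shape of Figure 1b.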
Summary As seen in the clinical conditions above, variation in competing risk can cause variation in treatment-effect. Even for treatments with a constant efficacy, the apparent relative risk reduction of an intervention is directly related to the ratio between disease-specific and competing risk. When these two risk dimensions are correlated, increasing outcome-risk may not uniformly lead to increasing benefit, particularly when the primary outcome is a combination of disease-specific and competing events. Even when outcome and competing risks are not correlated, understanding how to treat individual patients can depend on the interplay between competing and outcome risks, and these effects can be greatly amplified by even small amounts of treatment-related harm, especially when those with higher competing risk are also at greater risk of harm from treatment. Calls for large simple clinical trials [33] with broad inclusion criteria, including older or complex patients, designed to provide results generalizable to "real-world" patients [3], have generally ignored the fact that reporting a summary treatment-effect based on the arithmetic mean across all patients may at times be misleading. The application of the results of meta-analysis without a careful consideration of clinical heterogeneity is also problematic [34]. Since treatment benefit depends on the ratio between competing and outcome risks, it may be necessary to stratify these real-world effectiveness trials along these two important risk dimensions, as in Table 1, or to account for these risks using appropriate multivariable models. Some might point out that thoughtful and experienced clinicians attempt to do this in clinical practice, such that overall clinical trial results can still be customized at the bedside. Doctors specializing in prostate cancer, for example, are known to apply the so-called 10-year rule [35], an implicit assessment based on age and comorbidity, whereby aggressive therapy might be offered to patients likely to live long enough to benefit. [Figure 1: Relative and Absolute Benefits for Implantable Cardiac Defibrillators (ICDs) Stratified by Total Mortality Risk. These graphs show the relative (A) and absolute (B) benefit of ICDs, assuming that the devices are 75% effective in preventing sudden cardiac death (SCD) but not at all effective in preventing pump failure. The risk ratio of SCD to pump-failure death was empirically observed [30]. The relative risk reduction decreases monotonically. The absolute risk reduction demonstrates an inverted U-shaped benefit: benefit is low in low-risk groups, whose risk of SCD is low, and in high-risk groups, who suffer pump failure.] However, physicians in general (and oncologists specifically) are prognostically inaccurate and systematically over-optimistic when estimating overall life expectancy [36,37]. In addition to being inaccurate, implicit clinical judgment is an inadequate basis for clinical practice in an era where decisions are expected to conform to guidelines and will be evaluated based on performance measures. While tools that help formally assess comorbidities and competing risks may be helpful [38], considering and comparing patient-specific competing risks against patient-specific disease risks adds a dimension of complexity likely to render simple bedside heuristics inadequate, since these risks may be determined by different or overlapping factors, particularly where the time course for the benefits and harms of therapies can vary. Further, when the results of clinical trials themselves are aggregated across patients with greatly varying disease-specific and competing risks, the underlying treatment-effect that should be incorporated into any decisional framework at the bedside may be totally obscure.
Conclusion In order to build an evidence base that can support guidelines for patients who have multiple diseases simultaneously, relaxing clinical trial eligibility criteria to include older and complex patients must be accompanied by analyses that examine how a treatment's net benefit varies with an individual's disease-specific risk, chance of treatment-related harm, and competing risks. Research is needed to develop and test reliable ways to capture competing risk for different conditions [35,39], to develop sound methodologies for examining treatment-effects across multiple dimensions of risk, to develop a consensus to standardize analytic approaches, and to identify the circumstances and clinical conditions in which these more complex analytic approaches are justified and necessary.
3,721.2
2008-05-22T00:00:00.000
[ "Economics", "Medicine" ]
Weddell Sea Polynya analysis using SMOS-SMAP Sea Ice Thickness Retrieval The Weddell Sea Polynya is an anomalously large opening in the Antarctic sea ice above the Maud Rise seamount. After 40 years of absence, it fully opened again on 13 September 2017 and lasted until the onset of melt, staying open for a total of 80 days. 2017, however, was not the only year in which the imprint of the polynya could be identified. By investigating sea ice thickness (SIT) data retrieved from the satellite microwave sensors Soil Moisture Ocean Salinity (SMOS) and Soil Moisture Active Passive (SMAP), we have isolated an anomaly of thin sea ice over Maud Rise in September 2018, spanning an area comparable to the polynya of 2017. In this paper, we look at sea ice above Maud Rise in August and September of 2017 and 2018, as well as at all years from 2010 until 2020 in an 11-year time series. Using the ERA5 surface wind reanalysis data, we present the strong impact storm activity has on sea ice and help consolidate the theory that the Weddell Sea Polynya, in addition to oceanographic effects, is subject to direct atmospheric forcing. Based on the results presented, we propose that the Weddell Sea Polynya, rather than being a binary system with one principal cause, is a dynamic process caused by various preconditioning factors that must occur simultaneously for it to open. Moreover, we show that anomalous activity atop Maud Rise did not stop abruptly after 2017: the very next year shows signs of polynya-favourable activity that, although insufficient to open a polynya, was present in the region. This effect, as will be shown in the 11-year SMOS record, is not unique to 2018, and similar anomalies are identified in 2010, 2013 and 2014. It is demonstrated that L-band microwave radiometry from the SMOS and SMAP satellites can provide additional useful information, which helps to better understand dynamic sea ice processes like polynya events, compared to using satellite sea ice concentration products alone. [...] the given uncertainty of the product, meaning 30%. However, the retrieval does not take into account subtle differences that distinguish the two polar environments. Nevertheless, recent research on Antarctic phenomena has made use of the SMOS SIT retrieval (e.g., Shi et al., 2021), and more specifically, SMOS SIT retrieval has been used for studying Antarctic polynyas (e.g., Heuzé and Aldenhoff, 2018; Mohrmann et al., 2021). Sea ice concentration (SIC) data (Section 2.2) is necessary to further validate and contextualize the SIT data. The SMOS-SMAP retrieval algorithm assumes near-100% SIC when retrieving SIT, and since we look at a region prone to polynyas and low SIC (Lindsay et al., 2004), it is necessary to consider this factor. The SMOS-SMAP SIT retrieval has no SIC correction implemented, because the uncertainty of SIC algorithms at high concentrations and their covariation at small thicknesses would cause large errors (Paţilea et al., 2019). Using SIC maps and data in combination with their SIT counterparts, we can better infer the location and degree of error in our SIT retrieval. Paţilea et al. (2019) give specific examples for the retrieval: at 90% SIC, a true ice thickness of 10 cm is retrieved as 8.5 cm, while a true thickness of 50 cm is retrieved as just 28 cm. Moreover, all sea ice concentration algorithms show less than 100% SIC for thicknesses below 30 cm (Paţilea et al., 2019).
Thus, the thin ice thickness data shown in this study should rather be interpreted as a combined ice-area and thickness anomaly and should not be used to calculate the actual ice volume of the polynya area. However, when the polynya opens, the large heat loss from the ocean often causes thin sea ice to grow, which soon shows up as 100% SIC but will correctly be shown as a large-scale thin-ice area in the SMOS-SMAP dataset. ASI Ice Concentration Algorithm The ARTIST Sea Ice (ASI) algorithm retrieves SIC from the difference between the brightness temperatures at 89 GHz at vertical and horizontal polarizations. This polarization difference is converted into SIC using pre-determined fixed values of the polarization difference for 0% and 100% SIC, known as tie points. It is known from surface measurements that the polarization difference of the emissivity near 90 GHz is similar for all ice types and much smaller than for open water (Spreen et al., 2008). At such a high frequency, the atmospheric influence is also high. This effect is dealt with by a bulk correction for atmospheric opacity and by weather filters implemented over open water. Because the Bootstrap (BBA) algorithm (Comiso et al., 1997) uses the 18 and 37 GHz channels, which are less sensitive to atmospheric phenomena, it is also used to filter the produced ASI SIC by setting the SIC to zero wherever the Bootstrap algorithm retrieves less than 5% SIC. ERA5 Climate Reanalysis ERA5 climate reanalysis data is used to study direct atmospheric forcing on the opening of the polynya, as well as on anomalous regional sea ice thinning, to conclusively answer whether the Weddell Sea Polynya is purely ocean-driven or maintained by a combination of both processes. ERA5 provides data from 1950 onwards. It replaces the ERA-Interim reanalysis (spanning 1979 onwards) and is based on the Integrated Forecasting System (IFS) Cy41r2. ERA5 benefits from a decade of developments in model physics, core dynamics and data assimilation (Hersbach et al., 2020). In addition to a significantly enhanced horizontal resolution of 31 km, compared to 80 km for ERA-Interim, ERA5 has hourly output throughout, and an uncertainty estimate from a 10-member ensemble of data assimilations with 3-hourly output. In an earlier assessment, ERA-Interim (ERA-I) was deemed accurate for gathering signs of storm activity, as it skillfully represented mean sea-level pressure (MSLP) variability near Maud Rise (mean absolute deviation = 2.2 hPa; mean bias = 0.8 hPa). ERA5 is a reanalysis with a higher temporal and spatial resolution than ERA-I. It improves upon its predecessor in terms of information on the variation in quality over space and time, as well as through improved troposphere modelling. As a result, for the purposes of this study, it should offer a better, or at least equivalent, assessment of the wind speeds near Maud Rise that are cross-referenced with the SIT retrievals presented in this study. Here the advantage of the SMOS-SMAP ice thickness retrieval shows its strength compared to the traditional sea ice concentration datasets (Fig. 1). This section presents findings that suggest a previously unrecognized similarity between the two September anomalies. The 2017 polynya has been documented via satellite imagery, most commonly via SIC retrieval (e.g., Campbell et al., 2019). Here the advantages of the SMOS-SMAP SIT retrieval are limited by the high open-water fraction, but they nevertheless help to demonstrate the full extent of the anomaly that SIC maps of the region can only partially depict (see Fig. 3).
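The wind-speed series used in the following comparison can be derived from the ERA5 10 m wind components; a minimal sketch, with a hypothetical array layout and one plausible reading of the daily mean/maximum reduction:

```python
import numpy as np

def daily_wind_statistics(u10, v10):
    """Daily mean and maximum wind-speed magnitude over the region of
    interest, from hourly ERA5 10 m wind components with an assumed
    array layout of (days, hours, lat, lon)."""
    speed = np.hypot(u10, v10)               # magnitude of the (u, v) vector
    regional_mean = speed.mean(axis=(2, 3))  # spatial average -> (days, hours)
    return regional_mean.mean(axis=1), regional_mean.max(axis=1)
```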
Atmospheric data (Fig. 4a), in the form of wind speed derived from the u and v components of the wind velocity vectors, are presented as the daily average (in blue) and maximum (in red) magnitude in the region of interest. Notably, we see the highest mean (9-10 Sep) at the start of the SIT anomaly and the highest maximum at the same time as the peak of the anomaly (17-18 Sep). Thus wind could have contributed to the formation of the 2018 "polynya thin ice" event. Interpreting the wind speed results shown in Fig. 5a against the polynya area and thickness plots below them, we see that the highest maximum (in red) and mean (in blue) wind speed magnitudes both coincide with the 13 September polynya opening date. From the ASI SIC record (Fig. 5b), we can see both the similarities it shares with the SIT record (Fig. 5c) and clear differences that will be further discussed below. Important to note is that the blue line in both the SIC and SIT records represents the area that is classified as open water. These lines are also present in Fig. 4b and 4c for 2018 but are consistently at 0 km² and therefore hidden by the overlap with the low SIC and low SIT lines. The polynya maps in Fig. 2 and Fig. 3 are useful for assessing fine details of the low SIT distributions as well as for comparing the SIT retrieval with ASI SIC (Fig. 3). By capturing the low sea ice thickness anomaly in 2018, and at the beginning of the 2017 polynya event, in the SIT record, we can infer that there were residual polynya-favourable effects that produced a forcing insufficient to open the polynya but sufficient to still impact the overlying sea ice. This is similar to the 1970s polynya cases, where the 1973 polynya resulted in a much larger iteration of the Weddell Sea Polynya visible from 1974 to 1976. Cheon In order to analyse the time periods during which the polynya of 2017 (Fig. 5) and the sea ice anomaly of 2018 (Fig. 4) occurred, we view the respective time series. For 2017, Fig. 5c shows the progression of events, in terms of SIT, by which the polynya came to be. First and foremost there is a major regional ice thinning in early August that peaks on 4 August, much like the minor Weddell Sea Polynya of 2016, which also peaked on 4-5 August of that year (see Fig. A1 in the appendix). Looking at Fig. 5b we can see how much smaller the area affected by SIC variations is and how different its behaviour is from the SIT time series. Only the "below 80%" SIC class shows some variability, which is, however, not well correlated with the SIT time series. This is especially true during the brief period (6-12 Sep) leading up to the polynya, which is promising because it suggests a lack of low-SIC-induced SIT values arising from the full ice cover assumption of the SMOS-SMAP SIT retrieval. In total, compared to the 50 · 10³ km² of below-100% SIC area, ice less than 50 cm thick spans over 300 · 10³ km² of the region of interest. Following the period mentioned (6-12 Sep), there is the sudden peak (12-13 Sep) in both lower sea ice concentrations and thin sea ice. Based on Fig. 3, we see that this at first minor opening in the sea ice paved the way to the Weddell Sea Polynya. From an oceanographic point of view this would imply heat exchange with the atmosphere, which would cool the surface water layer and destabilize the water column. This destabilization, further facilitated by the effect of the Taylor column, isolates the water mass above Maud Rise (Muench et al., 2001).
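The low-SIT areas quoted here (e.g. ice thinner than 50 cm covering some 300 · 10³ km²) amount to a threshold-and-count over the gridded SIT field. A minimal sketch, with an assumed grid-cell size and random values standing in for the real product:

```python
import numpy as np

CELL_AREA_KM2 = 12.5 ** 2          # assumed ~12.5 km grid spacing (illustrative)

def thin_ice_area(sit_cm, threshold_cm):
    """Total area (km^2) of valid grid cells with SIT below `threshold_cm`."""
    valid = np.isfinite(sit_cm)
    return np.count_nonzero(valid & (sit_cm < threshold_cm)) * CELL_AREA_KM2

# toy daily field over the Maud Rise box: 60 x 60 cells, thicknesses in cm
rng = np.random.default_rng(1)
sit = rng.uniform(0, 120, size=(60, 60))
for thr in (20, 50):
    print(f"area below {thr} cm: {thin_ice_area(sit, thr):.0f} km^2")
```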
The lack of stratification in the waters surrounding the Antarctic continent would trigger convection cells able to bring up warm Deep Water from below. In Fig. 3 we can see the much larger scale effect this has on SIT rather than on SIC, and how the peaks of low SIC and low SIT do not coincide in Fig. 5b and Fig. 5c, respectively. Instead, we see the low SIT area peak occurring 5-6 September, following the 4 September peak in low SIC area, which is what we expect considering that the convection cell would not simply cease immediately after the smaller openings in the ice freeze up; rather, its effects would be "felt" in the general location for days to come. With the ocean destabilized, coupled with the heavy storm activity that can be seen in Fig. 5a, the polynya was able to open fully. Fig. 4 shows that 2018 is less anomalous than 2017 for the first one and a half months, until the sea ice anomaly begins to form on 6 September 2018. There is an initial thinning and occasional sporadic "below 80%" SIC events distributed throughout the period. Notably, the events on 24 August and 31 August, seen in Fig. 4b, seem to suggest lead openings in thick pack ice, as there is no thinning recorded in the SIT retrieval for those days. The sea ice anomaly itself, as can be seen in Fig. 4c, is very well defined in the SIT record and has a clear beginning and end. Notable is that, of the two consecutive low SIT area peaks, the first, smaller peak is characterised by a more extreme case of thinning, with a prolonged period (7-13 September) of ice thinner than 20 cm, followed by a much larger area of ice thinner than 50 cm (15-21 September). This anomaly follows a period of relatively strong mean and maximum wind speed from 3 August to 13 September towards the East and Southeast (Fig. 4), which could imply that wind-driven ice advection influences the sea ice anomaly, as any refreezing of ice that has been broken apart by wind would require newly formed thin ice. Similarly, low SIC and strong winds would enhance heat loss from the ocean and cause upwelling of warm water, which would melt the ice from below. Fig. 7 depicts hourly wind conditions during the start of the sea ice anomaly on 7 September 2018: the strong westerly winds (blowing towards the East) common for this region occasionally show a more northerly component, roughly where and when the sea ice anomaly began to form. Through the comparison of our SIC data with ERA5 atmospheric data we can infer when wind can force the Weddell Sea Polynya: atmospheric forcing is a strong contributing factor, especially towards the start of the polynya. Thereafter, oceanic upwelling of warm water due to the reduced stratification plays an increasing role. Lastly, we use the SMOS SIT retrieval instead of the combined SMOS-SMAP product to analyze years before 2015 (the year SMAP was put into orbit) and form a consistent 11-year SIT time series over the months of July, August, September and October (Fig. 1), fully including the freezing periods of the relevant region over the years. Notably, the sea ice thinning of 2018 stands out in this record. There are periods of near-100% SIC and low SIT, as during pre-polynya periods. When the polynya is open, the SIT signal from the retrieval is unlikely to provide accurate ice thickness data due to large areas of open water influencing the signal. As mentioned before, low SIC affects the SIT record.
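The daily mean and maximum wind speed magnitudes plotted in Figs. 4a and 5a are straightforward to derive from the hourly u and v components; the sketch below uses synthetic values rather than the ERA5 fields, purely to show the reduction step.

```python
import numpy as np

# Hourly 10 m wind components over the Maud Rise box for 30 days (synthetic
# values; in practice these come from the ERA5 u10/v10 fields).
rng = np.random.default_rng(0)
u10 = rng.normal(8.0, 4.0, size=30 * 24)
v10 = rng.normal(0.0, 4.0, size=30 * 24)

speed = np.hypot(u10, v10)        # wind speed magnitude from u and v
daily = speed.reshape(30, 24)     # one row per day
daily_mean = daily.mean(axis=1)   # blue curves in Fig. 4a / 5a
daily_max = daily.max(axis=1)     # red curves in Fig. 4a / 5a
print(daily_mean[:3].round(1), daily_max[:3].round(1))
```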
Other sources of uncertainty can be flooded ice and slush caused by snow pushing the sea ice down such that water floods in from the sides or from below through cracks in the ice. However, we have no indication that this happened here. Due to the potential uncertainties in this study, the SIT record serves mainly as an indicator of anomalous sea ice activity rather than a means by which to quantify the exact degree of thinning in the region. In 2018, a polynya-free year, the SIT retrieval has shown the beginning and end of a sea ice anomaly that, at its peak (18 Sep: <50 cm sea ice region with an area of 300 · 10³ km²), reached an estimated area larger than the United Kingdom. It is apparent that the low SIT anomaly covered a much wider area than that where low SIC (most likely minor lead openings) is recorded. As such, SMOS-SMAP SIT analysis is a method by which the Maud Rise region can be monitored better and on a more frequent basis. This type of analysis, able to detect anomalous activity above Maud Rise with high temporal resolution, paves the way to a better understanding of the underlying processes that not only drive the polynya but in fact affect the sea ice more often than previously thought possible. An extension of the 11-year SMOS time series is needed to better quantify how regularly and how often such polynya-type ice anomaly events occur. As both SMOS and SMAP are science missions with no planned follow-ups, there is a chance that we will have a gap in the current L-band radiometry capability in space. However, with the future operational Copernicus CIMR mission (planned launch 2028; https://cimr.eu/), some continuation of the SIT time series will be possible. In conclusion, the classification of the Weddell Sea Polynya as a purely open-ocean polynya has been challenged, and clear links between wind speed magnitude and polynya conditioning have been found. As for SIT retrieval from L-band microwave radiometers like SMOS and SMAP: it is an effective tool for monitoring sea ice conditions above Maud Rise and is capable of collecting more substantial information than its SIC counterpart. Rather than substituting for SIC retrieval, though, the two should be used in conjunction with one another to aid the scientific understanding of the processes taking place, and SIT retrieval should be added as yet another tool for trying to understand the unique and complex processes present in the Maud Rise region. Appendix A A1 The 2016 Polynya Event In Fig. A1 we show the 2016 polynya in the same format as the 2017 polynya and 2018 ice thinning anomaly.
3,963.4
2021-06-17T00:00:00.000
[ "Environmental Science", "Geology" ]
The Impact of Accessibility of Urban Central Parks on Housing Prices of Fuzhou The paper analysed the impact of the accessibility of urban parks within the third ring of Fuzhou on housing prices through network analysis and SPSS correlation analysis, based on data from sources such as remote-sensing imagery, web crawling and the urban road network, finding that: (1) The density of residences within the third ring of Fuzhou decreases from the centre to the edges; (2) The housing price, mostly ranging between 18,000 and 28,000 RMB/m², peaks in the city centre; (3) Among houses within the third ring of Fuzhou, Gulou District enjoys the greatest access to urban parks while Cangshan District is the poorest in this regard; (4) The residences within the third ring of Fuzhou can be rated as A or A- in terms of access to urban parks, an overall excellent performance; (5) The walking distance to the parks is significantly correlated with the housing price: the shorter the distance, the higher the price. On this basis, the paper proposes the following suggestions: (1) Revise the routes to Gaogai Mountain Park by adding entrances and exits to improve its accessibility; (2) Improve the transportation network and add footpaths to the parks, thus shortening the distance between the parks and the surrounding residences. Introduction Since the reform and opening-up, China has witnessed a rapid rate of urbanization, which had reached 59.58% by the end of 2018, well above the world average [1]. Urban public facilities are challenged by the increasing urban population and people's growing needs for better living quality and surroundings. As a major public facility, a well-managed urban central park usually covers a wide area of green space with abundant resources, providing an ideal place for outdoor activities [2]. Accessibility usually refers to the level of effort people need to make to get to a specific destination, measured by distance, time consumption and expense [3]. The accessibility of central parks has a large impact on the life of urban dwellers. In the literature at home and abroad, scholars have focused on the social effects of the accessibility of urban parks [4], the relationship between green space and population, social fairness and services [5], comparison of the accessibility of different types of green space [6,7], and how to measure accessibility to green space in real scenarios [8,9]. Coombes et al. explored the connection between residents' physical health and the accessibility of green space [10]; Dai analyzed differences among groups in terms of accessibility of green space [11]; Wendel et al. learned about urban residents' preferences regarding green space and their difficulties in transportation through sample surveys and semi-structured interviews, to find out the accessibility and utilization of urban green space in the midst of rapid urbanization [12]; Coutts et al. studied the impact of the accessibility of urban green space on health [13]. The major approaches to measuring the accessibility of green space include buffer analysis, minimum distance calculation, cost-distance calculation, the gravity model [14,15] and the Gaussian two-step floating catchment area method [16]. There have been few studies on the impact of the accessibility of urban central parks on housing prices using network analysis.
Researchers have argued that road-based network analysis better reflects the process by which residents access a park, providing a more accurate and objective analysis of park accessibility [17]. Therefore, this paper adopts network analysis based on data from web crawling and Fuzhou's urban parks to study the impact of the accessibility of urban parks on housing prices. By transforming the question into quantitative data, it provides a reference for future residential planning, which is important to people's well-being. Survey Region The area within the third ring of Fuzhou covers Cangshan District, Gulou District, Ji'an District and Taijiang District, making it the most populous area of the city, with abundant public facilities such as parks and the transportation network [18][19]. For this reason, it was selected as the survey region. Source of data The data were obtained from sources including Anjuke POI data, a park survey and the urban transportation network: (1) Information on the housing price, name of the community, and the longitude and latitude of the houses for sale within the third ring of Fuzhou was crawled from the Anjuke website, followed by correction of the data. First, erroneous records were excluded by statistical analysis; next, data on the region from 58.com, ganji.com and fang.com were crawled for comparison and used to supplement missing values. Finally, the data were imported into ArcGIS 10.5 for spatial correction against high-resolution remote sensing imagery and the Baidu online map, where wrong records were deleted and missing records were completed. (2) Data on a total of 55 parks were obtained from the Fuzhou Garden website, from which 22 parks covering more than 10 hm² were selected based on the Urban Green Space Classification Standard (CJJ/T 85-2017). Visitor flow was accurately recorded in the field with handheld GPS. (3) High-resolution remote sensing images from 2019 were obtained and imported into ArcGIS 10.5 for manual interpretation based on the overall urban plan of Fuzhou for 2011-2020, through which the road information was obtained. Meanwhile, the locations of the parks were imported to supplement the data. Construction of network analysis model based on database The park and road network databases are established according to the requirements of the ArcGIS Network Analyst module, consisting of the centres, links, nodes and impedances that reflect the influence of urban green space (the centre) along the traffic network under a certain resistance value (time, distance, cost, etc.). The "New Network Dataset" tool of ArcGIS was used to construct the spatial resistance model for network analysis, followed by the establishment of the service analysis layer, where the shortest route from the facility point to the event point along the road network is generated by setting the resistance value to road time (road length/passage speed) [20]. The park database includes the name, coverage, entrances and other elements of each park, with the entry points of the park serving as the facility points of the network. The road network database includes the route, name, classification, traffic speed and traffic time of each road. The road network lines are established with the centre line of the road as the base and the intersections of the breaklines as junction nodes, to simulate the time consumed by traffic at the crossroads of the road network.
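For readers who want to experiment outside ArcGIS, the same shortest-travel-time idea can be sketched with networkx: each road segment carries an impedance of length divided by travel speed, a fixed delay is added at junctions, and the nearest park entrance is found with a shortest-path query. All node names and distances below are hypothetical; the walking speed and junction delay mirror the settings described in the next paragraph.

```python
import networkx as nx

WALK_SPEED_M_PER_MIN = 5000 / 60   # 5 km/h walking speed
NODE_WAIT_MIN = 0.5                # waiting time added at each road junction

# Hypothetical road network: junctions plus a residence ("home") and two park
# entrances; the third value of each tuple is the road length in metres.
edges = [("home", "j1", 300), ("j1", "j2", 450), ("j2", "park_A", 200),
         ("j1", "j3", 900), ("j3", "park_B", 150)]

G = nx.Graph()
for u, v, length_m in edges:
    # impedance = walking time along the segment + waiting time at the junction
    G.add_edge(u, v, minutes=length_m / WALK_SPEED_M_PER_MIN + NODE_WAIT_MIN)

# Nearest-facility query: shortest travel time from the residence to any park entrance.
times = {p: nx.shortest_path_length(G, "home", p, weight="minutes")
         for p in ("park_A", "park_B")}
nearest = min(times, key=times.get)
print(nearest, round(times[nearest], 1), "min")
```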
For residents travelling mainly on foot, the walking speed is set at 5 km/h [21], and a waiting time of 0.5 min is set at each road node [22]. Calculation of the accessibility of parks Based on the nearest facility tool of ArcGIS 10.5 Network Analysis, the parks are set as the facility points and residential houses are taken as the event points to calculate the shortest path between park and residence, which is taken as the quantitative index of park accessibility. Correlation analysis between the accessibility of parks and housing prices SPSS software was used to analyse the correlation between the shortest path to parks and the housing price, in an effort to study the influence of park accessibility on housing price. Location-based housing price variation From Figure 1, it can be seen that the urban residences of Fuzhou are distributed in a circular pattern with the city centre as the core. The density of houses decreases from the first ring to the third ring and peaks on the east side of Xihu Park in Gulou District, probably because that district has a smaller area and a larger population compared with the other administrative districts. Gulou District is followed by Taijiang District, Ji'an District and Cangshan District in terms of population density, while Cangshan District holds the most land resources within the third ring of Fuzhou. However, schools such as Fujian Agriculture and Forestry University, Fujian Engineering College and Fujian Normal University are located in that district, all occupying vast areas of land. In addition, areas such as Feifeng Mountain and Gaogai Mountain are difficult to develop. All these factors contribute to the district's smaller population and the smaller scale of its residential distribution. The housing price within the third ring of Fuzhou averages RMB 25,873.60/m², ranging between RMB 10,009.38/m² and RMB 89,227.64/m². Analysis of the housing prices in ArcGIS based on inverse distance weighting revealed that the majority of the residential houses (more than half of the survey subjects) are priced between 18,000 and 28,000 RMB/m², while a smaller share is priced between 28,000 and 36,000 RMB/m². The housing price peaks at the city centre and decreases towards the outer rings. Gulou District, as the city centre, is the most economically developed district of Fuzhou, with the highest population and the best facilities. Among communities, the Jintai Community of Gulou District has the highest housing price, at RMB 89,227.64/m². From Figure 4, it can be seen that the residential houses within the third ring of Fuzhou are mainly graded as Very good and Good, with an average time consumption on the shortest path of 0.374 h, which is considered acceptable. The impact of green space accessibility on housing prices The residential housing price and the corresponding walking time via the shortest path were imported into SPSS for bivariate correlation analysis, as shown in Table 1, which reveals a Pearson correlation between the walking time and the housing price of -0.205; i.e., the shorter the walking time, the higher the housing price, and the two are significantly related.
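The bivariate correlation step can be reproduced outside SPSS in a few lines; the sketch below uses synthetic walking-time and price arrays purely to show the call, not the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the per-listing walking time (h) to the nearest park
# and the unit price (RMB/m^2); the real study imports these from the GIS output.
rng = np.random.default_rng(42)
walk_time_h = rng.uniform(0.05, 1.2, 500)
price = 26000 - 4000 * walk_time_h + rng.normal(0, 6000, 500)

r, p_value = stats.pearsonr(walk_time_h, price)
print(f"Pearson r = {r:.3f}, p = {p_value:.3g}")
# A negative r (the paper reports r = -0.205) indicates that shorter walking
# times are associated with higher housing prices.
```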
Conclusion and Suggestion The paper studied houses within the third ring of Fuzhou to find out the correlation between green space accessibility and housing prices, and concluded that: (1) The residential houses within the third ring of Fuzhou are distributed in a circular pattern, with density decreasing from the first ring to the third ring; (2) The housing price within the third ring of Fuzhou peaks at the city centre, higher than in the outer rings, mostly ranging between 18,000 and 28,000 RMB/m²; (3) Gulou District has the highest grade of park accessibility among houses within the third ring of Fuzhou, while Cangshan District is the poorest in this regard; (4) The residential houses within the third ring of Fuzhou are mainly graded as Very good and Good, with an overall high level of park accessibility; (5) There is a significant correlation between the walking time to the park and the housing price: the shorter the time consumption, the higher the housing price. Based on the above analysis, the paper proposes the following suggestions for future urban planning and the development of the real estate market: (1) Properly plan the entry and exit points of Gaogai Mountain Park, adding more entrances and exits to improve its accessibility; (2) Improve the urban road network and promote the construction of footpaths to shorten the distance between green space and residences. Based on the ArcGIS shortest path method, the paper proposes a practical approach to measuring the impact of park accessibility on housing prices, providing a reference for the development of the real estate market and the construction of an ecologically liveable city.
2,618
2020-01-01T00:00:00.000
[ "Geography", "Economics", "Environmental Science" ]
On the clique number of a strongly regular graph We determine new upper bounds for the clique numbers of strongly regular graphs in terms of their parameters. These bounds improve on the Delsarte bound for infinitely many feasible parameter tuples for strongly regular graphs, including infinitely many parameter tuples that correspond to Paley graphs. Introduction The clique number ω(Γ) of a graph Γ is defined to be the cardinality of a clique of maximum size in Γ. For a k-regular strongly regular graph with smallest eigenvalue s < 0, Delsarte [12, Section 3.3.2] proved that ω(Γ) ≤ ⌊1 − k/s⌋; we refer to this bound as the Delsarte bound. Therefore, since one can write s in terms of the parameters of Γ, one can determine the Delsarte bound knowing only the parameters (v, k, λ, µ) of Γ. In this paper we determine new upper bounds for the clique numbers of strongly regular graphs in terms of their parameters. Our bounds improve on the Delsarte bound infinitely often. Let q = p^k be a power of a prime p congruent to 1 mod 4. A Paley graph has vertex set equal to the finite field F_q, and two vertices a and b are adjacent if and only if a − b is a nonzero square. For a Paley graph Γ on q vertices with k even, Blokhuis [4] showed that ω(Γ) = √q; this corresponds to equality in the Delsarte bound. Bachoc et al. [1] recently considered the case when Γ is a Paley graph on q vertices with k odd and, for certain such q, showed that ω(Γ) ≤ ⌊√q − 1⌋. This corresponds to an improvement on the Delsarte bound for these Paley graphs. Here, working much more generally, given a strongly regular graph Γ with parameters (v, k, λ, µ), we provide inequalities in terms of the parameters of Γ that, when satisfied, guarantee that the clique number of Γ is strictly less than the Delsarte bound. We show that these inequalities are satisfied by infinitely many feasible parameter tuples for strongly regular graphs and, in particular, are satisfied by infinitely many parameter tuples that correspond to Paley graphs. Our inequalities are obtained using what we call the "clique adjacency bound" (see Section 4), a bound defined by the second author [17]. We also show that the clique adjacency bound is always at most the Delsarte bound when applied to strongly regular graphs. The paper is organised as follows. In Section 2 we state our main results and in Section 3 we state some standard identities that we will use in our proofs. Section 4 contains the proofs of our main results. In Section 5 we examine the strength of the clique adjacency bound and in Section 6 we provide an illustrative example comparing certain bounds for the clique number of an edge-regular graph that is not necessarily strongly regular. Finally, we give an appendix in which we describe our symbolic computations. Definitions and main results A non-empty k-regular graph on v vertices is called edge-regular if there exists a constant λ such that every pair of adjacent vertices has precisely λ common neighbours. The triple (v, k, λ) is called the parameter tuple of such a graph. A strongly regular graph Γ with parameter tuple (v, k, λ, µ) is defined to be a non-complete edge-regular graph with parameter tuple (v, k, λ) such that every pair of non-adjacent vertices has precisely µ common neighbours. We refer to the elements of the parameter tuple as the parameters of Γ. We call the parameter tuple of a strongly regular graph feasible if its elements satisfy certain nonnegativity and divisibility constraints given by Brouwer [6, VII.11.5].
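Since the Delsarte bound depends only on the parameters, it is easy to compute directly; the following standalone sketch (not code from the paper's appendix) uses the standard fact that the restricted eigenvalues r and s are the roots of x² − (λ − µ)x − (k − µ) = 0.

```python
import math

def delsarte_bound(v, k, lam, mu):
    """Delsarte bound floor(1 - k/s) from SRG parameters, using the fact that the
    restricted eigenvalues r, s are the roots of x^2 - (lam - mu)x - (k - mu) = 0."""
    disc = math.sqrt((lam - mu) ** 2 + 4 * (k - mu))
    s = ((lam - mu) - disc) / 2          # smallest eigenvalue, s < 0
    return math.floor(1 - k / s)

# Paley graph on q = 13 vertices has parameters (13, 6, 2, 3).
print(delsarte_bound(13, 6, 2, 3))   # -> 3, i.e. floor(sqrt(13))
```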
Let Γ be a strongly regular graph with parameters (v, k, λ, µ). It is well known that Γ has at most three distinct eigenvalues, and moreover, the eigenvalues can be written in terms of the parameters of Γ (see [14, Section 10.2]). In what follows we denote the eigenvalues of Γ as k > r ≥ s. Strongly regular graphs whose parameters satisfy k = (v − 1)/2, λ = (v − 5)/4, and µ = (v − 1)/4 are called type I or conference graphs. Strongly regular graphs all of whose eigenvalues are integers are called type II. Every strongly regular graph is either type I, type II, or both type I and type II (see Cameron and Van Lint [9, Chapter 2]). The fractional part of a real number a ∈ R is defined as frac(a) := a − ⌊a⌋. We are now ready to state our main results. Theorem 2.1. Let Γ be a type-I strongly regular graph with v vertices. Suppose that Let P denote the set of all primes p of the form p = 1 + 4g for some g ∈ N. Then the sequence (√p/2), p ∈ P, is uniformly distributed modulo 1 (see Balog [2, Theorem 1]). Therefore, since Paley graphs on p vertices exist for all p ∈ P, Theorem 2.1 is applicable to infinitely many strongly regular graphs. Note that the example in [17] with parameters (65, 32, 15, 16) is an example of a (potential) graph satisfying the hypothesis of Theorem 2.1. A graph is called co-connected if its complement is connected. We have the following: Theorem 2.4. Let Γ be a co-connected type-II strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r ≥ s. Suppose that Then ω(Γ) ≤ ⌊−k/s⌋. Proof. Follows from Theorem 4.1 together with Corollary 4.9 below. Remark 2.5. Currently Brouwer [7] lists the feasible parameter tuples for connected and co-connected strongly regular graphs on up to 1300 vertices. Of these, about 1/8 of the parameter tuples of type-II strongly regular graphs satisfy the hypothesis of Theorem 2.4. By the remark following Corollary 4.9, it follows that Theorem 2.4 can be applied to about 1/4 of the complementary pairs of type-II strongly regular graphs on Brouwer's list. Note that the example in [17] of a strongly regular graph with parameter tuple (144, 39, 6, 12) is an example of a graph satisfying the hypothesis of Theorem 2.4; in fact, in this case, the conclusion of Theorem 2.4 is satisfied with equality. The parameter tuple (88, 27, 6, 9) is the first parameter tuple in Brouwer's list to which we can apply Theorem 2.4 and whose corresponding graphs are not yet known to exist (or not exist). Parameters of strongly regular graphs Here we state some well-known properties of strongly regular graphs and their parameters. The first two propositions are standard (see Brouwer and Haemers [8, Chapter 9] or Cameron and Van Lint [9, Chapter 2]). Proposition 3.1. Let Γ be a strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r ≥ s. Then Proposition 3.2. Let Γ be a type-I strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r > s. Then The next proposition is a key observation. Let Γ be a strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r ≥ s. Proof. If Γ is type I then, by Proposition 3.2, we have k − 2µ = 0. If Γ is type II then, by Proposition 3.1, we have k/s − µ/s = −r and r is an integer. Next, the complement of a strongly regular graph Γ is also a strongly regular graph. This is again a standard result (see Cameron and Van Lint [9, Chapter 2]). Proposition 3.4. Let Γ be a connected and co-connected strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r > s.
Then the complement of Γ is strongly regular. Finally we state some straightforward bounds for the parameters of strongly regular graphs. The clique adjacency polynomial Now we define our main tool, the clique adjacency polynomial. Given an edge-regular graph Γ with parameters (v, k, λ), define the clique adjacency polynomial C_Γ(x, y). The utility of the clique adjacency polynomial follows from [16, Theorem 1.1] (see also [17, Theorem 3.1]), giving: Theorem 4.1. Let Γ be an edge-regular graph with parameters (v, k, λ). Suppose that Γ has a clique of size c ≥ 2. Then C_Γ(b, c) ≥ 0 for all integers b. As discussed in [16] and [17], Theorem 4.1 provides a way of bounding the clique number of an edge-regular graph using only its parameters. Indeed, by Theorem 4.1, for an edge-regular graph Γ and some integer c ≥ 2, if there exists an integer b such that C_Γ(b, c) < 0 then ω(Γ) ≤ c − 1. Hence we define the clique adjacency bound (CAB) to be the least integer c ≥ 2 such that C_Γ(b, c + 1) < 0 for some b ∈ Z; note that such a c always exists. We will show that, for a k-regular strongly regular graph Γ, the clique adjacency bound gives ω(Γ) ≤ ⌊1 − k/s⌋. That is, the clique adjacency bound is always at least as good as the Delsarte bound when applied to strongly regular graphs. This follows from Theorem 4.1 together with Theorem 4.2 below. More interestingly, we will also show that the clique adjacency bound does better than the Delsarte bound for infinitely many feasible parameter tuples for strongly regular graphs. In this section we consider the univariate polynomial C_Γ(f(t), g(t)) in the variable t, where f(t) and g(t) are linear polynomials in t. The main idea is to choose the linear polynomials f and g such that there exists t ∈ R such that C_Γ(f(t), g(t)) < 0, f(t) ∈ Z, and g(t) is an integer at least 2. We begin by stating one of the main results of this paper. Theorem 4.2. Let Γ be a strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r ≥ s. Then, Proof. Follows from Corollary 4.5 and Corollary 4.8 below. Observe that, together with Theorem 4.1, Theorem 4.2 shows that the clique adjacency bound always does as well as the Delsarte bound for strongly regular graphs. Now we can state our first polynomial identity, which shows that the clique adjacency polynomial is negative at a certain point. Let Γ be a connected strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r > s. Then Proof. The equality follows from direct calculation (see Appendix A), using the equalities in Proposition 3.1. The right-hand side is negative since s < 0 and r ≥ 0. Let Γ be a strongly regular graph with parameters (v, k, λ, µ) such that both µ/s and k/s are integers. Then by Lemma 4.3, together with Theorem 4.1, we recover the Delsarte bound, i.e., ω(Γ) ≤ ⌊1 − k/s⌋. It remains for us to deal with the situation when k/s and µ/s are not integers. In the remainder of this section, motivated by Lemma 4.3, we consider integral points (x, y) ∈ Z² close to (−µ/s, 2 − k/s) such that C_Γ(x, y) is negative. We deal with the type I and type II cases separately. 4.1. Type-I strongly regular graphs. Let Γ be a type-I strongly regular graph (or conference graph) with v vertices. By Proposition 3.2 we have −µ/s = r and −k/s = 2r. Therefore, we consider integral points (x, y) close to (r, 2 + 2r) at which to evaluate the clique adjacency polynomial. In view of Proposition 3.3, we evaluate C_Γ(x, y) at points of the form (r − t, a + 2r − 2t) for some a ∈ N, thinking of t as the fractional part of r.
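The explicit formula for C_Γ(x, y) is not reproduced in this excerpt. The sketch below assumes the form C_Γ(x, y) = x(x + 1)(v − y) − 2xy(k − y + 1) + y(y − 1)(λ − y + 2), which is consistent with the identities quoted later (for example, it gives C_Γ(x, λ + 2) = x(x + 1)(v − λ − 2) − 2x(λ + 2)(k − λ − 1)) but should be checked against [16, 17] before being relied upon. Given such a polynomial, the clique adjacency bound can be computed by scanning c upwards:

```python
import math

def cab(v, k, lam):
    """Clique adjacency bound for an edge-regular graph with parameters (v, k, lam):
    the least c >= 2 such that C(b, c + 1) < 0 for some integer b."""
    def C(x, y):
        # Assumed form of the clique adjacency polynomial; verify against [16, 17].
        return x * (x + 1) * (v - y) - 2 * x * y * (k - y + 1) + y * (y - 1) * (lam - y + 2)

    for c in range(2, v - 1):
        y = c + 1
        # C(x, y) is quadratic in x with leading coefficient v - y > 0, so if it is
        # negative at any integer it is negative at floor or ceil of its vertex.
        vertex = (2 * y * (k - y + 1) - (v - y)) / (2 * (v - y))
        if any(C(b, y) < 0 for b in (math.floor(vertex), math.ceil(vertex))):
            return c
    return v - 1

# Example: the (9, 4, 1, 2) strongly regular graph (only (v, k, lambda) are needed);
# Section 5 states that its clique adjacency bound equals lambda + 2 = 3.
print(cab(9, 4, 1))   # -> 3
```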
Lemma 4.4. Let Γ be a type-I strongly regular graph with v vertices and eigenvalues k > r > s. Then Proof. The equalities follow from direct calculation (see Appendix A), applying Proposition 3.1 and the definition of a type-I strongly regular graph. The right-hand side of Equation (2) is a cubic polynomial in the indeterminate t with positive leading coefficient. Furthermore, since for a type-I strongly regular graph we have s = −(√v + 1)/2, we observe that the smallest zero of the right-hand side of Equation (2) is equal to 3/4 + (√v − √(v + 5/4))/2. Hence C_Γ(r − t, 2 + 2r − 2t) is negative for t < 3/4 + (√v − √(v + 5/4))/2. We use this observation in the next result, which can be used with Theorem 4.1 to obtain the Delsarte bound for conference graphs. The next corollary follows in a similar fashion. 4.2. Type-II strongly regular graphs. Let Γ be a type-II strongly regular graph with parameters (v, k, λ, µ). Again, in view of Proposition 3.3, we evaluate C_Γ(x, y) at points of the form (−µ/s − t, a − k/s − t) for some a ∈ Z, thinking of t as the fractional part of −µ/s. Lemma 4.7. Let Γ be a strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r ≥ s. Then Moreover, if Γ is co-connected then these polynomials have positive leading coefficients. Proof. The equalities follow from direct calculation (see Appendix A), using the equalities in Proposition 3.1. By Proposition 3.5, if Γ is co-connected then the polynomials have positive leading coefficients. Let t = frac(−µ/s). Then, using Proposition 3.3 and Equation (3), we have Suppose first that Γ is co-connected. The right-hand side of Equation (3) is negative on the open interval (η, 1), where η = (2s − r)(r + 1)/(v − 2k + λ) is negative. Hence the corollary holds for Γ. On the other hand, for complete multipartite graphs we have t = 0, in which case the right-hand side of Equation (3) is negative. The next corollary follows similarly, using the fact that the right-hand side of Equation (4) Corollary 4.9. Let Γ be a co-connected type-II strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r ≥ s. Suppose that Then C_Γ(⌊−µ/s⌋, ⌊1 − k/s⌋) < 0. Remark 4.10. We remark that if a type-II strongly regular graph satisfies the hypothesis of Corollary 4.9 then its complement cannot also satisfy the hypothesis. Indeed, suppose that Γ satisfies the hypothesis of Corollary 4.9. Since frac(−k/s) > 0 we have that s ≠ −1 and hence Γ is connected. Then, using Proposition 3.4, we see that the complement of Γ also satisfies the hypothesis of Corollary 4.9 if 0 < frac((v − k − 1)/(r + 1)) < 1 − (s² + s)/µ. How sharp is the clique adjacency bound? In this section we show that the clique adjacency bound is sharp for strongly regular graphs in certain instances. We also comment on the sharpness of the clique adjacency bound for general strongly regular graphs. Theorem 5.1. Let Γ be a strongly regular graph with parameters (v, k, λ, µ) and eigenvalues k > r ≥ s. Suppose that λ + 1 ≤ −k/s. Then the clique adjacency bound is equal to λ + 2. Since s < −1, it follows that r + 1 > −k/s. Multiplying this inequality by −s gives −s(r + 1) > k. Since both s and r are integers, we have −s(r + 1) ≥ k + 1. Now by rearranging and substituting µ = k + rs, we obtain the inequality 1 + (µ + 1)/s ≥ 0 as required. Proof. Let c ∈ {2, . . . , λ + 2} and let b be an integer. If b ≤ 0, then from the definition of the clique adjacency polynomial C_Γ(x, y), we see that C_Γ(b, c) ≥ 0, so we now assume that b is positive.
A calculation (see Appendix A) shows that This quantity is nonnegative since b and λ + 2 − c are nonnegative integers, the product of two consecutive integers is nonnegative, and k − λ − 1 is also nonnegative by Proposition 3.5. Hence as required. Now we prove Theorem 5.1. Proof of Theorem 5.1. Firstly, if Γ is disconnected then Γ is the disjoint union of complete graphs and hence contains cliques of size λ + 2. Therefore the clique adjacency bound is at least λ + 2. Now we assume that Γ is connected. By Lemma 5.3, the clique adjacency bound is less than λ + 2 only if there exists some integer b such that C_Γ(b, λ + 2) is less than zero. To ease notation set f(x) := C_Γ(x, λ + 2). Hence it suffices to show that there does not exist any integer b such that f(b) < 0. Observe that the polynomial f(x) is a quadratic polynomial in the variable x. Furthermore, the leading coefficient of f(x) is v − λ − 2 > 0, and f(0) = 0. Let ξ be the other zero of f(x). Now, f(x) is negative if and only if x is between 0 and ξ. Hence, if f(−1) and f(1) are both nonnegative then there are no integers b such that f(b) < 0. As in the proof of the previous result, f(−1) is nonnegative. Therefore Lemma 5.2 completes the proof for type-II strongly regular graphs. The inequality λ + 1 ≤ −k/s only holds for type-I strongly regular graphs on 5 vertices or 9 vertices (where we have equality). One can explicitly compute the clique adjacency bound for these two cases: the unique (5, 2, 0, 1)-strongly regular graph and the unique (9, 4, 1, 2)-strongly regular graph. For each of these graphs the clique adjacency bound is equal to λ + 2. Now we give a couple of remarks about Theorem 5.1. Remark 5.4. For strongly regular graphs with λ ≤ 1, it is easy to see that the clique number is λ + 2. By Theorem 5.1, the clique adjacency bound is equal to the clique number for such graphs. Let Γ be a strongly regular graph with parameters (v, k, λ, µ). By Proposition 3.1, we see that k = −s(r + 1) − r + λ. Therefore, for strongly regular graphs with λ = 2 and r ≥ 2, we have λ + 1 = 3 ≤ −k/s, and so Theorem 5.1 applies to such graphs. Remark 5.5. We conjecture that if the clique adjacency bound is less than −k/s then λ + 1 ≤ −k/s. We have verified this conjecture for all feasible parameter tuples for strongly regular graphs on up to 1300 vertices, making use of Brouwer's website [7]. In Table 1, we list all the feasible parameter tuples for strongly regular graphs on at most 150 vertices to which we can apply either Theorem 2.1 or Theorem 2.4. In other words, Table 1 displays the feasible parameters for strongly regular graphs on at most 150 vertices for which the clique adjacency bound is strictly less than the Delsarte bound. In the column labelled 'Exists', if there exists a strongly regular graph with the appropriate parameters then we put '+', or '!' if the graph is known to be unique; otherwise, if the existence is unknown, we put '?'. In the final column of Table 1, we put 'Y' (resp. 'N') if there exists (resp. does not exist) a strongly regular graph with the corresponding parameters that has clique number equal to the clique adjacency bound; otherwise we put '?' if such existence is unknown. We refer to Brouwer's website [7] for details on the existence of strongly regular graphs with given parameters. For the parameter tuples in Table 1, the Delsarte bound is equal to the clique adjacency bound plus 1.
As an example of a parameter tuple for which the clique adjacency bound differs from the Delsarte bound by 2, we have (378, 52, 1, 8), for which there exists a corresponding graph [11]. For this graph the Delsarte bound is 5, but the clique adjacency bound is 3. Feasible parameters for which there does not exist a corresponding strongly regular graph whose clique number is equal to the clique adjacency bound include (16, 10, 6, 6) and (27, 16, 10, 8). However, we ask the following question. Do there exist strongly regular graphs with parameters (v, k, λ, µ), with k < v/2, such that every strongly regular graph having those parameters has clique number less than the clique adjacency bound? Hoffman bound vs Delsarte bound vs clique adjacency bound Let Γ be a connected non-complete regular graph with v vertices, valency k, and second largest eigenvalue r < k. Then the complement of Γ is a regular graph with valency v − k − 1 and least eigenvalue −r − 1 < 0. We may obtain a bound for the clique number of Γ by applying the Hoffman bound (also called the ratio bound) [13, Theorem 2.4.1] to the size of a largest independent set (coclique) of the complement of Γ. This gives an upper bound for ω(Γ). If Γ is strongly regular, then it is known (and follows from the relations of Proposition 3.1) that the Delsarte bound for ω(Γ) is the same as that given by the Hoffman bound above. Now the Delsarte bound applies not only to strongly regular graphs, but also to the graphs {Γ_1, . . . , Γ_d} of the relations (other than equality) of any d-class symmetric association scheme (see [13, Corollary 3.7.2]). Thus, if Γ is such a graph, having valency k and least eigenvalue s, then ω(Γ) ≤ ⌊1 − k/s⌋. Here is an interesting illustrative example. Let ∆ be the edge graph (or line graph) of the incidence graph of the projective plane of order 2. Then ∆ is the unique distance-regular graph with intersection array {4, 2, 2; 1, 1, 2}. Now let ∆_3 be the graph on the vertices of ∆, with two vertices joined by an edge if and only if they have distance 3 in ∆. Then ∆_3 is the graph of a relation in the usual symmetric association scheme associated with a distance-regular graph, where two vertices are in relation i precisely when they are at distance i in the distance-regular graph. The graph ∆_3 has diameter 2 and is edge-regular (but not strongly regular) with parameters (v, k, λ) = (21, 8, 3). The clique adjacency bound for ∆_3 is 4. The least eigenvalue of ∆_3 is −√8, and the Delsarte bound gives 3, and indeed ω(∆_3) = 3. However, the complement of ∆_3 has least eigenvalue −1 − √8, and the Hoffman bound for independent sets in the complement of ∆_3 gives 5. Thus, for ∆_3, the Delsarte bound is better than the clique adjacency bound, which is better than that obtained from the Hoffman bound. However, the three bounds are for different classes of graphs. For example, there may well be an edge-regular graph with parameters (21, 8, 3) and clique number 4. It would be interesting to find one. We conjecture that if Γ is any connected non-complete edge-regular graph, then the clique adjacency bound for ω(Γ) is at most the bound obtained from the Hoffman bound for the complement of Γ. Appendix A. Algebraic computational verification of identities In this appendix we present the algebraic computations in Maple [3] that were used to verify certain identities employed in this paper. These identities were also checked independently using Magma [5]. We start up Maple (version 18) and assign to C the clique adjacency polynomial.
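Independently of the Maple session, the bound values quoted for ∆_3 above can be checked numerically. The sketch below takes the least eigenvalues from the text rather than recomputing them, and uses the standard ratio-bound formula v(−s)/(k − s) for independent sets, which is assumed here since the displayed formula is not reproduced in this excerpt.

```python
import math

v, k = 21, 8
s = -math.sqrt(8)                     # least eigenvalue of Delta_3 (from the text)
delsarte = math.floor(1 - k / s)      # -> 3, as stated

# Complement of Delta_3: valency v - k - 1 = 12, least eigenvalue -1 - sqrt(8);
# ratio bound on its maximum independent set gives a clique bound for Delta_3.
k_bar, s_bar = v - k - 1, -1 - math.sqrt(8)
hoffman = math.floor(v * (-s_bar) / (k_bar - s_bar))   # -> 5, as stated

print(delsarte, hoffman)
```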
5,479.6
2016-04-28T00:00:00.000
[ "Mathematics" ]
The proto-oncogene c-myc acts through the cyclin-dependent kinase (Cdk) inhibitor p27(Kip1) to facilitate the activation of Cdk4/6 and early G(1) phase progression. Progression through the early G(1) phase of the cell cycle requires mitogenic stimulation, which ultimately leads to the activation of cyclin-dependent kinases 4 and 6 (Cdk4/6). Cdk4/6 activity is promoted by D-type cyclins and opposed by Cdk inhibitor proteins. Loss of c-myc proto-oncogene function results in a defect in the activation of Cdk4/6. c-myc(-/-) cells express elevated levels of the Cdk inhibitor p27(Kip1) and reduced levels of Cdk7, the catalytic subunit of Cdk-activating kinase. We show here that in normal (c-myc(+/+)) cells, the majority of cyclin D-Cdk4/6 complexes are assembled with p27 and remain inactive during cell cycle progression; their function is presumably to sequester p27 from Cdk2 complexes. A small fraction of Cdk4/6 protein was found in lower molecular mass catalytically active complexes. Conditional overexpression of p27 in c-myc(+/+) cells caused inhibition of Cdk4/6 activity and elicited defects in G(0)-to-S phase progression very similar to those seen in c-myc(-/-) cells. Overexpression of cyclin D1 in c-myc(-/-) cells rescued the defect in Cdk4/6 activity, indicating that the limiting factor is the number of cyclin D-Cdk4/6 complexes. Cdk-activating kinase did not rescue Cdk4/6 activity. We propose that the defect in Cdk4/6 activity in c-myc(-/-) cells is caused by the elevated levels of p27, which convert the low abundance activable cyclin D-Cdk4/6 complexes into unactivable complexes containing higher stoichiometries of p27. These observations establish p27 as a physiologically relevant regulator of cyclin D-Cdk4/6 activity as well as mechanistically a target of c-Myc action and provide a model by which c-Myc influences the early-to-mid G(1) phase transition. D-type cyclins are the first cyclins to be expressed during the early-to-mid G 1 phase of the cell cycle (1). Cyclin D-Cdk4/6 1 complexes are believed to mediate a key signaling connection between the extracellular environment and the intrinsic cell cycle clock. Inappropriate activation of cyclin D-Cdk4/6 complexes and the consequent hyperphosphorylation of the retinoblastoma protein (Rb) are common events in a variety of human tumors (2,3). The activation of cyclin E-Cdk2 complexes has been implicated as the major function of cyclin D-Cdk4/6 complexes (4). The activation process of cyclin E-Cdk2 complexes by cyclin D-Cdk4/6 has been shown to involve both catalytic and stoichiometric mechanisms, viz. the phosphorylation of Rb and the sequestration of the Cdk inhibitors p21 Cip1 and p27 Kip1 , respectively (1). In normal cells, however, both cyclin D-Cdk4/6 and cyclin E-Cdk2 complexes appear to be required in sequential fashion to elicit the hyperphosphorylation and inactivation of Rb (5). The expression of the c-myc proto-oncogene is closely correlated with proliferation, and removal of growth factors at any point in the cell cycle results in its prompt down-regulation (6,7). c-myc is not expressed in quiescent cells, but is rapidly induced by growth factors (8,9); and ectopic expression in quiescent cells can elicit entry into S phase (10,11). Overexpression of c-Myc in growing cells leads to reduced growth factor requirements and a shortened G 1 phase (12), whereas reduced expression causes lengthening of the cell cycle (13). 
c-Myc has been likened to a cell-autonomous rheostat, with engineered changes in its expression resulting in incremental changes in proliferation largely independent of the outside environment (14). A striking parallel between the cyclin D and c-Myc pathways is that both act as growth factor sensors by channeling environmental cues to drive the cell cycle engine. However, the manner in which they may be mechanistically connected is not understood. We used gene targeting to eliminate c-myc expression in an immortalized rat fibroblast cell line (15). The c-myc Ϫ/Ϫ cells are viable, but display a significant lengthening of both G 1 and G 2 , resulting in a 3-fold reduced proliferation rate. Analysis of key cell cycle regulatory components showed that the absence of c-Myc coordinately reduced the activity of all cyclin-Cdk complexes (16). The expression of the p27 protein was elevated 2-3-fold, and the expression of Cdk7 was reduced by a similar factor. During entry of quiescent cells into the cell cycle, the earliest and largest defect was a Ͼ10-fold reduction of cyclin D-Cdk4/6 activity, which resulted in a significant delay in the phosphorylation of Rb. Although the expression of Cdk4 is reduced in c-myc Ϫ/Ϫ cells (17), the defect in Cdk4/6 activity during the G 0 -to-S phase transition is significantly greater (16). In this study, we report an analysis of the role of p27 and Cdk7 in the regulation of Cdk4/6 activity in the presence and absence of c-Myc. MATERIALS AND METHODS Cell Lines and Culture Conditions-TGR-1 is a subclone of the Rat-1 cell line, and HO15.19 is a c-myc-null derivative constructed by sequen-tial gene targeting (15). HO15.19 derivatives expressing ectopic cyclin D1 were constructed by retrovirus vector transduction of full-length murine or human cyclin D1 cDNA (16). Rat1p27 cells express p27 under the control of the tTA (18) and were obtained from Bruno Amati (19). Cultures were grown in Dulbecco's modified Eagle's medium supplemented with 10% calf serum at 37°C in an atmosphere of 5% CO 2 . Rat1p27 cells were grown in the presence of 2 g/ml tetracycline to block the expression of p27. Great care was taken that cultures were cycling asynchronously and were in a rapid and exponential phase of growth. Briefly, cells were always split at subconfluent densities (Ͻ50%) and at relatively low dilution (1:10 for c-myc ϩ/ϩ and 1:4 for c-myc Ϫ/Ϫ cells). Cultures can thus be maintained continuously at densities of 10 -50% confluence (to avoid any contact inhibition), and the relatively frequent passaging (every 3-4 days) and media changes maintain a rapid growth rate. This regimen was followed for a minimum of two passages before cells were harvested for biochemical experiments. In G 0 synchronization experiments, Rat1p27 cells were trypsinized and replated in the presence of 2 g/ml tetracycline (p27-OFF condition) or 0 g/ml tetracycline (p27-ON condition) 12-16 h prior to serum starvation (0.25% calf serum), which was initiated in 95% confluent cultures and maintained for 48 h. To induce cell cycle reentry, cells were trypsinized and replated at 50% confluence in the presence of 10% calf serum. Tetracycline concentrations were kept constant throughout the starvation and restimulation periods. Flow cytometry was performed as indicated (15). Labeling of cells with CFSE was performed as described (20,21). Cells were labeled in suspension for brief periods of time (5 min) and then replated under standard exponential phase culture conditions. 
Cells were cultured for a minimum of one cell cycle (24 h for c-myc ϩ/ϩ and 48 h for c-myc Ϫ/Ϫ cells) to allow the clearing of unincorporated dye before the dye dilution experiments were commenced. Immunoprecipitation Kinase Assays-Kinase assays were performed as described (16,23). Briefly, cultures at the indicated time points were harvested with trypsin; washed with ice-cold Dulbecco's phosphatebuffered saline; and lysed for 2 h at 4°C in buffer containing 50 mM HEPES (pH 8), 150 mM NaCl, 2.5 mM EGTA, 1 mM EDTA, 1 mM dithiothreitol, 10% glycerol, 0.1% Tween 20, and protease and phosphatase inhibitors. Protein concentrations were determined with the Bio-Rad protein assay. Cyclin D1 was immunoprecipitated from 500 g of extract with 1 g of anti-cyclin D1 antibody and 20 l of Gammabind G-Sepharose beads (Amersham Biosciences). Cdk2 was immunoprecipitated from 200 g of extract with 1 g of anti-Cdk2 antibody and 20 l of protein A-agarose beads (Sigma). 1 g of glutathione S-transferase-Rb (24) and 2 g of histone H1 (Roche Molecular Biochemicals) were used as substrates to assay Cdk4 and Cdk2 activities, respectively. Kinase reactions were displayed by SDS-PAGE and analyzed with a PhosphorImager. CAK activation assays of Cdk4 and Cdk2 complexes were performed as described (16). Size Exclusion Chromatography-TGR-1 or HO15.19 cultures in exponential growth were lysed as described for kinase assays. 1 mg of total protein (300 l of extract) was chromatographed on a Superdex 200 column (24-ml bed volume) using a fast protein liquid chromatography system (Amersham Biosciences) at a flow rate of 0.5 ml/min. The column was run in lysis buffer and calibrated with gel filtration standards (Bio-Rad). 500-l fractions were collected; 80 l of each fraction was analyzed by immunoblotting, and the remainder was immunoprecipitated and assayed for Cdk kinase activity as described above. RESULTS The Majority of Cdk4 Is Found in Inactive Complexes in Both c-myc ϩ/ϩ and c-myc Ϫ/Ϫ Cells-The composition and activity of cyclin D1-Cdk4 complexes were examined in growing c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cells by size exclusion chromatography. Both cultures were cycling asynchronously in a rapid and exponential phase of growth (see "Materials and Methods"). Column fractions were immunoprecipitated with anti-cyclin D1 antibody; assayed for Cdk4 kinase activity; and then immunoblotted with antibody to cyclin D1, Cdk4, or p27 (Fig. 1). In both c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cells, the Cdk4 and cyclin D1 proteins coeluted as a broad peak between 50 and 200 kDa. The p27 protein eluted as a sharper peak between 100 and 200 kDa. Cdk4 activity peaked in two fractions between 70 and 100 kDa and was reduced ϳ3-fold in c-myc Ϫ/Ϫ cells. The elution profile of Cdk6 was the same as that of Cdk4 (data not shown). The Rat-1 cells under study here express very low levels of cyclins D2 and D3 (16), which precluded their analysis. Densitometric analysis of the immunoblots revealed that in c-myc ϩ/ϩ cells, ϳ80% of the Cdk4 protein was found in 125-200-kDa complexes that were catalytically inactive and comigrated with the peak of the p27 protein. p27 thus appears to be capable of both binding and inhibiting the activity of cyclin D1-Cdk4 complexes. Only a relatively small fraction of the Cdk4 protein (20%) was found in lower molecular mass (70 -100 kDa) catalytically active complexes that migrated adjacent to the major peak of the Cdk4, cyclin D1, and p27 proteins. 
Although the resolution of the columns did not allow us to determine whether the lower molecular mass catalytically active complexes were free of p27 or contained low stoichiometries of p27, several reports in the literature indicate that both p27 and p21 promote the assembly of cyclin D-Cdk4/6 complexes at low stoichiometries without inhibiting the Rb kinase activity, but inhibit the activity at higher stoichiometries (25)(26)(27). The c-myc Ϫ/Ϫ elution profile contained clearly elevated levels of p27, and the peak was broader. The levels of both the cyclin D1 and Cdk4 proteins were also slightly elevated in c-myc Ϫ/Ϫ cells despite the fact that the mRNAs are downregulated (16,17), suggesting that p27 stabilizes cyclin D1-Cdk4 complexes. This observation is consistent with previous reports that the half-life of the cyclin D1 protein is increased in both p27 and p21 complexes (27)(28)(29). A recent report indicates that c-Myc may affect the frequency of productive cell cycles (30). This raises the possibility that the cultures under examination here may be mixtures of cycling and non-cycling cells and that the observed changes in Cdk4 activity may be caused by variable fractions of cycling cells. To address this issue, we performed a careful analysis of proliferation rates (Fig. 1B); the resultant growth curves show that both cultures were kinetically in exponential phase. Only if a constant fraction of cells were leaving the cycle at each division would the bulk culture still give the appearance of exponential kinetics. To further examine whether the cultures were composed of discrete cohorts of cycling and non-cycling cells, we monitored the dilution of the vital dye CFSE for several days under our standard exponential phase growth conditions. CFSE is a fluorescent dye that penetrates cell membranes and is metabolized and trapped within cells. The dye is evenly distributed to daughter cells, so fluorescence intensity decreases by half with each cell division. This method has been widely used in immunology (31) and neurobiology (32) to track cells both in vitro and in vivo for up to 10 generations. Dye dilution was found to be completely uniform in both c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cell lines (Fig. 1C). In this experiment, cohorts of non-cycling (or slowly cycling) cells would be visualized as discrete peaks (or shoulders) at higher fluorescence intensity values. However, the peaks were found to be symmetrical and of the same width in both cultures at all time points. Furthermore, the rate of dye dilution (decrease in fluorescence intensity as a function of time) was completely consistent with the doubling times measured by standard growth curves. We therefore conclude that under our conditions, both c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cultures are uniformly composed of continuously cycling cells. Because the majority of assembled cyclin D1-Cdk4 complexes in normal (c-myc ϩ/ϩ ) cells appeared to be catalytically inactive, we examined the assembly and activation process during entry into S phase. Quiescent cells were induced to enter the cell cycle; and samples were collected at 2-h intervals, immunoprecipitated with anti-cyclin D1 antibody, assayed for Rb kinase activity, and then immunoblotted with antibodies to the known components of the complexes (Fig. 2). The cyclin D1, Cdk4, and p27 proteins appeared abruptly and concomitantly in the immunoprecipitates at the 6-h time point. This time coincides with the induction of cyclin D1 mRNA expression (16). 
The proliferating cell nuclear antigen and Cdk6 proteins appeared in the cyclin D1 immunoprecipitates with kinetics that followed closely those of Cdk4 and p27 (data not shown). In contrast, the appearance of Rb kinase activity was significantly delayed and was not fully induced until the 12-h time point in c-myc ϩ/ϩ cells ( Fig. 2A, left panel). Examination of c-myc Ϫ/Ϫ cells revealed a very similar profile, except that the abundance of all the components was somewhat increased and that the Rb kinase activity was greatly reduced (Fig. 2B, left panel). A densitometric quantification of the immunoblots of cyclin D1, Cdk4, and p27 revealed that all proteins were present in the immunoprecipitates at low basal levels in quiescent cells and remained relatively constant at the 2-and 4-h time points (Fig. FIG. 1. Composition and activity of cyclin D-Cdk4 complexes in cycling cells. A, gel filtration profiles. Extracts of exponentially growing c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cells were chromatographed on a Superdex 200 column. Cultures were kept continuously in exponential phase growth for a minimum of six doublings prior to harvest. Individual fractions were immunoprecipitated with anti-cyclin D1 antibody and either immunoblotted as indicated or assayed for Rb kinase activity ( 32 P-Rb). The experiment was repeated three times with consistent results. CycD1, cyclin D1. B, proliferation profiles of exponentially cycling cultures. Growth curves were generated on cultures maintained under the conditions described for A. Curve fits of the experimental data points yielded exponential functions with R 2 values of 0.997 and 0.975 for c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cells, respectively. Doubling times calculated from these functions were 20.5 and 52 h for c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cells, respectively. C, CFSE dye dilution profiles of exponentially cycling cultures. Cultures maintained in exponential phase under the conditions described for A were labeled with CFSE and replated at 20 -30% confluent densities. Dye dilution was monitored on successive days (d) using flow cytometry. The mean intensity values of the peaks (in arbitrary units) were as follows: for c-myc ϩ/ϩ cells, 22 (day 1) and 3 (day 3); and for c-myc Ϫ/Ϫ cells, 21 (day 2), 14 (day 3), and 9 (day 4). The calculated dye dilution half-times for these values are 16 h (c-myc ϩ/ϩ cells) and 40 h (c-myc Ϫ/Ϫ cells). p27 Acts Downstream of Myc to Inhibit Cdk4/6 Activity 2, A and B, right panels). The maximum induction ratios were as follows: cyclin D1, 7-fold in c-myc ϩ/ϩ cells and 11-fold in c-myc Ϫ/Ϫ cells; Cdk4, 6-fold in c-myc ϩ/ϩ cells and 9-fold in c-myc Ϫ/Ϫ cells; and p27, 19-fold in c-myc ϩ/ϩ cells and 48-fold in c-myc Ϫ/Ϫ cells. Gel filtration chromatography of extracts collected at the time of peak activity (12-14 h) revealed profiles very similar to those shown in Fig. 1 (data not shown). c-myc ϩ/ϩ and c-myc Ϫ/Ϫ cells express equivalent amounts of the Cdk inhibitors p15 INK4b and p16 INK4a (19), and neither express detectable levels of p18 INK4c and p19 INK4d (data not shown). These results are consistent with the hypothesis that the majority of the newly synthesized cyclin D1 protein is rapidly assembled with Cdk4/6 and multiple molecules of p27. Of particular interest is the observation that complexes are apparently fully assembled at early times, but not activated until much later. 
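As a point of reference, the doubling times and dye-dilution half-times quoted in the legend of Fig. 1 follow from simple exponential arithmetic; the snippet below redoes that calculation from the values quoted in the text (it does not use the raw growth-curve or flow-cytometry data).

```python
import numpy as np

def halving_or_doubling_time(value_start, value_end, hours_elapsed):
    """Time for an exponentially changing quantity to change by a factor of 2."""
    n_factors_of_two = abs(np.log2(value_end / value_start))
    return hours_elapsed / n_factors_of_two

# CFSE mean intensities quoted above: c-myc+/+ 22 (day 1) -> 3 (day 3);
# c-myc-/- 21 (day 2) -> 9 (day 4). Each interval spans 48 h.
print(round(halving_or_doubling_time(22, 3, 48)))   # ~17 h, close to the 16 h quoted
print(round(halving_or_doubling_time(21, 9, 48)))   # ~39 h, close to the 40 h quoted
```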
The fact that the complexes are eventually activated without significant changes in overall stoichiometry further corroborates the interpretation that only a small fraction of total complexes become catalytically active.

Conditional Overexpression of p27 Results in a Phenotype Resembling Loss of c-Myc-The observation that the majority of cyclin D1-Cdk4/6 complexes appear to be bound by multiple molecules of p27 and are not activated even in c-myc +/+ cells predicts that increasing the p27 pool will inhibit the remaining active complexes. Transient overexpression of p27 has been shown to cause G1 arrest (33). A stable Rat-1-derived cell line, Rat1p27 (19), in which p27 expression is controlled by the tTA tetracycline-controlled transactivator (18), proliferates normally in the presence of 2 µg/ml tetracycline. Withdrawal of tetracycline, which induces p27 expression, reduced proliferation ~2-fold (data not shown). To analyze the consequences of elevated p27 expression during progression from G0 to S phase, it was necessary to prepare quiescent cells with increased levels of p27. This was accomplished by removing tetracycline from the medium 12-16 h prior to as well as during a standard 48-h serum deprivation period. Cells thus treated were quiescent and displayed maximum p27 induction. Cell cycle re-entry was elicited by serum stimulation in the continued absence of tetracycline. Control cultures were treated identically, but were supplemented with 2 µg/ml tetracycline throughout the regimen to keep the p27 transgene inactive. Analysis of cell cycle progression by flow cytometry (Fig. 3, A and B) showed normal S phase entry in the presence of tetracycline (p27-OFF) at 12 h after serum stimulation, the same as in parental Rat-1 cells. In the absence of tetracycline (p27-ON), the S phase peaks were significantly diminished as well as delayed to 16-18 h. This profile was similar to that seen in c-myc −/− cells, which began to enter S phase at 20-22 h. As previously reported (15), bromodeoxyuridine labeling analyses showed that the entire cell population was delayed in S phase entry and that entry was spread over a longer time period, rather than a portion of the culture failing to enter the cell cycle. Immunoblot analysis of Rat1p27 cells showed clearly elevated levels of the p27 protein in quiescent cells in the absence of tetracycline (p27-ON), but not in its presence (p27-OFF) (Fig. 3C). The elevated levels of p27 under the p27-ON condition persisted for >24 h after serum stimulation, but were reduced at later times, which may account for the leakiness of the block and continued proliferation in the absence of tetracycline, albeit at reduced rates. In agreement with the previous results, the levels of cyclin D1 were also noticeably increased in the presence of elevated p27. Given that increased p27 expression impeded S phase entry, we investigated the extent to which the molecular landmarks of this transition resembled those seen in c-myc −/− cells. The absence of c-Myc during the G0-to-S phase transition results in significantly reduced (>10-fold) Cdk4/6 activity, delayed phosphorylation of Rb, delayed and dampened activation of Cdk2, and delayed induction of cyclin A (16). Under the p27-OFF condition, the onset of Rb phosphorylation was first evident at 4 h, whereas under the p27-ON condition, Rb phosphorylation remained minimal even at the 12-h time point (Fig. 3C). At 24 h after serum stimulation, Rb phosphorylation was still appreciably defective.
In comparison, Rb phosphorylation is first detectable at 8 h in c-myc −/− cells and at 6 h in parental c-myc +/+ cells (16). Rb phosphorylation thus appears to be even more defective in the presence of excess p27 than in the absence of c-Myc.

The composition and activity of Cdk4 complexes were examined during the G0-to-S phase transition under the p27-OFF and p27-ON conditions and compared with the total expression of the constituent proteins. Immunoblot analysis (Fig. 4A) showed that total cyclin D1 levels were highest early in the transition (6 h) and declined at later times (18 and 30 h) under the p27-OFF condition; in the p27-ON cells, cyclin D1 levels were elevated throughout and peaked at much later times (18 h). Cdk4 and Cdk2 levels were constant throughout the time course under both conditions. Cdk2 showed a shift to higher mobility indicative of CAK phosphorylation at 18 h in p27-OFF cells, but not until 30 h in p27-ON cells. This result is consistent with observations that p27 can antagonize the phosphorylation of Cdk complexes by CAK (34, 35). Immunoprecipitation with anti-cyclin D1 antibody (Fig. 4B) from extracts of p27-OFF cells showed the expected induction of cyclin D1 early in the transition (6 h), and both the Cdk4 and p27 proteins were efficiently co-immunoprecipitated at this time. Cdk4 activity was strongly induced at later times. Under the p27-ON condition, cyclin D1-Cdk4 complexes were more abundant, were present at high levels at much later times, and at all times contained high levels of the p27 protein. The activity of the complexes was low throughout, although a weak induction was evident at 18 h. The defect in Cdk4 activation in the p27-ON cells was ~15-fold in this experiment.

Cdk4 and Cdk2 activities and the induction of cyclins A and B1 were monitored in a separate experiment (Fig. 4C). In p27-OFF cells, strong induction of both Cdk4 and Cdk2 activities was detected at 16 h, whereas neither activity was apparent in p27-ON cells until 24 h. The defect in activation in p27-ON cells was 5-7-fold for both Cdk4 and Cdk2; however, Cdk4 activity was barely elevated above the background level in p27-ON cells, whereas Cdk2 activity was substantially induced. Thus, the defect in activation is more pronounced for Cdk4 than for Cdk2. Both cyclins A and B1 were detected at 16 h in p27-OFF cells, but not until 24 h in p27-ON cells. In summary, elevated levels of p27 cause a significant delay in the G0-to-S phase transition that is characterized by a strong defect in Cdk4 activation and Rb phosphorylation and a delay in the activation of Cdk2 and the expression of cyclins A and B1. These molecular phenotypes are very similar to those caused by the absence of c-Myc (16).

Cyclin D1 (but Not Cdk7) Rescues Cdk4 Activity in c-myc −/− Cells-The hypothesis that increased p27 expression contributes to the Cdk4 defect seen in c-myc −/− cells by binding to the small fraction of potentially activable cyclin D1-Cdk4/6 complexes and converting them to unactivable complexes containing higher stoichiometries of p27 predicts that increasing the pool of cyclin D will rescue Cdk4 activity by allowing the assembly of additional complexes. To test this prediction, we constructed numerous clonal cell lines ectopically expressing a murine or human cyclin D1 transgene (16).
All cell lines were screened by immunoblotting for expression of the exogenous cyclin D1 protein, and clones with a moderate level of overexpression were chosen. Kinase assays of two representative clones showed that the increase in cyclin D1 expression completely rescued Cdk4 activity, but did not rescue Cdk2 activity (Fig. 5A). Cdk7 is expressed at reduced levels in c-myc −/− cells (16). Because cyclin D1-Cdk4/6 complexes in c-myc −/− cells are assembled at essentially normal levels in early G1, but subsequently fail to be activated, we investigated whether phosphorylation by CAK may be limiting. Cyclin D1-Cdk4/6 complexes were immunoprecipitated from c-myc +/+ and c-myc −/− cells at successive times during the G0-to-S phase transition, incubated with active recombinant CAK, and subsequently assayed for Rb kinase activity (Fig. 5B). CAK did not increase the activity of cyclin D1-Cdk4/6 complexes at any time point in either c-myc +/+ or c-myc −/− cells. The CAK preparation was catalytically active because it strongly activated purified recombinant cyclin A-Cdk2 complexes under the same assay conditions. CAK also activates native Cdk2 complexes immunoprecipitated from Rat-1 cells (16). We also ectopically expressed Cdk7 in c-myc −/− cells using the same strategy as that used for cyclin D1. Stable clonal cell lines were isolated, and restoration of Cdk7 expression levels to those seen in c-myc +/+ cells was demonstrated by immunoblotting. However, in this case, neither the growth rate nor Rb phosphorylation was rescued (data not shown).

Figure legend (partial): … D1. B, extracts were immunoprecipitated with anti-cyclin D1 antibody and either immunoblotted as indicated or assayed for Rb kinase activity (32P-Rb). C, Cdk4 and Cdk2 activities were assayed in cyclin D1 immunoprecipitates (32P-Rb) and Cdk2 immunoprecipitates (32P-H1), respectively. The expression of p27, cyclin A (CycA), and cyclin B1 (CycB1) was determined by immunoblotting.

DISCUSSION
During the entry of quiescent cells into the cell cycle, a transition that is acutely dependent on strong and sustained mitogenic signaling, the earliest and largest defect in c-myc −/− cells is a >10-fold reduction of cyclin D-Cdk4/6 activity. The magnitude of the activity defect cannot be explained by the relatively modest effects on cyclin D1 and Cdk4 expression (16, 17). In fact, we show here that although c-myc-null cells assembled slightly more cyclin D1-Cdk4/6 complexes than normal cells, the complexes remained largely inactive. Several studies have indicated that members of the Cip/Kip family of Cdk inhibitors are required for the assembly of cyclin D-Cdk4/6 complexes (25-29, 36-38). The picture emerging from in vitro studies is that both p27 and p21 promote the assembly of the complexes at low (1:1) stoichiometries without inhibiting the Rb kinase activity, but inhibit the activity at higher stoichiometries (25-27). p27 can also interfere with the activation of cyclin D-Cdk4/6 complexes by CAK. The experiments reported here involving conditional expression of p27 show that elevated p27 expression can potently inhibit the in vivo activity of cyclin D1-Cdk4/6 complexes.
In fact, during G0-to-S phase progression, elevated expression of p27 elicited a remarkable molecular phenocopy of the c-myc loss-of-function phenotype: an early large defect in Cdk4/6 activity, a delay in Rb phosphorylation, a subsequent delay in the induction of cyclin E and Cdk2 activities, and finally a delay in cyclin A expression and S phase entry. Although both Cdk4/6 and Cdk2 activities were affected, the Cdk4/6 defect was earlier and larger in magnitude, exactly as seen in c-myc-null cells.

It is likely that c-Myc can affect the expression level of the p27 protein by several mechanisms. One mechanism is the promotion of p27 degradation by ubiquitin-mediated proteolysis (39). Because the degradation of p27 is triggered by cyclin E-Cdk2 phosphorylation, it is unlikely that these mechanisms are operative in early G1 at the time when cyclin D-Cdk4/6 complexes are being assembled and activated. c-Myc can also influence the expression of the p27 mRNA, which is not downregulated normally in c-myc −/− cells after mitogenic stimulation of quiescent cells (16). This study did not, however, address whether these effects are transcriptional or post-transcriptional. Repression of some (but not all) genes by c-Myc has been shown to involve the Inr promoter element (40); the p27 promoter contains an Inr element, and one study suggested that the repression of p27 by c-Myc may be mediated in part through this site (41). However, the mechanisms by which c-Myc affects the expression of the p27 gene need to be further investigated.

How could a 3-fold increase in the steady-state level of the p27 protein result in a >10-fold defect in Cdk4/6 activity? A key observation that shed light on this mechanism was that ~80% of the Cdk4/6 protein was found in inactive complexes that contained high stoichiometries of p27. Gel filtration experiments showed that Cdk4/6 migrated as a broad peak of ~160 kDa that comigrated with the peak of cyclin D1. However, Rb kinase activity migrated as a distinct peak of 70-100 kDa, and the distribution of p27 was skewed toward the higher molecular mass inactive complexes. Although the resolution of the columns did not allow us to determine whether the active complexes were free of p27, it is clear that the higher molecular mass complexes contained more abundant p27 (compare fractions 16 and 13 in Fig. 1) and were inactive as Rb kinases. Thus, even in c-myc +/+ cells, only a small fraction of the total cyclin D-Cdk4/6 complexes become activated. The column profile in c-myc −/− cells was qualitatively very similar, but levels of p27 were elevated, the peak was broader, and the kinase activity of the 70-100-kDa fractions was reduced. During the G0-to-S phase transition, cyclin D1 and Cdk4/6 were rapidly assembled in early G1 into complexes that contained abundant p27, whereas the appearance of Rb kinase activity was delayed by several hours. Again, c-myc −/− cells displayed the same profile; the abundance of Cdk4/6 complexes was somewhat increased, and Rb kinase activity was greatly reduced. The absence of a significant change in the stoichiometry of the complexes in either c-myc +/+ or c-myc −/− cells during the activation process suggests that activation does not involve rearrangement of the complexes, but is likely dependent on additional events such as phosphorylation by CAK of pre-existing complexes that contain a few (or no) molecules of p27. Taken together, the gel filtration data and the G0-to-S phase induction profiles suggest the following model.
Mitogenic signaling induces the expression of cyclin D, which, with the aid of p27, is rapidly assembled with Cdk4/6 and transported into the nucleus. The majority of the nuclear cyclin D-Cdk4/6 complexes are bound to multiple molecules of p27 and remain inactive throughout G1. As previously suggested (1), the physiological function of these complexes would be to eliminate the free pool of p21 and p27 or even to actively sequester these Cdk inhibitors from the low levels of cyclin E-Cdk2 complexes present in early G1 to facilitate their subsequent activation. A small fraction of cyclin D-Cdk4/6 complexes would be bound to only one (or no) p27 molecule and could thus become activated later in G1. The key prediction of this model is that even a modest increase in overall p27 levels could convert a significant fraction of the low abundance activable complexes into unactivable complexes containing higher stoichiometries of p27. This model is supported by the observation that even a modest overproduction of cyclin D1 in c-myc −/− cells can completely rescue complex activity. This is because cyclin D levels are limiting for the assembly of cyclin D-Cdk4/6 complexes, and overexpression of cyclin D1 results in the assembly of additional complexes. The observations that CAK activity is not limiting for the activation of cyclin D1-Cdk4/6 complexes in either c-myc +/+ or c-myc −/− cells, that the expression of Cdc25A is not affected in c-myc −/− cells (42), and that the activation of cyclin D1-Cdk4/6 complexes is initiated normally in both c-myc +/+ and c-myc −/− cells at the same time after serum stimulation (~12 h) (19) are all consistent with the interpretation that the major reason for the observed Cdk4/6 activity defect is the assembly of fewer potentially active complexes. Furthermore, in light of the recent report that the p21 gene is silenced in Rat-1 cells by promoter methylation (43), this model also provides an attractive explanation of why c-myc −/− Rat-1 cells are capable of proliferation, albeit at greatly reduced rates. However, it also needs to be stressed that p27 is unlikely to be the only c-Myc target relevant to regulation of cell cycle progression. For example, although the expression of p27 in the conditional Rat1p27 cell line was higher than that seen in c-myc −/− cells, the proliferation defect during either exponential growth or the G0-to-S phase transition was not as severe in Rat1p27 cells as in c-myc −/− cells.

FIG. 5. Effect of cyclin D1 and CAK on Cdk4 activity. A, the indicated cell lines were grown exponentially, and extracts were assayed for Cdk4 and Cdk2 activities as described in the legend to Fig. 4C. The cell line hu-CycD1 (where hu is human, and CycD1 is cyclin D1) was previously referred to as HO15D2 (16). The cell line mu-CycD1 (where mu is murine) was constructed in an analogous fashion. The experiment was repeated twice with consistent results. B, cell cycle entry of quiescent cells was initiated at 0 h; samples were collected at the indicated time points; and extracts were immunoprecipitated with anti-cyclin D1 antibody, incubated in the presence or absence of catalytically active CAK, and subsequently assayed for Rb kinase activity. Lower panel, activation of recombinant cyclin A-Cdk2 assayed by histone H1 kinase activity.
Similarly, if the only function of c-Myc were to promote the activity of Cdk4/6 complexes, overexpression of cyclin D would be expected to rescue cell cycle progression as well as Cdk2 activity, which is not the case. Although CAK is not limiting for Cdk4/6 activity, it appears to be limiting for Cdk2 activity (16) and may thus in part explain the effects of c-Myc on cell cycle progression in late G1. The fact that cyclin D1 overexpression rescues Cdk4/6 activity, whereas cyclin E overexpression does not rescue Cdk2 activity, is in agreement with this interpretation (data not shown). Furthermore, the lack of rescue by cyclin E is not consistent with the possibility that the major bottleneck in c-myc −/− cells is the failure of cyclin D-Cdk4/6 complexes to sufficiently sequester the elevated levels of p27. By extension, this would then argue in favor of a functional role for Cdk4/6 catalytic activity during G1 phase progression.

The function of the cyclin D pathway is subverted to a greater or lesser extent in most (if not all) cancer cells and derived cell lines. Given the importance of this signaling connection between the extracellular environment and the intrinsic cell cycle clock, it is of interest that c-Myc regulates the expression (17) as well as the activity of cyclin D-Cdk4/6 complexes by multiple mechanisms. The work reported here establishes p27 as a physiologically relevant regulator of cyclin D-Cdk4/6 activity as well as a target of c-Myc and provides a mechanistic model by which c-Myc influences the early-to-mid G1 phase transition. Additional targets of c-Myc relevant to cell cycle regulation will undoubtedly be discovered; however, regulation of Cdk4/6 activity may provide a direct link between the ability of c-Myc to promote tumorigenesis and cell cycle progression.
8,066.8
2002-08-23T00:00:00.000
[ "Biology", "Chemistry" ]
The Trade-Off Between Economic Performance and Environmental Quality: Does Financial Inclusion Asymmetrically Matter for Emerging Asian Economies?
This study examines the role of financial inclusion on the environment-economic performance in the top five Asian emerging economies. The data used for empirical investigation covers the time period from 1995 to 2019. Financial inclusion is measured through bank branches, bank credit, and insurance premiums. To check long-run associations, the panel-ARDL approach has been employed for empirical analysis. The empirical evidence confirms the significant associations in the financial inclusion-GDP nexus and the financial inclusion-CO2 nexus. The findings show that bank branches and bank credit have a significantly positive impact on economic growth and CO2 emissions in the long run. However, the insurance premium has no impact on economic growth, but it exerts a significant negative impact on carbon emissions in the long run. Furthermore, energy consumption is highly sensitive to economic growth and carbon emissions. The study delivers imperative points for pollution eradication and attaining sustained economic growth. There is a need for government-level efforts to align the targets of financial inclusion with economic growth and environmental policies.
Introduction
It was the early 1990s when the term financial exclusion came to the fore and pointed out the inadequate number of bank branches and the limited accessibility of these branches as a big hurdle to a more liberal, dynamic, and vibrant financial sector (European Commission, 2008). Previously, the term financial exclusion was used to define obstacles to accessing primary financial services and goods from the point of view of both users of these services (demand side) and producers of these services (supply side) (Rahim et al., 2009). To promote financial inclusion, the demand side works side by side with the supply side. Poverty is the main factor that hinders financial inclusion because if a majority of people are living below the poverty line they do not have enough savings to deposit in bank accounts. Likewise, if the pace of the economy is slow then the level of investment in the economy is also sluggish, resulting in low demand for loans and other financial services. The tendency to save more can shift poor people from low-income brackets to higher ones, thus increasing their role in banks and financial institutions, which can push financial services upwards (Reserve Bank of India, 2013). The Reserve Bank of India (RBI) defines financial inclusion as "the process of ensuring access to appropriate financial products and services needed by vulnerable groups such as weaker sections and low-income groups at an affordable cost fairly and transparently by mainstream institutional players."
… the period 1995-2019. The structure of the study is as follows. Section two describes the data and methods, followed by the results and discussion in section three. In section four, we conclude the study.
Methods And Data
To capture the impact of financial inclusion on economic growth and CO2 emissions in Asian emerging economies, we have borrowed the following long-run model from Van et al. (2021) and Zaidi et al. (2021). In models (1) and (2), GDP per capita (GDP) and carbon emissions (CO2) are taken as the dependent variables; among the independent variables, financial inclusion (FI) is included as the main variable, while energy consumption (EC), Trade, and population are control variables in our analysis.
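Because equations (1)-(4) themselves are not reproduced in this text, the long-run relations and their error-correction counterpart can be sketched in a generic log-linear panel-ARDL form consistent with the variables listed above (an illustrative reconstruction, not the authors' exact specification):

\[ \ln GDP_{it} = \alpha_{0i} + \alpha_1 \ln FI_{it} + \alpha_2 \ln EC_{it} + \alpha_3 \ln Trade_{it} + \alpha_4 POP_{it} + \varepsilon_{it} \]
\[ \ln CO2_{it} = \beta_{0i} + \beta_1 \ln FI_{it} + \beta_2 \ln EC_{it} + \beta_3 \ln Trade_{it} + \beta_4 POP_{it} + \mu_{it} \]
\[ \Delta \ln Y_{it} = \phi_i\, ECM_{i,t-1} + \sum_{j} \lambda_{ij}\, \Delta \ln Y_{i,t-j} + \sum_{j} \delta_{ij}' \, \Delta X_{i,t-j} + \epsilon_{it} \]

where i indexes the five economies, t the years 1995-2019, FI stands in turn for bank branches, bank credit, or insurance premiums, Y is GDP or CO2, X collects the regressors, and the adjustment coefficient φ_i must be negative and significant for the long-run relation to be accepted.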
The model discussed above is a long-run model; to obtain short-run estimates as well, we describe this model in error-correction format. In doing so, we rely upon a method that gives estimates of long-run effects together with short-run effects in a single step, as follows. The long-run estimates are considered genuine only if the variables are co-integrated, and the co-integration among the variables is confirmed through a negative and significant estimate attached to ECMt-1. To get the estimate of ECMt-1, we first generate a series of residuals labelled ECM by using equations (3 & 4). We then replace the lagged value of this series (ECMt-1) in equations (3 & 4) in place of the lagged-level variables and estimate the new equation with the same number of lags as used originally. The size of the estimate attached to ECMt-1 describes the speed of adjustment towards long-run equilibrium. This method has the advantage that it can estimate efficiently with a small number of observations. Moreover, this technique can take care of the integrating properties of the variables, i.e., we should not worry about whether a variable is stationary at level or at first difference, because it can accommodate a mixture of I(0) and I(1) variables. Both tests use their own newly tabulated critical values for testing. The Hausman test is used to confirm whether the ARDL-PMG or ARDL-PM model is sufficient for this empirical analysis. In the end, we check causality in a non-linear framework by conducting the panel causality test of Hatemi-J (2012).
Data
For empirical investigation, data have been taken for the period ranging from 1995 to 2019 for the top five emerging economies of Asia, including China, India, Japan, Indonesia, and Turkey. Table 1 provides the complete definitions of the variables, their abbreviations, and a descriptive analysis of the data. Data on GDP per capita, carbon emissions, energy use, trade, and population growth are sourced from the World Bank, while data on bank branches, bank credit, and insurance premiums are sourced from the IMF. GDP per capita is measured at constant 2010 US$. Carbon emissions are measured in kilotons. Bank branches are taken as bank branches per 100,000 adults. Bank credit is measured as bank deposits in percentage. Life and non-life insurance premium volume to GDP (%) is taken to measure the insurance premium. Energy use is measured as kg of oil equivalent per capita. Trade is measured as a percentage of GDP. The annual percentage of population growth is used to measure the population growth variable.
Empirical Results And Discussion
Panel-ARDL requires that none of the variables in the model is I(2), and panel unit root tests tell us about the stationarity of our variables. To that end, we have used the panel unit root tests of Levin, Lin and Chu (LLC), Im, Pesaran and Shin (IPS), and ADF-Fisher. The results of the LLC test show that most of the variables are stationary at first difference except BC, Insurance, and POP. However, when we apply the IPS and ADF tests, all the variables are stationary at first difference except POP. The Table 2 findings confirm that we can apply the panel-ARDL technique. As the frequency of our data is annual, we have imposed a maximum of three lags, and optimal lag selection is based on the Akaike Information Criterion (AIC). Note: ***p < 0.01; **p < 0.05; and *p < 0.1. After confirming the preliminary condition of panel-ARDL, we are now in a position to start the discussion of the estimates of our variables.
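The stationarity screening described above can be reproduced with standard panel unit-root functions; a minimal sketch using the plm package is given below (the function calls are standard, but the data frame, variable names, and option values are illustrative assumptions, since the exact settings used by the authors are not reported):

```r
library(plm)

# panel data: five countries x 1995-2019, with variables in logs where appropriate
pdat <- pdata.frame(df, index = c("country", "year"))

purtest(pdat$lgdp, test = "levinlin", exo = "intercept", lags = "AIC", pmax = 3)  # LLC
purtest(pdat$lgdp, test = "ips",      exo = "intercept", lags = "AIC", pmax = 3)  # IPS
purtest(pdat$lgdp, test = "madwu",    exo = "intercept", lags = "AIC", pmax = 3)  # ADF-Fisher
# repeating the same calls on the first-differenced series checks stationarity at I(1)
```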
Our dependent variables are GDP and CO2 emissions, and we have used three different proxies of financial inclusion: bank branches, bank credit, and insurance. For both the GDP and CO2 models we have included all the proxies of financial inclusion one by one. Table 3 shows the results both in the short run and in the long run. Moreover, cointegration tests and other diagnostics are also reported in Table 3. First of all, we want to confirm whether our long-run results are cointegrated or not. Two tests of cointegration, i.e., ECMt-1 and Kao, confirm that our long-run estimates of GDP and CO2 are cointegrated, meaning they are genuine or valid. The Hausman test results support the panel ARDL-PMG model. First, we discuss the long-run results of the GDP and CO2 models in detail, and then the short-run results in brief.
The long-run estimates of BB and BC in the GDP model are positively significant, while in the case of Insurance the estimate is insignificant. As the variables are taken in log form, we can interpret them by saying that a 1% increase in bank branches and bank credit facilities improves GDP by 0.021% and 0.271%, respectively. The estimate of bank credit is large compared to the estimate of bank branches, suggesting that improved credit facilities, rather than the number of branches, are more helpful in increasing the GDP of the economy. As the number of branches and credit facilities in an economy increases, production activities also increase due to the easy availability of loans and other financial services for investment in large projects, which can help the economy grow at a great pace. Moreover, financial inclusion connects a large number of people to the financial system of the country and brings them into the mainstream economy, which also helps in the development of the economy (Sharma, 2016). The control variables EC and Trade help the economy to grow as well: a 1% rise in EC improves the economic growth of the country by 0.853%, 1.739%, and 1.628%, and a 1% rise in Trade improves it by 0.032%, 0.039%, and 0.012%, in the three models respectively. However, a 1% rise in POP improves economic growth only in the first model, by 0.458%, whereas it is not statistically noticeable in the second and third models.
Now we discuss the long-run estimates of the CO2 models. The estimated coefficients of BB and BC are positively significant, whereas the estimated coefficient of Insurance is significantly negative. In elasticity form, we can elaborate these results by saying that a 1% rise in bank branches and in credit facilities by banks increases CO2 emissions by 0.015% and 0.1417%, respectively. However, a 1% rise in Insurance decreases CO2 emissions by 0.0181%. Theory suggests that financial inclusion can affect the environment positively or negatively. Our results suggest that improved financial inclusion, due to the increased number of bank branches and credit facilities, helps the financial sector to develop and grow, which is considered a driver in nurturing the economy; the resulting surge in the availability of production and consumption loans also increases energy demand and thus gives rise to CO2 emissions (Frankel and Romer, 1999). On the other side, as the economy grows due to better financial inclusion of society, more sophisticated and advanced technologies developed in the production process can help to reduce CO2 emissions. Similarly, the availability of credit facilities also speeds up investment in renewable energy projects, which exert less burden on the environment.
Banks provide individual loans for energy-efficient products such as LEDs, DC inverters, fuel-efficient cars, etc.; besides, banks also provide easy credit to house owners for installing solar energy. The positive impact of financial inclusion on CO2 emissions is supported by Le et al. (2020); however, Renzhi and Baek (2020) found an inverted U-shaped relationship between CO2 emissions and financial inclusion. The variable of energy consumption exerted a positive impact on CO2 emissions in all the models, by 1.259%, 1.992%, and 1.635%. Conversely, a 1% rise in Trade reduces CO2 emissions by 0.004% only in model six, whereas in models four and five the impact of Trade is insignificant. Finally, the estimated coefficient of POP (0.557%) is significant and negative in model four, while insignificant in models five and six. In the short run, the estimates in the GDP models provide an inconclusive picture, as most of them are insignificant and appear with mixed signs at most lags. Similarly, the short-run estimates in all CO2 models are mostly insignificant and provide inconclusive results. Table 4 provides the results of the Granger causality tests, which confirm one-way causality running from GDP→BC, GDP→Insurance, CO2→BC, and Insurance→CO2. However, bi-directional causality is found between GDP↔BB. For detailed results, see Table 4.
Conclusion And Policy Implications
The objective of the study is to investigate the role of financial inclusion on environmental quality and economic performance in the top five emerging economies of Asia, including China, India, Japan, Indonesia, and Turkey, for the period 1995 to 2019. Bank branches, bank credit, and insurance premiums are used to measure financial inclusion. The panel-ARDL method is employed for the empirical investigation. It is found that long-run panel cointegration exists between the focal variables of the study. In the long run, the impact of bank branches on economic and environmental performance is positive, inferring that an increase in bank branches leads to increased economic performance and pollution emissions. Bank credit also results in increasing economic growth and pollution emissions in the long run. The impact of the insurance premium on economic performance is statistically insignificant, revealing that there is no association between the insurance premium and economic growth in the long run. However, insurance premiums exert a significant negative impact on carbon emissions, confirming that an increase in insurance premiums results in reduced carbon emissions in the long run. It is also evident that energy consumption is positively associated with economic growth and pollution emissions in the long run. The impact of Trade is positive on economic growth in all three models, but this effect is negative on pollution emissions only in the insurance premium regression model in the long run. Population growth has a significant impact on economic-environmental performance only in the bank branches regression in the long run. In the short run, bank branches have a positive impact on carbon emissions, revealing that an increase in bank branches results in rising carbon emissions. Bank credit has no association with the economic-environmental performance nexus in the short run. The impact of the insurance premium is negative on economic-environmental performance in the short run, concluding that due to an increase in insurance premiums, economic growth and carbon emissions will decrease.
Authorities and policymakers of these economies should follow and embrace mitigation methods, including the adoption and installation of digital financial inclusion in the future. Asian emerging markets should maintain sustainable development via financial inclusion. Through financial inclusion, the funds from financial institutions can be directed towards green and clean energy projects. Moreover, funds should be transferred to those firms, businesses, and individuals that are involved in green innovations. The government should also articulate strict rules for financial institutions to lend finance for renewable energy and environmentally friendly projects, and this can be made more fruitful through digital financial inclusion. Governments should also remove barriers to digital financial inclusion such as affordability, documentation, and trust.
The study has a limitation concerning the availability of data. Data on digital financial inclusion are not available earlier than 2004; therefore, we have not included digital financial inclusion in the analysis. This study used only three variables of financial inclusion, based on usage of and access to formal financial services, while many other factors were not considered in the analysis due to the unavailability of relevant data for the Asian emerging economies. Future studies should also use other proxies of digital financial inclusion. Future studies can be conducted on the same topic by covering more updated models and data. Future researchers may also analyze at the micro level in highly polluted economies.
Declarations
Ethical Approval: Not applicable.
Consent to Publish: Not applicable.
Authors' Contributions: This idea was given by Liu Dong and Yuantao Xie. Muhammad Hafeez, Liu Dong, Yuantao Xie, and Ahmed Usman collected the data, computed the data analysis, and wrote the complete paper. Liu Dong and Yuantao Xie read and approved the final version.
Consent to Participate: I am free to contact any of the people involved in the research to seek further clarification and information.
Funding: Not applicable.
3,624.6
2021-09-13T00:00:00.000
[ "Economics", "Environmental Science" ]
Does the Infectious Status of Aphids Influence Their Preference Towards Healthy, Virus-Infected and Endophytically Colonized Plants?
Aphids (Hemiptera: Aphididae) cause significant damage and transmit viruses to various crop plants. We aimed to evaluate how the infectious status of aphids influences their interaction with potential hosts. Two aphid (Myzus persicae and Rhopalosiphum padi) and plant (Nicotiana tabacum and Triticum aestivum) species were used. The preferences of aphids towards healthy, virus-infected (Potato Leafroll Virus (PLRV) and Barley Yellow Dwarf Virus (BYDV)), and endophytic entomopathogenic fungi (EEPF)-inoculated (Beauveria bassiana and Metarhizium acridum) plants were investigated in dual-choice tests. The headspace volatiles of the different plant modalities were also sampled and analyzed. Viruliferous and non-viruliferous aphids were more attracted to EEPF-inoculated plants compared to uninoculated plants. However, viruliferous aphids were more attracted to EEPF-inoculated plants compared to virus-infected plants, while non-viruliferous insects exhibited no preference. Fungal-inoculated plants released higher amounts of aldehydes (i.e., heptanal, octanal, nonanal and decanal) compared to other plants, which might explain why viruliferous and non-viruliferous aphids were more abundant on EEPF-inoculated plants. Our study provides an interesting research perspective on how EEPF are involved in the behavior of virus vectors, depending on the infectious status of the latter.
Introduction
Aphids are herbivorous, sap-feeding insects that are regarded as crop pests in agricultural and horticultural production systems globally [1]. Aphids cause major economic losses by inflicting significant damage on plants and transmitting viruses [1]. More than half of all insect-vectored plant viruses are transmitted by aphids in non-persistent, semi-persistent, or persistent modes [2,3]. The host-finding behavior of aphids is specific, and largely explains their role as important vectors of plant viruses. This behavior is mediated, in most cases, by volatile organic compounds (VOCs) that are continuously released by plants [4-7]. For example, a synthetic blend of 11 VOCs, at concentrations and ratios designed to mimic potato plants, induced a behavioral response in Myzus persicae (Sulzer) similar to that elicited by a natural plant in the olfactometer [8]. Persistently transmitted viruses induce changes in the volatile blends emitted by the plants that they infect, leading to the attraction of virus-free aphids, which enhances propagation [9-11]. In contrast, insects carrying plant viruses are sensitive to the volatilome of healthy plants, and preferentially feed on them [10,12-16]. Thus, the behavior of sap-feeding insects likely differs in relation to their infectious status.
Some biocontrol agents (such as macroorganisms) and semiochemicals (such as alarm pheromones) that are used to control aphids enhance the spread of viruses [42-44]. Thus, it is important to understand the relationship between an EEPF-inoculated plant and an aphid carrying or not carrying viruses. Such knowledge would make it possible to establish a link between the presence of EEPF in plant tissues and the behavior of a virus vector likely to spread a given virus. This information could potentially be useful for aphid/virus management. We carried out a comprehensive study using different types of insect-plant-microbe interactions.
Under the context of plant virus transmission, we hypothesized that viruliferous aphids behave differently from non-viruliferous aphids towards healthy, virus-infected, and EEPF-inoculated plants. To verify these hypotheses, we investigated how the presence of Beauveria bassiana (Vuill.) and Metarhizium acridum (Humber) in host plant tissues affected aphid preference compared to healthy and virus-infected plants. We then sampled and analyzed headspace volatiles to determine whether they explained aphid behavior based on the different plant modalities. Two insect-plant-virus systems were investigated. First, the Myzus-Tobacco-Potato Leafroll Virus (PLRV) (MTP) system: tobacco (Nicotiana tabacum L.) is one of the host plants of the Potato Leafroll Virus (PLRV, Family Luteoviridae, Genus Polerovirus), which is principally transmitted by M. persicae (Hemiptera: Aphididae) in a persistent manner [45]. Second, the Rhopalosiphum-Wheat-Barley Yellow Dwarf Virus (BYDV) (RWB) system: wheat (Triticum aestivum L.) is often infected by the Barley Yellow Dwarf Virus (BYDV, Family Luteoviridae, Genus Luteovirus), which is transmitted efficiently by R. padi (Hemiptera: Aphididae) in a persistent manner [10].
Plant Cultures
All tobacco (cv. Xanthii) and wheat (cv. Johnson) seeds were sown in autoclaved potting soil. Tobacco seedlings were individually transplanted at the three-leaf stage to pots (7 × 7 × 7 cm). Wheat seedlings were separately transplanted at the two-leaf stage to straight sample containers (VWR; 70 mm height; 33 mm diameter; 60 mL capacity; custom-drilled with a hole 2 mm in diameter). The seedlings were then kept in a climate chamber at 22 ± 1 °C, 70 ± 10% relative humidity (RH) and a 16:8 h (light:dark) photoperiod. Tobacco and wheat plants were used for the insects to multiply on, and to perform the behavioral and volatilome analyses. … and BYDV-infected plants, after which they were transferred to healthy tobacco and wheat plants, respectively. Virus stock cultures were further maintained via insect transmission every 2-3 weeks, and newly infected seedlings were kept separately in net cages under the same conditions.
Two entomopathogenic fungi were used: (1) Beauveria bassiana ((Balsamo-Crivelli) Vuillemin) strain GHA, isolated from the commercial product Botanigard® (Certis Europe, Bruxelles, Belgium), and (2) Metarhizium acridum ((Driver & Milner) JF Bischoff, Rehner & Humber) strain IMI330189, isolated from Green Muscle® biopesticide [46], obtained from the Reproductive Biology, Science and Technology Faculty, University Cheikh Anta Diop, Dakar, Senegal. For each product, wettable powder was dissolved in a 0.01% Tween® 80 solution in distilled water. Thirty-five microliters of suspension were transferred to Potato Dextrose Agar (PDA; Sigma-Aldrich, St Louis, MO, USA) supplemented with chloramphenicol (0.05 g·L⁻¹), and maintained in darkness in an incubator at 25 ± 1 °C for 3 weeks. Spores were collected by scraping the agar surface with a sterile L-shaped spreader (VWR, Radnor, PA, USA), and were suspended in a 0.01% Tween® 80 solution. The concentration was adjusted to 10⁸ spores·mL⁻¹ using a Neubauer hemocytometer [47]. The ready-to-use suspensions were stored at −20 °C and were used within 48 h.
Insect Rearing
Myzus persicae strain MpCh4 and R. padi strain Xu were reared in net cages on tobacco and wheat plants, respectively, and were kept in a climate chamber at 22 ± 1 °C, 70 ± 10% RH and a 16:8 h (light:dark) photoperiod.
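As an aside, the adjustment of the spore suspensions to 10⁸ spores·mL⁻¹ from hemocytometer counts can be sketched as below (a minimal illustration assuming counts made over the large squares of a standard improved Neubauer chamber, whose 0.1 µL volume gives the conventional 10⁴ factor; object names and count values are hypothetical):

```r
# Convert Neubauer hemocytometer counts to a spore concentration and the
# dilution needed to reach a target of 1e8 spores per mL
spore_concentration <- function(mean_count_per_large_square, dilution_factor = 1) {
  # one large square holds 0.1 µL = 1e-4 mL, hence the 1e4 conversion factor
  mean_count_per_large_square * dilution_factor * 1e4
}

counts <- c(118, 124, 131, 127)                      # illustrative counts, four squares
stock  <- spore_concentration(mean(counts), dilution_factor = 1000)
target <- 1e8                                        # 10^8 spores per mL, as used above
stock                                                # stock concentration (spores/mL)
stock / target                                       # fold-dilution required to reach the target
```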
Wheat and tobacco plants at the three-leaf stage were provided every 2 and 3 weeks, respectively. Non-viruliferous (I-) and viruliferous (I+) aphids were obtained by placing 40 to 50 adults on healthy plants (HP) and virus-infected plants (either with PLRV (VP-1) or BYDV (VP-2)), respectively. Adult aphids were allowed to reproduce for 24 h, and were then eliminated. Nymphs were maintained until adults emerged (within about 1 week).
EEPF-Inoculated and Virus-Infected Plants
Tobacco and wheat plants were treated 7 days before use by spraying their leaves using a cosmetic sprayer from Sinide Plastic Spray Bottles (30 mL) with a fine mist (0.35 mm nozzle diameter). Two milliliters of a 10⁸ spores·mL⁻¹ suspension of either B. bassiana (BP) or M. acridum (MP) were used per plant. Healthy plants (HP) and virus-infected plants (VP-1 and VP-2) were sham-inoculated by spraying them with distilled water containing 0.01% Tween® 80. The successful colonization of plant tissue by inoculated EEPF was systematically evaluated after completing each experiment. All leaves that were used in the preference tests were investigated. For plants that were used for VOC collection, once the experiment was completed, the whole upper part of each plant was collected (including stems for tobacco). In every case, samples were rinsed in tap water and treated under sterile conditions based on the method used by Rondot et al. [48]. Samples were surface sterilized separately by soaking them in 0.5% active chlorine (NaOCl) containing 0.01% Tween® 80 for 2 min, followed by 70% ethanol (EtOH) for 2 min. They were then rinsed three times with sterile distilled water and placed on autoclaved filter paper to dry off. About three 1.5 cm² pieces of each leaf were used in the preference tests. Nine pieces of tissue (including two pieces each of basal, central, and apical leaves and three 1-mm-thick cross-sections of tobacco stem, the latter replaced by an additional leaf for wheat) were collected from the plants used for VOC collection. The pieces from the same plant were grouped together and were first pressed on sterile PDA culture medium in Petri dishes to determine whether any spores were present on their surface. The pieces were then placed on a new culture medium to incubate. The disinfection process was also evaluated by plating three replicates of 100 µL of the last rinse water on three different PDA media. Afterwards, all plates were sealed and placed in darkness in an incubator at 25 ± 1 °C. Ten days later, fungal colonies growing from internal plant tissues were visually examined according to their characteristics: "white dense mycelia, becoming creamy at the edge" for B. bassiana [49] and "conidial mass dark yellow-green" for M. acridum [50]. When one tissue from a single leaf showed fungal growth, the whole leaf was classified as being endophytically colonized [48]. The results of the preference tests were only validated for endophytically colonized leaves. Plants used for VOC sampling were classified as being endophytically colonized when fungal growth was observed on at least five out of the nine tested tissues. No fungal growth was recorded in any of the rinse water samples or on the culture media on which plant tissue imprints were made. Five days after seedlings were transplanted, five individuals of M. persicae or R. padi from VP-1 or VP-2 that were 5 days old were confined in a clip cage [51] on the bottom of a single leaf of each tested plant for virus inoculation.
The Inoculation Access Period (IAP) lasted 4 days for M. persicae on tobacco and 5 days for R. padi on wheat [52,53]. Afterwards, insects were removed from the plants with a brush. To exclude any bias related to the virus inoculation on the different plant groups, healthy plants intended for subsequent fungal inoculation and control plants were infested with five non-viruliferous insects. The incubation time lasted 14 days for PLRV on tobacco and 21 days for BYDV-PAV on wheat [52,53]. Virus inoculation was assessed before the preference tests by enzyme-linked immunosorbent assay (ELISA). Two kits were used following the manufacturer's instructions: Double Antibody Sandwich ELISA (DAS-ELISA) for PLRV on tobacco using a DSMZ kit, and Triple Antibody Sandwich ELISA (TAS-ELISA) for BYDV-PAV on wheat using an Agdia kit (Agdia Inc., Elkhart, IN, USA). Samples were collected from the first fully expanded leaf on each plant. A plant was considered infected if the optical density was at least twice that of the negative control. Only plants that were effectively infected were used.
Design of the Preference Bioassay
Dual-choice tests were implemented for both the MTP and RWB models. The experimental setup was based on the aphid dual-choice arena presented by dos Santos et al. [14], and was adapted according to our plant models (Figure 1). Petri dishes of 9 cm in diameter were used in every case. For the MTP model (Figure 1A), two leaf discs (1.5 cm in diameter) were randomly sampled from the tested plants, and were kept for 10 min in the dark in a box lined with wet filter paper, to allow volatile emissions due to injury to subside [54]. The leaf discs were then placed in a dish 4.5 cm apart from each other. Three leaf discs were sampled from three different leaves of a single plant. Leaf discs were renewed for each replicated test. For the RWB model (Figure 1B), two pairs of 1.2 cm oval holes were pierced in the dish; 4.5 cm was left between the holes of one pair, and 5.5 cm was left between each pair of holes. One leaf from each tested plant was carefully introduced into the dish through the first hole and exited through the second hole on the same side, providing approximately 2 cm² of surface area available to insects. Two leaves were used for each plant. All of the experiments were conducted under uniform lighting from 16-W cool white fluorescent lights in a climatic room at 22 ± 1 °C and 70% RH. Twenty newly molted adults were released in the center of the arena, which was immediately closed, using the cap of a 1.5 mL Eppendorf tube (VWR, Radnor, PA, USA). The number of aphids on the leaves or leaf discs was recorded 60 min later. Choice tests were first implemented with only healthy plants (HP) to check for bias in the experimental setup. Pairwise comparisons were subsequently performed among HP, virus-infected plants (VP-1 or VP-2), and plants inoculated with either B. bassiana (BP) or M. acridum (MP). For each combination, the insects used were either viruliferous or non-viruliferous. For the RWB model, the experiments were also performed with winged and wingless individuals. It was not possible to use winged and wingless individuals for the MTP model, because too few winged individuals emerged on tobacco. Each pairwise comparison between plants was repeated 15-26 and 13-17 times for the MTP and RWB models, respectively.
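The infection criterion described above (a plant is scored as infected when its optical density is at least twice that of the negative control) amounts to a simple threshold rule; a minimal sketch with hypothetical OD readings:

```r
# ELISA positivity call: OD >= 2 x mean OD of the negative-control wells
od_plants   <- c(0.12, 0.48, 0.91, 0.10, 0.35)   # hypothetical sample readings
od_negative <- c(0.08, 0.10, 0.09)               # hypothetical negative controls
infected <- od_plants >= 2 * mean(od_negative)
infected   # only plants flagged TRUE would be retained for the preference tests
```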
VOC Sampling and Analysis
Headspace volatiles from the upper parts of seedlings were collected using a dynamic "push-pull" pump system (Benchtop system CASS6-MVAS6; Volatile Assay Systems®, Rensselaer, NY, USA). Each plant treatment was sampled (i.e., HP, VP, BP, and MP) with one blank (pot containing substrate) for each sampling session. Shortly after the sampling period, aerial plant parts were directly excised and weighed to calculate the amount of each VOC under the different plant modalities. After each sampling event, EEPF colonization was verified. Based on the plant model, two different sampling and analysis methods were implemented for VOCs.
Tobacco model: Four weeks after transplanting (four fully expanded leaf stage), VOC sampling from tobacco plants was carried out using the dome and guillotine system, as described by Verheggen et al. [55]. In brief, the aerial part of an individual potted plant was covered by a glass dome (15 cm base diameter, 15 cm height) placed over a Teflon (Chemours, Wilmington, Delaware) guillotine. All equipment was rinsed using n-hexane >99% (Sigma-Aldrich, St Louis, MO, USA) before each sampling event. The system was set at a constant flow of 350 cc input and 250 cc output. VOCs were trapped for 24 h in a cartridge composed of a thermal desorption tube containing 60 mg Tenax TA®. The cartridge was first conditioned at 300 °C for 11 h in a thermal conditioner (TC2, Gerstel, Mülheim an der Ruhr, Germany), and was then placed in one of the air-pulling outlets from the dome base. After headspace volatile sampling, all cartridges were stored in a fridge at 4 °C for about 1 week before chromatographic analysis. Before the analysis of volatiles by GC-MS, 42.5 ng of n-butylbenzene (82 ng·µL⁻¹) was spiked onto each tube.
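Given the fresh weights recorded above and the 42.5 ng n-butylbenzene spike, the emission of each compound can be expressed per gram of fresh tissue by single-point normalization to the internal standard (a simplified sketch that assumes a response factor of 1, which is an assumption rather than the authors' stated calibration; peak areas and weights below are hypothetical):

```r
# Single-point internal-standard (IS) normalization of GC-MS peak areas
# amount_ng = (area_compound / area_IS) * ng_IS, then scaled by fresh weight
quantify_voc <- function(area_compound, area_is, is_ng = 42.5, fresh_weight_g) {
  ng <- (area_compound / area_is) * is_ng   # assumes a response factor of 1
  ng / fresh_weight_g                       # ng per g fresh weight per sampling period
}

quantify_voc(area_compound = 1.8e6, area_is = 2.4e6, fresh_weight_g = 3.2)
```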
The volatiles were then thermally desorbed using an automatic Thermal Desorber Unit (TD30R, Shimadzu, Kyoto, Japan) set at 280 °C for 8 min. A split ratio of three was used during the injection. Helium was used as the carrier gas with a flow of 1 mL·min⁻¹. The cool trap was set at −30 °C. Before injection, the trap was desorbed at 280 °C for 5 min. Samples were then injected into a capillary column (5% phenyl methyl; maximum temperature: 325 °C; length: 30 m; diameter: 250 µm; thickness: 0.25 µm). The temperature program started at 30 °C for 5 min, was then increased by 5 °C·min⁻¹ up to 220 °C, and was finally increased by 20 °C·min⁻¹ to reach 300 °C. Compounds were identified by comparing their mass spectra with those of the NIST17 database using GCMS Postrun software (GCMSsolution v. 4.50, Shimadzu, Kyoto, Japan).
Wheat model: Pots containing 35 wheat seedlings at the Z16 stage [56] (37 days after sowing) were separately sealed in 4-L glass chambers. The pot was completely wrapped with aluminum foil, to avoid any contamination. The air was cleaned by an activated charcoal filter, and was blown into the glass chamber using a vacuum pump with a constant flow of 650 cc·min⁻¹ for 24 h. The Teflon pipe circuit passed through a sampling cartridge (40 mg HayeSep Q, 80/100 mesh; Supelco, Bellefonte, PA, USA) that was placed at the exit of the glass chamber to trap the headspace volatile compounds released by the plants. The cartridges were previously cleaned twice by injecting 150 µL n-hexane. The VOCs were eluted to a vial using 200 µL n-hexane. Eighty-six nanograms of n-butylbenzene diluted in n-hexane were added to each sample as an internal standard (IS). Each time, 150 µL n-hexane with 15 µL IS was sampled as a blank. Then, all vials were kept in the freezer at −80 °C before the chromatographic analysis. Volatile analysis was performed by Gas Chromatography (model 6890) coupled with a Mass Spectrometer system (model 5973) (GC-MS; Agilent Technologies Inc., Santa Clara, CA, USA). An aliquot (1 µL) of each sample was injected in splitless mode. The same column as previously described was used. The temperature program started at 40 °C for 2 min and then increased successively in three ramps. In each case, a series of n-alkanes (C7-C30) was injected at the same time to confirm the library identification by calculating the retention index (RI). The RI was compared to theoretical RI values from online databases, including PubChem [57], PheroBase [58], and NIST (National Institute of Standards and Technology) [59].
Statistical Analyses
We performed a generalized linear model with a Poisson distribution to test insect preference between treatments, which were pairwise compared: (i) HP, (ii) virus-infected plants (VP-1 and VP-2), and (iii) EEPF-inoculated plants (BP and MP). The preference of viruliferous (I+) and non-viruliferous (I-) insects was evaluated based on the number of individuals found on the leaves of the tested plants after 60 min. Factors included plant treatment, the virus infectious status of the insect, and the morphology of the insect (for the RWB model only). This analysis was completed in R version 3.6.1 (R Core Team, 2019). Volatile profiles from the plant treatments were compared using a permutational multivariate analysis of variance (perMANOVA) with a Euclidean distance matrix and 999 permutations in the R package "vegan" [60]. Beforehand, the "betadisper" function was used to check the homoscedasticity.
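A condensed sketch of this statistical workflow is given below, using base R and the vegan package (the calls are standard ones, but the data frames, column names, and the use of prcomp in place of the FactoMineR PCA are illustrative assumptions, not the authors' scripts):

```r
library(vegan)

## 1. Aphid preference: Poisson GLM on the number of aphids per leaf/disc at 60 min
# 'counts' is assumed to hold one row per replicate and plant side, with columns
# n_aphids, treatment (HP/VP/BP/MP), status (I+/I-) and morph (winged/wingless)
pref_glm <- glm(n_aphids ~ treatment * status + morph,
                family = poisson, data = counts)
summary(pref_glm)

## 2. Volatile profiles: dispersion check, then perMANOVA (Euclidean distance, 999 permutations)
voc <- as.matrix(voc_table)               # samples in rows, compounds in columns
d   <- dist(voc, method = "euclidean")
anova(betadisper(d, meta$treatment))      # homoscedasticity check before adonis2
adonis2(voc ~ treatment, data = meta,
        method = "euclidean", permutations = 999)

## 3. Ordination of the same matrix (the paper reports FactoMineR/factoextra plots)
pca <- prcomp(voc, scale. = TRUE)
summary(pca)                              # variance explained by PC1 and PC2
```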
Pairwise comparisons were performed when significant differences were detected. The p-values were adjusted using Bonferroni's method, to avoid type I errors due to multiple analyses. To visualize the spatial distribution of the volatiles collected on different plant treatments, a principal component analysis (PCA) was performed, and plots were generated using the R packages FactoMineR [61] and factoextra [62]. One-way ANOVA was then used to highlight compounds that were impacted in each plant modality. The average amounts of VOCs collected in each plant modality were compared after checking the normality and homogeneity of variance. Tukey's pairwise comparisons were computed when significant differences were obtained. Statistical analyses were completed using Minitab software v. 18.
Aphid Preference
No bias was observed in the preliminary test. The theoretical insect distribution of 50% for each tested plant was observed, regardless of the status of insect infection (Supplementary Figure S1).
Healthy versus virus-infected plants: Assays performed between healthy and virus-infected plants (Figure 2) showed a cross-preference depending on the infectious state of insects. Regardless of the insect-plant-virus model, wingless and winged I+ preferred HP, while I- mostly migrated to virus-infected plants. Morphology had no impact on insect preference for the RWB model.
EEPF-inoculated versus healthy plants: Wingless R. padi and M. persicae exhibited a significantly greater response towards EEPF-inoculated plants compared to non-inoculated plants (Figure 3). This preference for BP and MP was significantly stronger when wingless I+ were tested compared to I- in both MTP and RWB models, except for wingless R. padi with MP. Compared to wingless individuals, winged R. padi were more significantly attracted to BP. In contrast, winged individuals showed no preference between MP and HP. In both cases, the infection state had no significant impact on the choice of winged individuals.
EEPF-inoculated versus virus-infected plants: Regardless of EEPF strain and the aphid-virus-plant model, viruliferous insects were significantly attracted to EEPF-inoculated plants, while there was no significant difference for non-viruliferous insects (Figure 4). Winged and wingless R. padi showed no significant difference in preference between virus-infected and EEPF-inoculated plants.
The first two principal components of the PCA (PC1 and PC2) represented 50.3% and 64.6% of the variation in tobacco and wheat, respectively. The PCA distinguished clear clusters between treatments (Figure 5). In tobacco, PC1 was mainly correlated with compounds that were specifically associated with VP-1. perMANOVA showed that the volatile profiles differed significantly between treatments, regardless of plant model (F3,19 = 7.26, p < 0.001 and F3,19 = 12.88, p < 0.001 for tobacco and wheat, respectively). In tobacco, pairwise comparisons confirmed the difference between HP and the remaining three conditions (see Supplementary Table S2 for more details). A marginal difference was observed between MP and VP-1 (p = 0.06). In wheat, except for HP versus MP (p = 0.078) and HP versus VP-2 (p = 0.108), all treatments were significantly different. Discussion Choice tests highlighted contrasting behavioral patterns in host-seeking aphids with respect to their infectious status (I+ or I−) in response to the different plant modalities (HPs, VPs and EEPF-inoculated plants). First, the infectious status of aphids clearly influenced their relationship with virus-infected plants: I+ individuals preferred HPs to VPs, while I− individuals preferred VPs to HPs. This finding was consistent with the scientific literature, as it is well known that insect-borne viruses manipulate their vectors [11,13,14,63–66]. The "Vector Manipulation Hypothesis" is commonly applied to persistently transmitted viruses [67], such as PLRV and BYDV.
This hypothesis suggests that the virus influences its vector to move away from already-infected plants, inducing it to spread and feed on new hosts [67]. For instance, Ingwell et al. (2012) demonstrated that virus-free R. padi preferred BYDV-infected plants; however, after acquiring BYDV during in vitro feeding, it preferred healthy plants [10]. Similar results were obtained by Rajabaskar et al. (2013) with M. persicae and PLRV [11]. Furthermore, EEPF-inoculated plants were more attractive to aphids compared to HPs. This finding was similar to that of Aragon (2016), in which M. persicae was attracted to tomato plants inoculated with B. bassiana in a multi-choice test [38]. However, the current study is the first to demonstrate that, unlike for virus-infected plants, the infectious status of insect vectors does not interfere with their host-seeking behavior in response to EEPF-colonized plants. Interestingly, choice tests performed between virus-infected (both VP-1 and VP-2) and EEPF-inoculated plants showed that most I+ preferred EEPF-inoculated plants, whereas I− exhibited no preference, irrespective of the insect-plant model and microbe strain. This finding was consistent with two previous observations: (1) I+ individuals were more attracted to EEPF-inoculated plants, due to the concurrent absence of the virus and presence of endophytes in the latter; and (2) I− individuals had no preference, because both tested modalities had previously proved to be attractive to them. Thus, EEPF- and virus-inoculated plants appear to be of equivalent quality to aphids. Finally, winged R. padi showed a similar behavioral pattern to their wingless counterparts; thus, variation in morphology had no significant influence on their choice. It was not possible to evaluate the effect of morphology on the preference of M. persicae, due to the lack of winged forms. This could be because the insect clone used in this study (MpCh4) is a non-tobacco specialist [68]. How these differences in behavior were mediated remains an open question. Insect-borne viruses appear to regulate the ability of their vectors to locate a host plant by stimulating their olfactory system [69]. Thus, plant volatiles might have significantly influenced our experimental setup. Consequently, the behavioral tests indicated that: (1) the headspace volatiles from HPs acted as a baseline to which the aphids (whatever their infectious status) were tuned by default; (2) some compounds (or a specific combination of them) emitted upon infection in VPs had a contrasting effect (repellent to I+, attractive to I−); and (3) EEPF-inoculated plants emitted active compounds that differed from those of VPs, because both I+ and I− aphids were attracted to them. Interestingly, the headspace volatiles collected from the different plant modalities were qualitatively and quantitatively different. In particular, aldehydes (including heptanal, octanal, nonanal, and decanal) were more abundant in EEPF-inoculated plants, regardless of strain. These compounds are attractive to aphids, including M. persicae and R. padi [70,71]. Further experiments are required to screen for and accurately validate candidate active compounds that form part of the olfactory signature of each tested modality. Such experiments would include electrophysiological recordings at the antennal level and choice tests with collected or synthetic volatiles.
A major perspective of this study is to investigate whether aphid preferences alter the efficiency of virus transmission in different systems. The fact that aphids were attracted to EEPF-colonized plants, for instance, does not seem beneficial to the host, potentially decreasing its fitness and increasing the spread of the virus. Investigating the settlement behavior of sap-feeding insects and transmission dynamics on different plant modalities might help to evaluate the potential benefits conferred by endophytes. The most recent reports by González-Mas et al. (2018 and 2019a) showed that, by colonizing melons, B. bassiana altered the feeding behavior of Aphis gossypii (Glover) and significantly reduced inoculation rates by 21.9% and 24.4% for Cucumber mosaic virus and Cucurbit aphid-borne yellows virus, respectively [27,72]. Furthermore, the influence of aphid preferences on their life history and population dynamics, as well as those of their natural enemies, requires investigation. Many studies have already reported the impact of EEPF plant colonization in this regard [26,30,31,39,41,73]. In a multitrophic context, R. padi carrying BYDV was more likely to be parasitized by Aphidius colemani (Viereck) compared to virus-free individuals [74]. González-Mas et al. (2019b) reported that A. gossypii reared on B. bassiana-inoculated plants were preferentially consumed by their natural enemy Chrysoperla carnea (Stephens) over those reared on uninoculated plants [75]. Commercial fungal strains, such as those used in the current study, are commonly used in inundative treatments via foliar application [76–78]. Their role as endophytes remains poorly understood. Determining their influence on the aphid-borne virus pathosystem could be crucial for developing integrated pest management strategies. Conclusions Our study confirmed that the infectious status of aphids influences their relationship with virus-infected plants, in line with the "Vector Manipulation Hypothesis". This phenomenon was not observed when virus-free plants were compared to EEPF-inoculated plants: viruliferous and non-viruliferous aphids were both attracted to EEPF-inoculated plants. Thus, our study is the first to demonstrate that the infectious status of insect vectors does not interfere with their host-seeking behavior in response to EEPF-colonized plants. Moreover, non-viruliferous insects showed no preference between virus-infected and EEPF-inoculated plants, possibly because both plant modalities were qualitatively equivalent. Finally, the volatilome analysis confirmed that the presence of endophytic entomopathogenic fungi in leaf tissues altered the profile of volatiles emitted by the plant. Our findings provide an interesting research perspective on how EEPF contribute to the aphid-borne virus pathosystem.
7,280
2020-07-01T00:00:00.000
[ "Biology", "Environmental Science" ]
A Gaussian-Distributed Quantum Random Number Generator Using Vacuum Shot Noise Among all the methods of extracting randomness, quantum random number generators are promising for their genuine randomness. However, existing quantum random number generator schemes aim at generating sequences with a uniform distribution, which may not meet the requirements of specific applications such as a continuous-variable quantum key distribution system. In this paper, we demonstrate a practical quantum random number generation scheme that directly generates Gaussian-distributed random sequences based on measuring vacuum shot noise. In particular, the impact of the sampling device in the practical system is analyzed. Furthermore, a related post-processing method, which maintains the distribution and autocorrelation properties of the raw data, is exploited to extend the precision of the generated Gaussian-distributed random numbers to over 20 bits, making the sequences usable by downstream systems that require high-precision numbers. Finally, the results of normality and randomness tests show that the generated sequences follow a Gaussian distribution and pass the randomness tests well. Introduction Random numbers are of extreme importance for a great range of applications in scientific and engineering fields, including statistical sampling, numerical simulation, lotteries and cryptography. A typical example is quantum key distribution (QKD), in which true random numbers are essential to guarantee its unconditional security [1–4]. Algorithm-based classical pseudo-random number generators have been widely applied for their simple implementation and extremely high generation rate [5]. However, pseudo-random number generators are inherently deterministic and predictable, so they fail to satisfy the theoretical requirements of secure communication systems. Aside from the algorithmic approach, extracting randomness from objective physical processes is feasible. An outstanding alternative is the quantum random number generator (QRNG), which exploits the intrinsic randomness of quantum mechanics [6,7] and is a promising route to truly random numbers. Practical schemes are proposed under the assumption that the QRNG model works well, i.e., that the system is fully trusted. This rigorous condition can hardly be fulfilled, and device-independent (DI) protocols have been proposed to close the loophole. DI QRNGs verify the randomness physically, taking the violation of Bell's inequality [37,38] as the criterion [39,40]. Later, two branches were researched for alternative purposes, namely randomness extraction [41–43] and randomness amplification [44,45]. Since DI protocols sacrifice too much feasibility, a third choice that compromises between practical schemes and DI protocols has been proposed. These semi-DI protocols merely make a reasonable assumption about critical devices [46–51], pursuing practical security instead of unconditional security. The application fields of Gaussian RNGs are diverse; the most significant application is simulation, ranging from Monte Carlo methods to the simulation of communication channels and noise, as well as applications in biology, psychology, and so on. Specific to quantum information, a Gaussian RNG provides Gaussian-distributed random numbers for the modulation of coherent states in continuous-variable QKD systems [4,52,53].
However, all previous QRNG schemes provide uniformly distributed random numbers. Although uniformly distributed random numbers can, in principle, be converted to any distribution mathematically, the conversion process itself costs considerable time and resources. An even higher potential risk is that the process is only approximate in principle [54], which may lead to performance defects in applications. In fact, most continuous-variable quantum random sources, owing to the central limit theorem, feature Gaussian-distributed signals in the time domain, including vacuum shot noise and the phase noise of a laser. Hence, it is possible for hardware-based schemes, naturally including QRNGs, to utilize the Gaussian distribution profile and directly generate random numbers as required. In this paper, a practical scheme directly generating Gaussian-distributed quantum random numbers is proposed. Here "directly" means there are no conversion steps from the uniform distribution to the Gaussian distribution; however, the scheme is not post-processing free. Firstly, we point out the inherent difference in entropy estimation for the Gaussian distribution versus the uniform distribution. Practical issues of sampling devices are discussed for entropy estimation and system optimization. Secondly, a novel post-processing method is proposed, which takes a step further from the recursive method in classical Gaussian-distributed RNGs [55]. It is designed to remove the impact of classical noise in the system, while fulfilling the precision and auto-correlation requirements of applications. Finally, an experimental setup is demonstrated to show the feasibility of this scheme, using the vacuum fluctuation of the quantum state as a quantum random source, and the implementation has passed tests of both normality and randomness. The structure of this article is as follows. In Section 2, we first discuss the difference in entropy estimation between Gaussian and uniform distributions, followed by an analysis of the impact of the practical sampling device on the system, namely the sampling range and sampling resolution. In Section 3, a novel post-processing method is proposed to overcome the disadvantage of low precision in sampling and substantially eliminate the impact of electronic noise. In Section 4, an experimental setup is demonstrated, as well as the optimization and post-processing operations on a practical system. Finally, the test results for both normality and randomness are shown. Vacuum Fluctuation In principle, most quantum random sources with Gaussian-distributed signals in the time domain can be applied in our scheme. We choose the vacuum fluctuation of the quantum state as the random source for the following excellent features. Firstly, vacuum shot noise is caused by vacuum fluctuation, thus the randomness of the pure state is secured. Secondly, it is a Gaussian state, which means the measurement of either the position or momentum quadrature x̂ or p̂ in a pure state will always follow a Gaussian distribution. Finally, it is identical, which means additional vacuum fluctuation introduced by devices, such as a beam splitter, will not affect the randomness of the quantum source. The Wigner function of the vacuum fluctuation is W(x, p) = (1/π) exp(−x² − p²). Given this quasi-probability function, one can repeatedly measure either the x̂ or p̂ quadrature at a fixed phase difference θ between the vacuum and LO signals.
Taking the x̂ quadrature as an example, the probability density function (PDF) of the detected signal should be p(x) = (1/√π) exp(−x²), which is perfectly Gaussian distributed, with mean value µ = 0 and variance σ² = 1/2, centered at the origin of phase space. Security is always an important issue for a cryptographic system, including a quantum random number generator compared with its classical counterpart. While there certainly exist risks of leaking information to an adversary during randomness extraction, modeling the vacuum fluctuation also takes advantage of the property that it can never be tampered with, even by the most powerful adversary, within the limits of physical laws. Hence, unlike traditional applications of classical RNGs, where the noise is usually treated as introduced by the system itself, we can regard any noise in the QRNG system as introduced by an eavesdropper (Eve), in an attempt to reach a lower bound in the entropy estimation. For homodyne detection, the signals of the two balanced arms are subtracted to suppress the common-mode noise, while the amplification factor is determined by the system. Here â and â† are the annihilation and creation operators, and n̂ = â†â is the photon number operator. V_samp is the signal at the sampling device (after subtraction), A is the amplification factor of the system excluding the LO signal, |α_LO| indicates the X quadrature of the LO signal, and θ is the phase difference between the vacuum and LO signals. Entropy Estimation As in conventional schemes, entropy estimation should be done before randomness extraction. The most significant difference between the uniform and Gaussian distributions, from the perspective of information theory, is that the information entropy H(X) has a different maximal value under different constraints. In order to eavesdrop the most information in the classical scenario, Eve's best strategy is to find max_i p_i, the highest probability of a single bin of the random variable X, which is directly related to the min-entropy H_min(X) = −log_d(max_i p_i) (Equation (5)), where d is the base of the logarithm and defines whether the signal is binary, decimal, and so on. For binary information we often take d = 2; however, if the signal precision n is more than one bit, it can also be treated as d = 2^n. According to Equation (5), the uniform distribution possesses the highest min-entropy when there are no constraints. Note that, for the continuous case, the classical entropy H(X) always goes to infinity for an ideal sampling device with infinite sampling range and precision. Therefore, we assume the total amount of information a single signal carries is 1, and thus normalize the maximal value of the information entropy rate. This method is also adopted for the Gaussian-distribution entropy estimation in the following analysis. Meanwhile, in applications under certain constraints, namely when the quadratic quantity of energy (or power) of the signal is fixed, one expects a different PDF. This conclusion is naturally derived from the property that, among all distributions with the same variance σ_m², the Gaussian distribution possesses the highest differential entropy H(X) = −∫ p(x) log p(x) dx, where p(x) is the PDF of the ideal Gaussian distribution. This property indicates that, if the variance σ_m² of the continuous noise signal is observable and steady, a Gaussian distribution, instead of a uniform distribution, can achieve higher entropy. Fortunately, the variance of the total noise is indeed measurable in a QRNG scheme, and perfectly matches this assumption.
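As a quick numerical illustration of the fixed-variance argument above (our own sketch, not part of the original analysis), the snippet below compares the differential entropy of a Gaussian, ½·log₂(2πeσ²), with that of a zero-mean uniform distribution of the same variance, using the vacuum-quadrature value σ² = 1/2.

```python
# Minimal check that, at fixed variance, a Gaussian carries more differential
# entropy than a uniform distribution (entropies in bits); sigma2 is arbitrary.
import math

sigma2 = 0.5                                              # vacuum quadrature variance, sigma^2 = 1/2
h_gauss = 0.5 * math.log2(2 * math.pi * math.e * sigma2)  # 1/2 * log2(2*pi*e*sigma^2)
a = math.sqrt(3 * sigma2)                                 # uniform on [-a, a] has variance a^2/3 = sigma2
h_uniform = math.log2(2 * a)

print(f"Gaussian : {h_gauss:.3f} bits")                   # ~1.547 bits
print(f"Uniform  : {h_uniform:.3f} bits")                 # ~1.292 bits
```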
Therefore, when a Gaussian-distributed random source is adopted and the output is also supposed to be Gaussian distributed, a significant quantity of information is lost during the conversion phase of uniform-distribution RNG schemes. To achieve a Gaussian-distributed QRNG scheme, we must adopt a goodness-of-fit (GoF) test to verify whether the PDF of our samples is sufficiently close to a Gaussian distribution. For the Gaussian distribution, there are several specific methods, namely the Kolmogorov–Smirnov test and the Anderson–Darling test, which are introduced in detail in Appendix A. Impact of Sampling Device In the entropy estimation phase, a "worst-case scenario" idea similar to the uniform counterpart can be adopted. Alice loses some entropy due to the sampling device, while Eve may acquire the original information of the ideal Gaussian distribution. For a continuous distribution, neither an infinite sampling range nor infinite precision is practical, and either would cause the entropy to be infinite; hence we should set conditions based on the performance of the practical device. Classical Gaussian RNGs often set ±10σ as the bounds in high multiple-sigma tests [56], so it seems reasonable to follow this assumption. Meanwhile, sampling precision can hardly exceed 20 bits for current commercial analog-to-digital converters (ADCs). Practical issues of range, precision and depth will be discussed in detail. Despite the classical noise, our scheme is still a trusted-device scheme, where the extractable randomness is described by the relation between R_dis and I(A : B), where R_dis refers to the generation rate of the QRNG ("dis" indicating that the discretized samples may lose some information) and I(A : B) is the mutual information between the authorized users (Alice and Bob) in a cryptographic system. In a QRNG scheme, specifically, Alice is the random source and Bob is the randomness extractor. A QRNG can be (and in most cases is) local, with Alice playing both roles of sender (random source) and receiver (randomness extractor), so I(A : B) is actually determined by the entropy H(A) of the measured classical data. In particular, in the vacuum fluctuation scheme demonstrated below, the variance of the total noise σ_m² and the variance of the classical noise σ_c² can be observed by turning the LO signal on and off separately. Since we take the quantum noise Q and the classical noise E to be independent of each other, the min-entropy of the quantum noise is a conditional entropy given the classical noise E, given by Equation (8) [33], where M and E are the random variables of the total measured noise and the electronic noise, m and e denote specific measured values, R and δ denote the sampling range and sampling resolution respectively, the terms involving 2σ_q give the two possible values that could be the maximal p_i in Equation (5), and e_max is the maximal possible electronic noise. Things differ a little in the Gaussian scenario compared with the analysis in Ref. [33]. In the uniform scenario, the system is optimized by setting c_1 = c_2 to achieve the maximal value of the min-entropy. However, if we adopt c_1 = c_2 in the Gaussian scheme, the raw data will certainly fail the GoF test. Therefore, we have to analyze the impact of the sampling device under the restriction of the GoF test, where there always exists c_1 < c_2. Sampling Range The sampling device changes instantaneous voltages below (above) the lower (upper) threshold into V_min (V_max). The parameter k is the ratio between the sampling range R and the standard deviation σ of the signal.
A finite sampling range truncates the probability P(|x| ≥ kσ) outside the range ±kσ; another consequence is a significant defect at the tails, making the PDF non-Gaussian. We define a parameter called the normalized min-entropy in our analysis. Suppose an ideal Gaussian-distributed random variable carries information H_ideal−min before normalization to 1. When the practical sampling device is taken into consideration, the distribution changes and the entropy is estimated by Equation (8); however, it should be monitored by the GoF test. In the following analysis of sampling range and resolution, the fundamental assumption is that the signal satisfies normality; this is also the assumption of the post-processing method below. Therefore, the distribution from a practical system should first pass the GoF test before its min-entropy is normalized against the ideal case. Figure 1 shows the relationship between the sampling range and the entropy H_norm−min. Cases with R ≤ ±3.5σ are discarded for all precisions, because these cases feature a defective PDF and frequently fail the GoF test (with a default significance level of α = 0.01). However, a lower sampling precision n combined with too large a sampling range will also fail the GoF test (the curve for n = 12 stops at R = ±4.6σ), since the discretization effect is notably increased for lower-precision cases. Two competing effects determine the choice of k: 1. If k is too small, V_min (V_max) will occur too often, making the random variable more predictable and reducing the entropy H_dis(X). Furthermore, the degraded Gaussian profile has a higher probability of failing the GoF test, which does not match our requirements in post-processing and applications; 2. If k is too large, most signals will fall into a small range of sample bins, making the most significant bits (MSBs) of the samples more predictable and also reducing the entropy H_dis(X). On the other hand, many sampling bins are unoccupied, wasting the capability of the device and substantially reducing the effective sampling precision. For cases that successfully pass the GoF test, the normalized min-entropy decreases as the sampling range increases. The curves do have a period of rapid increase, under the no-constraint assumption, within the range R ≤ ±3σ; however, these cases are rejected by the GoF test. From the viewpoint of variance, as long as the raw data pass the GoF test, the higher the k value, the lower the normalized min-entropy, which shows the great significance of matching the signal to the range of the sampling device. Sampling Resolution A finite sampling resolution δ results in the loss of information about the probability distribution inside the minimal discrete sampling interval, i.e., the resolution. Intuitively, the entropy grows monotonically as the precision n increases. If n is too small, too much detailed information is lost, and we can hardly extract random numbers after entropy estimation. Figure 2 shows the relationship between precision and the entropy H_norm−min. For the same reason discussed in the sampling range analysis, cases with R ≤ ±3.5σ are discarded. Although precisions below n = 12 frequently fail the GoF test due to the strong discretization effect, we still estimate the entropy to show the trend of the entropy curve.
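The qualitative trends in the two figures can be reproduced with a short, noise-free sketch (our own illustration, not the paper's code): a Gaussian signal is clipped to ±kσ and quantized into 2^n bins, the outermost bins absorbing the clipped tails, and the min-entropy is taken as −log₂ of the largest bin probability. For small k the clipped boundary bins dominate, while for large k the widening central bin dominates and the entropy falls; the paper's normalization and GoF gating are not re-implemented here.

```python
# Min-entropy of a clipped, quantized Gaussian signal (noise-free sketch).
# Bins span [-k*sigma, +k*sigma] with 2**n levels; the outermost bins absorb
# the clipped tails. H_min = -log2(max bin probability).
import numpy as np
from scipy.stats import norm

def min_entropy_bits(k, n, sigma=1.0):
    edges = np.linspace(-k * sigma, k * sigma, 2**n + 1)
    p = np.diff(norm.cdf(edges, scale=sigma))
    p[0] += norm.cdf(edges[0], scale=sigma)       # clipped lower tail -> V_min bin
    p[-1] += norm.sf(edges[-1], scale=sigma)      # clipped upper tail -> V_max bin
    return -np.log2(p.max())

for k in (2, 3, 4, 5, 6):
    print(f"k={k}: n=12 -> {min_entropy_bits(k, 12):.2f} bits, "
          f"n=16 -> {min_entropy_bits(k, 16):.2f} bits")
```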
Sampling Depth Sampling depth (the maximal number of samples in a single buffer) mainly affects the practical system through the GoF test. As the test statistic shows, the AD test is distribution-free but depends on the sample size L. An identical distribution with a sample length 10 times larger leads to an approximately linearly increased test statistic, while the critical value remains the same. This is because, with a larger sample space, any violation of the PDF becomes more significant and easier for the GoF test to detect. Therefore, a larger buffer requires a better PDF for the raw data to pass the GoF test. According to the three factors discussed above, we consider R = ±4σ to be the optimal sampling range for noise-free cases, while the precision and depth should be as high as possible, which is not as crucial in the uniform case. It is highly recommended that, to achieve a Gaussian-distribution QRNG, the sampling precision be at least 12 bits for decent performance. Although the sampling range of a practical device is often fixed, one can adjust the amplification factor A|α_LO| to adapt the signal to the range, aiming at better performance. However, noise introduced by the system, with a variance of σ_c², often alters the PDF and the optimization condition. If the noise introduced by the system is not strong enough to change the PDF, the following post-processing method can significantly reduce its influence. Post-Processing Post-processing is an essential part of a QRNG scheme. It is adopted to remove the impact of classical noise in the system as well as the imperfections caused by finite sampling. Most post-processing methods can also improve the probability distribution of the raw data. The Toeplitz matrix hashing method [23,57,58] is widely acknowledged as the most effective method in QRNG post-processing. However, that method aims at uniform distribution generation [59] and hence does not meet our requirement. Here we propose another post-processing method, originating from the recursive method [55] adopted in classical Gaussian-distribution RNG schemes. The recursive method exploits the essence of the Gaussian distribution, namely that the weighted sum of any number of Gaussian-distributed variables is still Gaussian distributed, Y = ∑_i k_i X_i (Equation (10)): while the original Gaussian variables X_i satisfy X_i ~ N(µ_i, σ_i²), the output Y satisfies Y ~ N(∑_i k_i µ_i, ∑_i k_i² σ_i²). The traditional central limit theorem (CLT) for non-Gaussian cases is only valid for a large number of independent identically distributed (i.i.d.) variables. In contrast, the recursive method takes merely four elements, as Equation (11) shows. By adopting the recursive method, one avoids the risk that raw data from distinguishable non-Gaussian variables are converted to an identical Gaussian distribution. The original transfer matrix T_rec is derived from operations in which c = (1/2)∑_{i=1}^{4} a_i; the relationship between the input and output vectors A_i and A'_i and the operating matrix T_rec (with 1/2 as the normalization coefficient) is given in Equation (13). The output of the recursive method possesses a perfect auto-correlation property. However, it cannot extend the precision of a single number. To make full use of all significant bits from different raw data for precision extension, adding random numbers from i.i.d. Gaussian-distributed variables with different weights is an effective method. The post-processing method includes two steps. Firstly, we utilize the m-MSB (most significant bits) method as pre-processing.
Once the entropy estimation phase introduced in Section 2 is done, the value m utilized in the m-MSB processing is fixed (Equation (14)). Then we adopt an operation that achieves precision extension based on the matrix in Equation (13). Note that, since the raw data have passed the GoF test, the condition in Equation (10) is satisfied. Assuming X is the original variable from the ADC, we can divide X into groups of Gaussian-distributed variables X_i before applying the operation in Equation (10). As an example, where we divide the raw data into groups of l = 4, four consecutive random numbers x_{4i−3}, x_{4i−2}, x_{4i−1}, x_{4i} form a vector A_i before being operated on by the matrix. In particular, if in Equation (10) we take k_i = 2^(−i), then each adjacent raw datum in the vector A_i is shifted by only one more bit, so the summation has a precision of n = m + l − 1 bits, where m and n are the precisions of the variables X_i and Y respectively, and l is the number of raw samples per group. Combining the analyses above, we modify T_rec by adding different weights to the matrix, in a manner similar to the original method, to obtain the operating matrix S_rec with normalization coefficient k_NC (Equation (17)). Note that the structure of S_rec is very similar to the original structure of T_rec. Both share two rows/columns with three positive and one negative element, the other rows/columns having three negative and one positive element. This type of structure is convenient for expansion to an 8 × 8 or even larger T_rec [55]. For S_rec, the expansion method is similar, as long as it obeys the rules discussed below. A crucial difference between the original recursive method and our modified method is that, since we introduce different absolute values in the operating matrix, the auto-correlation coefficient does not remain flat. Therefore, we can only extract one number from each A'_i (of n = m + l − 1 bit precision, where m is the precision of A_i, n is the precision of the final output, and l is the size of S_rec), while in the original case all the numbers of A'_i (of m-bit precision) could be extracted.
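A minimal sketch of the grouping and weighted-summation idea behind Equation (10) follows, using the k_i = 2^(−i) example from the text: groups of l = 4 pre-processed m-bit samples are combined into one output of m + l − 1 bits and rescaled so the variance is preserved. The signed structure of the actual S_rec matrices and the coefficient k_NC of Equation (17) are not reproduced here, and the sample statistics are simulated placeholders.

```python
# Sketch of precision extension by weighted summation of grouped samples
# (the k_i = 2^-i example from the text; the paper's signed S_rec matrix and
# its normalization coefficient k_NC are not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
m, l = 5, 4                                   # 5-bit pre-processed data, groups of 4
sigma_bins = 6.0                              # assumed std. dev. in units of LSBs

# Simulated m-bit pre-processed samples (signed integers), length a multiple of l.
x = np.clip(np.rint(rng.normal(0, sigma_bins, 4000)), -(2**(m - 1)), 2**(m - 1) - 1)

weights = 2.0 ** -np.arange(l)                # 1, 1/2, 1/4, 1/8 -> l-1 extra bits
groups = x.reshape(-1, l)
y = groups @ weights                          # one output number per group of l inputs

# Renormalize so the output variance matches the input variance,
# using Var(sum k_i X_i) = (sum k_i^2) Var(X) for i.i.d. inputs.
y /= np.sqrt(np.sum(weights**2))
print(y[:5], y.std(), x.std())
```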
The recursive-method post-processing operation can be designed, and it is therefore more flexible than Equation (10). Utilizing a matrix for precision extension, instead of simply adding i.i.d. Gaussian-distributed variables, has several merits: 1. The elements of the matrix, which are the weights in Equation (10), are not fixed, as long as they obey fundamental rules. For a 4 × 4 matrix, each row/column should have 3 (1) positive and 1 (3) negative elements, and the positions should not coincide; the absolute values of each row and column should not be the same either. Thus there is a family of S_rec with hundreds of possible matrices; 2. The size of the matrix can be designed, which indicates how many raw numbers will be used to generate one final number. We take the 4 × 4 matrix as the simplest example for demonstration. However, when the precision after m-MSB pre-processing is inadequate, a larger matrix should be used. For instance, in the implementation section below, we generate 12-bit Gaussian-distribution numbers from 5-bit pre-processed data by utilizing an 8 × 8 matrix. A larger matrix size has the potential for even higher precision; for example, five-bit pre-processed data with a 16 × 16 matrix will generate 20-bit Gaussian-distribution random numbers for high multiple-sigma applications. 3. The values of the matrix elements can also be designed, which indicate the number of bits by which the pre-processed data are shifted. In the discussion above, the weights of adjacent numbers always follow powers of 1/2, which means that adjacent numbers in A_i are shifted by one bit in the summation operation. However, if we change 1/2 to 1/4, adjacent numbers in A_i are shifted by two bits. Remember that, according to Equation (17), the normalization coefficient k_NC should be carefully calculated to match the design, making sure that the input and output share the same variance. Owing to these merits, one can design one's own S_rec matrix for alternative experimental setups and application requirements. Furthermore, these properties leave ample room for the further introduction of a pre-generated random seed. It is possible to prepare several operating matrices and, based on a random seed generated beforehand or even fed back from the real-time QRNG scheme, alter the post-processing operation in real time. Table 1 shows the relative entropy H(p(x)|q(x)) between p(x) and q(x), where p(x) is quasi-Gaussian distributed, mixing an ideal Gaussian distribution with several types of classical noise of small variance, and q(x) is the reference standard Gaussian distribution. It is clear that the post-processing method dramatically reduces the impact of noise for low Quantum-to-Classical Noise Ratio (QCNR) cases, especially for noises which are not Gaussian, regardless of the profile of the raw data. However, the noise is still distinguishable from a standard Gaussian distribution. Table 1. The relative entropy H(p(x)|q(x)) between the unknown distribution p(x) (with normalized variance) and the reference q(x) after post-processing. All data are in units of 10^−5. H_rel = 0 means the unknown distribution is identical to q(x). We assume p(x) is a standard Gaussian distribution with minor classical noise, with QCNR ranging between 3 and 20 dB. In order to highlight the smoothing effect on the profile, the data are designed for a small size with n_tot = 10M. The residual relative entropy after post-processing is possibly due to the finite-size effect of this calculation method.
Experimental Setup We experimentally demonstrate our scheme, with the setup described as follows (and shown in Figure 3). The local oscillator (LO) is a 1550 nm distributed feedback laser (NKT Basic E15, linewidth 100 Hz) with adjustable output power up to 15 mW, connected to an external variable optical attenuator (VOA) that precisely sets the amplification factor of the LO signal. Vacuum shot noise, physically provided by blocking one input port of a 50:50 beam splitter (BS), interferes with the LO light. The signals are sent to a well-tuned, homemade AC-coupled homodyne detector (measurement bandwidth limited to 100 MHz by a low-pass filter) to measure the noise. The following circuits include an analog-to-digital converter (ADC, ADS5400, sampling frequency 200 MHz, sampling precision 12 bits and input voltage range 1.5 V peak-to-peak) and a field-programmable gate array (FPGA, KC705 evaluation board) that realizes randomness extraction and data precision adjustment. The power spectral density of the total noise and the classical noise is shown in Figure 4. To obtain better performance, the LO power was examined by setting different LO powers in fixed steps. When the LO light is off, the vacuum fluctuation can be ignored, and the classical noise contributes to the output with a variance of σ_c², which is quite steady. The variance of the total noise σ_m² increases as the LO power (after the VOA) gets stronger. The linear region ends when the LO power reaches around 9.5 mW, and the response finally saturates at around 13 mW. Our system features a high QCNR to obtain more potential information from the signal. By setting the LO power slightly below saturation, at around 12 mW (6 mW for each branch of the balanced detector), we acquired 12-bit raw data after the ADC and calculated the variance of the signals with the LO light on and off, representing the total and classical noise respectively. Note that all the units mentioned here are sampling bins; for our sampling device, one sampling bin roughly equals 0.366 mV. Firstly, because the sampling range is fixed at V_range = 2^12 bins while the signal peak-to-peak value is V_p−p = 200 bins, around 3 MSBs are discarded. Since the variance of the total noise is σ_m² = 1200.7 and that of the classical noise is σ_c² = 82.5, the variance of the quantum noise can be calculated as σ_q² = 1118.2; thus the maximal QCNR is QCNR = σ_q²/σ_c² = 13.55 (11.3 dB), and the classical noise after normalization is ε = 1/QCNR = 0.074. As the QCNR indicates, the classical noise can only fluctuate within a small voltage range. The MSB part of the residual sample is more likely to be affected by quantum noise, while the LSB part is affected by both quantum and classical noise, which is the opposite of the uniform-distribution case. We first carried out entropy estimation in the post-processing phase. Our ADC has a sampling range of 1.5 V peak-to-peak and a sampling precision of 12 bits, thus the quantization error is (δ/12)² = 9.3132 × 10^−10 V². When the LO is turned off, the measured voltage variance is σ_c² = 1.11 × 10^−5 V² = 82.5δ², and the total measured voltage variance is σ_m² = 1.61 × 10^−4 V² = 1200.7δ². Owing to the requirement of passing the GoF test, the system always works under the safe condition c_1 < c_2 in Equation (8); hence the min-entropy is determined by the middle of the distribution, i.e., H_min = −log₂[erf(δ/(2√2·σ_q))] = 6.39 bits. Therefore, the remaining 12 − 3 − 6 = 3 LSBs are doubtful from the security standpoint, and their influence should be eliminated by post-processing. To make our scheme more conservative, we keep five bits per signal from the highest non-zero MSB as the pre-processed data for precision extension. Hence, the output has a precision of 5 + (8 − 1) = 12 bits per signal, while the generation rate by number is 1/8 of the original sampling rate, i.e., 25M samples per second.
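The reported figures can be re-derived directly from the two measured variances; the short check below (with δ taken as one sampling bin) reproduces QCNR ≈ 13.55 (11.3 dB) and H_min ≈ 6.39 bits from the central-bin expression quoted above.

```python
# Re-deriving the reported QCNR and min-entropy from the measured variances
# (all variances in units of delta^2, i.e., sampling bins squared).
import math

var_total, var_classical = 1200.7, 82.5
var_quantum = var_total - var_classical            # 1118.2
qcnr = var_quantum / var_classical                 # ~13.55
qcnr_db = 10 * math.log10(qcnr)                    # ~11.3 dB

sigma_q = math.sqrt(var_quantum)                   # in bins, with delta = 1 bin
h_min = -math.log2(math.erf(1 / (2 * math.sqrt(2) * sigma_q)))   # ~6.39 bits

print(f"QCNR = {qcnr:.2f} ({qcnr_db:.1f} dB), H_min = {h_min:.2f} bits")
```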
We compare the generation rate of our scheme with the traditional method of a uniform-distribution QRNG plus inverse-CDF conversion post-processing. Under the same implementation settings, namely sampling rate f_s, sampling precision n and min-entropy (extractable quantity of randomness) H(x), the traditional method generates f_s·n raw bits and around f_s·n·H(x) final uniformly distributed bits with estimated entropy H(x); thus the generation rate of k-bit Gaussian-distributed numbers is f_s·n·H(x)/k. Our scheme, on the other hand, provides f_s raw samples and around f_s/4 or f_s/8 final Gaussian-distributed numbers. Considering the practical conditions n = 12, H(x) = 0.6–0.8 and k = 12–32, n·H(x)/k and 1/4 are approximately of the same order of magnitude. Hence, the generation rates of the two schemes are comparable, but our scheme avoids the enormous time cost of calculating accurate Gaussian-distributed values, or the space cost of storing the huge inverse-CDF conversion library, in post-processing [55]. Normality Tests Initially, the random sequences after post-processing should pass the normality test. The fitting result is shown in Figure 5. The random sequences also pass several goodness-of-fit tests; the test results are shown in Table 2. We calculate the 3σ thresholds of the bias e(n) and the auto-correlation a_k(n) under the Gaussian distribution. The 3σ criterion is a rough threshold indicating that the bias e(n) and the auto-correlation coefficient a_k(n) of a finite sample from an ideal random sequence should exceed the reference only with a probability of about 0.3%. The 3σ criterion originates from the central limit theorem (CLT). The traditional statement of the CLT indicates that the sum S_n of a large number of i.i.d. variables {X_i} always behaves asymptotically as a Gaussian distribution, where S_n = (1/n)∑_i X_i. As long as we can derive the mean value µ and the variance σ² of a given test statistic, the 3σ threshold is determined; these two statistics are given in Equation (20), which simplifies for the Gaussian distribution. One can easily derive that, for the Gaussian distribution, the bias follows the distribution e(n) ~ N(µ, σ²/n), while the auto-correlation follows a_k(n) ~ N(0, 1/n) (independent of k), both of which are normal distributions. We use the 3σ thresholds to test our Gaussian-distribution QRNG, and the result for the auto-correlation coefficient is shown in Figure 6 (3σ test of the auto-correlation coefficient |a_k(n)| versus delay k for the Gaussian-distribution random sequence; data size 50M samples, data precision 12 bits; the threshold is derived as |a_th(n)| = 3/√n = 4.2 × 10^−4).
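A minimal sketch of the 3σ criterion applied to a sequence is given below: the empirical bias and lag-k auto-correlation coefficients are compared against the thresholds implied by e(n) ~ N(µ, σ²/n) and a_k(n) ~ N(0, 1/n). The input here is a simulated Gaussian sequence standing in for the QRNG output, and the auto-correlation estimator is a simple normalized lagged product, not the paper's exact code.

```python
# 3-sigma test of bias and lag-k auto-correlation for a (simulated) Gaussian
# sequence; thresholds follow e(n) ~ N(mu, sigma^2/n) and a_k(n) ~ N(0, 1/n).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 5_000_000)           # stand-in for the 12-bit output sequence
n = x.size

bias = x.mean()
bias_threshold = 3 * x.std() / np.sqrt(n)

xc = x - bias
def autocorr(lag):
    return np.dot(xc[:-lag], xc[lag:]) / (n * xc.var())

a_threshold = 3 / np.sqrt(n)                  # equals 4.2e-4 for n = 50M in the paper
for k in (1, 2, 10, 100):
    print(f"lag {k}: |a_k| = {abs(autocorr(k)):.2e}  (threshold {a_threshold:.2e})")
print(f"|bias| = {abs(bias):.2e}  (threshold {bias_threshold:.2e})")
```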
In addition, since no test suites for Gaussian-distributed random numbers have been proposed, we converted some random sequences into a uniform distribution for randomness testing. The conversion is done by the CDF method discussed in Appendix B, and the results are shown in Table 3. Conclusions We proposed a QRNG scheme generating random numbers with a Gaussian distribution based on the vacuum fluctuation of a quantum state, a theoretically proven Gaussian-distributed random source. We analyzed the impact of practical issues in the QRNG system, including the sampling range, resolution and depth of the sampling device, along with the optimization method. A novel and flexible post-processing method was proposed, inspired by the classical RNG scheme, to extend the precision of a single number to 12, or even over 20 bits, where the Gaussian PDF and the auto-correlation properties are maintained at the cost of the generation rate. The generated random sequences simultaneously pass normality tests focusing on the distribution, as well as the widely acknowledged NIST-STS randomness test suite (after conversion to uniformly distributed sequences). We experimentally demonstrated the scheme based on vacuum shot noise with conventional devices at a generation rate of 25M samples per second. Our scheme takes advantage of the Gaussian-distributed profile of quantum random sources. The impacts of practical issues, strictly monitored by the GoF test, do not essentially alter the Gaussian profile and are consequently eliminated by the designed fast post-processing method. We thus provide a novel method of effectively generating Gaussian-distribution random numbers. We have to admit that, even though other QRNG schemes would face the cost of a uniform-to-Gaussian conversion procedure, the generation rate of our system is arguably inadequate for a practical continuous-variable QKD system [52,53]. However, we have demonstrated the feasibility of this kind of QRNG, and both factors limiting the generation rate have ample room for improvement. Firstly, the frequency of our system is quite low, due to the limited detector bandwidth. By using a balanced detector and an ADC with higher bandwidth, the generation rate can be improved by at least one order of magnitude. Secondly, despite the amplification, the amplified vacuum fluctuation is still too small compared with the sampling range, and thus fails to make full use of the ADC. Security is another issue of extreme significance to a QRNG system. Although we have estimated the min-entropy in the trusted-device scenario and operate accordingly, it is not totally clear whether the MSB method in the post-processing phase substantially eliminates the classical noise. The security of Gaussian-distributed schemes needs further discussion, and we are keen to follow related work. Conflicts of Interest: The authors declare no conflict of interest. Appendix A. Goodness of Fit Tests The Kolmogorov–Smirnov (KS) test is the primitive one-parameter GoF test for the Gaussian distribution [60]. The KS test describes closeness by the distances D_i between the empirical and theoretical distributions, where Y_i is the sample data in ascending order, F(·) is the CDF, and N is the sample size. The final test statistic is the maximal distance among the D_i, D = max_i D_i, with a significance level of α = 0.01. Unfortunately, a Gaussian-distributed QRNG must pay extremely close attention to the tails of the PDF, due to the large amount of information they carry, which the KS test cannot capture properly. Therefore, a modified GoF test, the Anderson–Darling (AD) test, is adopted [61], with test statistic A². The AD test has a critical value that depends only on α and is distribution-free. In practice, we choose the AD test over its alternatives (the KS test as well as other GoF tests, namely the Jarque–Bera and Lilliefors tests, etc.) as the main GoF test, for its convenience, universality and high sensitivity to the profile [62].
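For completeness, the two tests of Appendix A can be run with their SciPy implementations, as sketched below on a simulated sample. Note that SciPy's Anderson–Darling routine reports critical values at a fixed set of significance levels rather than the single α = 0.01 level quoted above, and fitting the mean and standard deviation from the sample (as done here for the KS test) is a simplification.

```python
# Running the KS and Anderson-Darling normality tests on a simulated sample
# with SciPy (stand-ins for the GoF checks described in Appendix A).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 100_000)

# KS test against a normal CDF with the sample's own mean and std.
d_stat, p_value = stats.kstest(x, 'norm', args=(x.mean(), x.std()))
print(f"KS: D = {d_stat:.4f}, p = {p_value:.3f}")

# Anderson-Darling test; the statistic is compared with tabulated critical values.
ad = stats.anderson(x, dist='norm')
print(f"AD: A^2 = {ad.statistic:.3f}")
for cv, sl in zip(ad.critical_values, ad.significance_level):
    print(f"  reject at {sl}% level" if ad.statistic > cv else f"  pass at {sl}% level")
```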
8,302
2020-06-01T00:00:00.000
[ "Physics" ]
Analysis of the Value Creation Process of the Platform-Driven Scientific and Technological Innovation Value Chain: the Perspective of Informationization At present, the innovation environment of the science and technology innovation platform is constantly changing, having gradually shifted from scientific and technological innovation among single elements to information comparison across the whole innovation value chain. In this paper, science and technology innovation platforms are divided into four types: pure research platforms, industrial technology research platforms, business incubation platforms and public service platforms. The paper analyzes in depth the informationization process of the innovation value chain driven by science and technology innovation platforms, covering basic research, applied research, experimental development and commercialization. By analyzing the role of knowledge capital, technology capital, venture capital and other information resources in science and technology innovation platforms, it explores the value flow among platforms. It is found that the stock capital and variable capital of scientific and technological innovation do not undergo a single transformation, but rather realize the increment and cyclic accumulation of income step by step along the innovation chain. The research expands the analytical scope of informatization theory, traces the value creation process of science and technology innovation platforms, and contributes to risk avoidance and benefit distribution. INTRODUCTION With the continuous improvement of China's scientific and technological innovation capability, the international influence of its basic research and applied research has been greatly strengthened, and the construction of scientific and technological innovation platforms is constantly improving. Since the 13th Five-Year Plan, China has entered a decisive period of building a well-off society in an all-round way and entering the ranks of innovative countries. In this critical period of building a powerful country in science and technology, it is necessary to implement the strategy of innovation-driven development in depth and comprehensively deepen the reform of the science and technology system to enhance scientific and technological innovation. The innovation environment is constantly changing, having gradually shifted from technological innovation among individual elements to comparison among whole innovation value chains, with each link in the chain progressing step by step to promote value increment. The Fifth Plenary Session of the 19th CPC Central Committee pointed out that the focus of self-reliance in science and technology lies in breaking through key core technologies, effectively connecting the "first mile" of knowledge innovation with the "last mile" of technological innovation and commercial application, and opening up the value chain of scientific and technological innovation. The construction of scientific and technological innovation platforms, bearing the important mission of R&D, application and marketization, is an important measure in China's innovation construction, and plays an important role in improving national innovation capability and opening up the value chain of scientific and technological innovation.
At present, research on science and technology innovation platforms concerns mostly the concept of the science and technology innovation platform [2], analysis of the main actors constructing such platforms [3][4], and the innovation efficiency and evaluation systems of science and technology innovation platforms [5][6], but rarely combines their classification with value creation. Research on the innovation value chain usually focuses on "innovation efficiency" [7] while ignoring the "value creation process" [8], neglecting the internal structure of the value chain, which thus becomes a "black box". The innovation value chain highlights the structure and complexity of the process of commercializing knowledge, and emphasizes the role of skills, capital investment and other enterprise resources in the process of value creation, which is a multi-stage, multi-input process [9]. Therefore, it is of great theoretical significance to study the value creation process of the scientific and technological innovation platform within the innovation value chain. This study classifies science and technology innovation platforms as a whole and, on the basis of revealing the value chain and value creation process of science and technology innovation platforms, clarifies the value-added process of different types of science and technology innovation platforms and the process of knowledge creation and diffusion, which is conducive to risk avoidance and benefit distribution, thus opening the "black box". THE DEFINITION AND CLASSIFICATION OF THE SCIENCE AND TECHNOLOGY INNOVATION PLATFORM The parent concept of the science and technology innovation platform is the platform itself. The word "platform" was first proposed by Henry Ford in Modern Man and applied in the engineering field. Later, Meyer and other scholars discussed it in depth and pointed out that a platform is a collection of product-chain shared data, including processes, personnel, parts and knowledge, which can be divided into technology platforms and product platforms [10]. These platforms already possessed the functions of innovation platforms: they can not only provide support for research and development, but also help enterprises formulate core competitive strategies [11]. In 1998, the American Competitiveness Commission pointed out in the report "Going Global: New Situation of American Innovation" that the connotation of the innovation platform mainly includes three aspects: first, providing infrastructure and essential innovation resources for innovation activities, including human resources and knowledge resources; second, providing the necessary conditions for the transformation of innovation achievements, including basic services such as financing and auditing; and third, providing intellectual property protection and market access so that innovation investors can recover their innovation investment. Facing fierce international competition in science and technology, countries all over the world have made building first-class scientific and technological innovation platforms an important strategic decision. The Max Planck Institutes in Germany, the scientific and technological innovation service platform in Ireland and the product innovation platform in India are all different practical forms of scientific and technological innovation platform construction [12].
In 2002, China's Ministry of Science and Technology put forward the idea of "a large platform for science and technology" to address the weak basic conditions for science and technology. In 2006, the Outline of the National Medium- and Long-Term Science and Technology Development Plan (2006-2020) once again emphasized the importance of scientific and technological infrastructure, and a large number of scientific and technological innovation platforms entered a period of development, jointly promoting the process of scientific and technological innovation. The 14th Five-Year Plan and the Outline of Long-Range Objectives Through 2035, issued in 2021, pointed out that scientific and technological innovation platforms provide reliable support for the high-quality development of science and technology, and that they must be built according to local conditions while keeping the overall situation in view, so as to realize the effective convergence of the innovation chain. Nowadays, the construction of scientific and technological innovation platforms has gradually formed a platform system that is based on R&D platforms, supported by public service platforms and focused on industrialization platforms, covering basic research, applied research, experimental development and commercialization [13]. At the same time, great changes have taken place in the relationship between basic research and applied research. At first, Bacon proposed that scientific research follows a single linear model from basic research to applied research, which is irreversible [14]. In 1945, Bush reinforced this view in the report Science, the Endless Frontier, pointing out that there is an irreconcilable contradiction between basic research and applied research, and that applied research always drives out basic research [15]. For the following 40 years, the linear model was respected by scholars. In the 1990s, the American scholar Stokes pointed out that Bush's separation of basic research and applied research was incomplete, ignoring the two-way interaction between basic research and applied research and the diversity of industrial development [16]; according to the attributes of knowledge and application, he divided scientific research into pure basic research, use-inspired basic research, pure applied research, and skill training and experience arrangement, giving scholars a new understanding of the relationship between them [17]. On this basis, science and technology innovation platforms can be divided into pure research platforms, industrial technology research platforms, business incubation platforms and public service platforms (Figure 1). The pure research platform mainly carries out basic research and provides theoretical support and knowledge supply for commercialization and industrialization institutions. The industrial technology research platform is connected with both the market and the pure research platform, and is mainly engaged in basic research and applied research for the purpose of commercialization. The business incubation platform conducts pure applied research, using existing basic theories to solve practical problems and incubate high-tech enterprises. The public service platform integrates scientific and technological resources and provides public services for scientific research and the transformation of scientific and technological achievements. The Evolution of the Innovation Chain The innovation defined by early innovation theory begins with the emergence of new technology.
Schumpeter held that innovation is realized within enterprises: new production factors and production conditions are recombined into the production system, and excess profits are obtained by developing new products and opening up new markets [18]. Freeman later extended the concept of innovation to the creation and diffusion of inventions, regarding the first introduction of a new invention into commerce as innovation [19]. In these definitions, innovation begins with invention, and invention begins with technological innovation. Scientific and technological innovation has since undergone revolutionary changes, however. Scholars [20][21][22] have found that technological progress increasingly stems from knowledge innovation, and the source of innovation has expanded from applied research (technological innovation) to basic research (knowledge innovation): the discovery of new knowledge and theory directly or indirectly drives technological innovation. Scientific and technological achievements are moving from discovery and invention to commercial application ever more quickly, and knowledge breakthroughs in fields such as new energy and biotechnology can be rapidly turned into new technologies and industrial development. General Secretary Xi Jinping has pointed out that the transformation from "science" to "technology" has become the main feature of the global scientific and technological revolution: scientific and technological innovation cannot stay in the laboratory but must take the industrialization of major basic research results as its ultimate goal in order to promote economic development, and only by deploying ahead of time and mastering core technologies such as basic, asymmetric, cutting-edge, and disruptive technologies can the transformation toward keeping pace and leading be realized [23]. This fully shows that the core technologies of scientific and technological innovation originate in the basic research stage and that the source of innovation has moved forward to basic research; the difficulty of industrial innovation has shifted from technological problems to major scientific problems. This change breaks down the boundary between basic research (knowledge innovation) and applied research (technological innovation) and promotes their close connection and integration. Only when the ideas generated in the knowledge innovation stage enter the innovation system and combine with the technological innovation stage can original and disruptive technologies emerge to advance the innovation-driven strategy. Similarly, the end point of scientific and technological innovation has also changed. It used to be generally believed that innovation was realized as soon as there were new patents or inventions [24]. However, given the current state of scientific and technological development and China's innovation-driven strategic deployment, new technologies alone are far from sufficient; only when new technologies are applied and successfully enter the market to complete commercialization is innovation fully realized. The above analysis shows that the science and technology innovation chain should start from basic research, that is, knowledge innovation, and end with the commercialization of scientific and technological achievements.
This process comprises three main stages. The upstream stage is the basic research link, which is the source of applied research. The midstream stage, the applied research stage, transforms the results of basic research into new technologies. The downstream stage, which may also be called the realization of innovation value, applies new technologies to products, forming new products and pushing them to market to complete the commercialization of innovation results.
The Positioning of the Science and Technology Innovation Platform in the Innovation Chain
Each link of the science and technology innovation chain involves different innovation content. In the upstream stage, the basic research link, basic scientific and technological innovation is carried out. This role is played by the pure research platform, which produces many new scientific discoveries that either overturn or improve existing theories and methods, change the relationships among technical components, and affect enterprises and markets alike. The midstream stage, the applied research link, carries out applied scientific and technological innovation, realized by the industrial technology research platform. Innovation at this stage builds on the achievements of the previous stage and on the existing theoretical base and produces new technologies or upgrades of existing ones. Changes in technology in turn affect the production base of products, alter production processes, and yield new products. Of course, a new discovery does not produce only one technology; it may bring about many technological advances. Once applied innovation ends, scientific and technological innovation enters the downstream stage of experimental development and commercialization, which combines scientific and technological innovation with industrial innovation. New discoveries and technologies fulfill their mission only if they are applied to products, successfully pushed to market, and win profits from customers; otherwise it is meaningless for them to remain in the laboratory. The definition of these three stages is essentially consistent with Clark's three-stage view of innovation: when new achievements emerge from basic innovation, the subsequent innovation process changes accordingly, and new technological and business innovation is set in motion [25]. Although the source of scientific and technological innovation has extended to basic research, the role of applied research has not weakened at all. At this stage, innovation achievements change from intangible to tangible, from ideas to reality, and new technologies and ideas are produced, laying the foundation for new products and processes. In addition, basic research injects new vitality into subsequent scientific and technological innovation, and the governments, universities, and research institutes that conduct basic research have also entered the science and technology innovation chain, improving the efficiency of innovation. Previously, there were two main types of science and technology innovation platforms engaged in applied research: enterprises that built their own laboratories and developed technologies serving market needs and corporate profits according to consumer demand, and professional applied research institutions specializing in the development of new technologies.
Each has its own specialization, and both of these platforms are aimed mainly at technological innovation, with somewhat limited capacity for knowledge innovation. The extension of the source of innovation brings universities and scientists into the chain of technological innovation and R&D, where enterprises in the innovation system are also located; cooperation between the two sides has formed a trend of industry-university-research collaborative innovation [26]. The downstream stage is the key stage of scientific and technological innovation: the success rate of transforming technological achievements determines the success or failure of innovation. Relying on the business incubation platform, technological achievements are continuously transformed to promote the commercialization of scientific and technological achievements. After entering the experimental development stage, enterprises still need to keep innovating in order to fully realize the value of new technologies; innovation at this point is generally improvement-oriented and combined with market demand. The commercialization of innovation results is mainly measured by whether they are accepted by the market and generate profits [9]. In general, a new technology developed in the applied research link is directly adopted by the cooperating enterprises, formed into products or services, and finally realizes profits through sales; technologies not needed by the cooperating enterprises can be adopted by other enterprises through technology transfer or transactions [22]. In addition, in the current environment that encourages researchers to start businesses, it is also a commercial path for researchers to found companies around new technologies and turn them into new products. Only success at this stage marks the success of the whole innovation chain. Different types of science and technology innovation platforms play different roles in the innovation process. From the above analysis, in the basic research stage, that is, the knowledge innovation stage, the pure research platform mainly produces the knowledge needed for innovation activities and pursues theoretical breakthroughs. The industrial technology research platform connects knowledge and the market, engages mainly in applied research while also producing use-inspired basic research, and serves the midstream of scientific and technological innovation, that is, the technological innovation stage. The downstream stage mainly involves business model innovation and market innovation and requires enterprises to transform applied research results into commodities, at which point the business incubation platform comes into play (Figure 2). The knowledge innovation of the pure research platform lays the foundation for applied research, and applied research in turn produces new knowledge. Throughout this process, the public service platform provides scientific and technological resources for scientific research and promotes the transformation of scientific and technological achievements. The public service platform and the business incubation platform are catalysts for the continuous development of scientific and technological innovation activities.
THE VALUE-ADDED ROLE OF THE SCIENCE AND TECHNOLOGY INNOVATION PLATFORM IN THE INNOVATION VALUE CHAIN
The concept of the value chain was first put forward by Porter of Harvard Business School in Competitive Advantage: the value chain is used by an enterprise to analyze the sources of its internal competitive advantage and is the sum of the value activities through which the enterprise achieves value increments via a series of actions [27]. The value chain divides an enterprise's strategic activities into distinct but interrelated units. Porter's value chain structure consists of primary activities, which directly add value, and support activities, which provide support without creating value themselves. Primary activities include inbound and outbound logistics, production, marketing, and service; they constitute the enterprise's production process and directly bring added value. Support activities include procurement, technology development, personnel management, and infrastructure management; they do not directly generate value but increase it by supporting the primary activities. Differences in value chains among enterprises give rise to different competitive advantages. Porter's value chain is thus composed of the company's production and operation links and of the activities through which various resources are put into operation, with the production process as the main line and other activities providing support. Applying Porter's thinking to the field of scientific and technological innovation yields the science and technology innovation value chain. In this value chain, basic research, applied research, experimental development, and commercialization constitute the primary activities: they are the basic process through which innovation is realized and can directly create value and achieve value appreciation. The support activities are those that invest resources in the innovation process and keep innovation activities going, including human resource management, infrastructure investment (scientific and technological resources), policy support, and financial management; they cannot create value directly but add value by assisting the primary activities. In the past there were two main modes of innovation activity: professional research institutions conducted technical research, or enterprises set up laboratories and carried out R&D themselves. This mode neglected the importance of basic research, breaking the innovation value chain and leaving its links poorly connected. The total value of innovation results is the outcome of the whole innovation value chain, but the chain contains many links, and innovation value forms at each of them. The innovation process is one of multiple inputs and multiple outputs, so innovation input and innovation income differ at each stage [21].
Upstream of the Innovation Value Chain: The Basic Research Stage
In the basic research stage, R&D objectives differ considerably from those of enterprises: the main pursuits are the advancement of scientific and technological capability and breakthroughs in scientific theory, and universities, research institutions, and scientists play the major roles.
For the pure research platform, therefore, all inputs serve continuous scientific research that yields new discoveries and ideas, so research funding and scientific and technological resources must be fully guaranteed, and the government also needs to provide policy incentives to attract excellent researchers to the platform. In this link, innovation income cannot be measured simply in money; it is largely social in nature and is embodied in research topics, papers, and scientific and technological works. The value of these achievements includes the value of the supporting activities and the value of the knowledge required for basic research, and it is passed downward along the innovation value chain.
Midstream of the Innovation Value Chain: The Applied Research Stage
The applied research stage is mainly completed by the industrial technology research platform, which connects the market and the pure research platform and is built by diverse actors, including government, enterprises, research institutes, and universities. At this stage, innovation inputs change: in addition to further investment in supporting activities and the provision of human, material, and financial resources, the results of basic research must also be fed into technological innovation. The discoveries and ideas produced by basic research are of great value, but that value does not enter technological innovation all at once; it generally enters the applied research stage in several portions. According to Marx's value formula, the value of a product consists of constant capital, variable capital, and surplus value (W = c + v + m) [28]. In innovation activities, constant capital generally refers to capital and material inputs, variable capital consists of the researchers engaged in R&D, and surplus value is expressed as the value of new inventions and discoveries; together they constitute the value of innovation achievements. Of course, the value of the pure research platform's innovation achievements is only potential; the industrial technology research platform connects the market with basic research and strives to realize the market value of innovation achievements and achieve their value increment.
Downstream of the Innovation Value Chain: The Experimental Development and Commercialization Stage
After entering the downstream stage of the innovation value chain, the outputs of the upstream and midstream stages enter the value system of innovation achievements as new inputs, realizing value through transfer on the one hand and through products on the other. Technology transfer mainly realizes value in the form of intellectual property remuneration, with the right to use a new technology acquired through a transaction or a one-time buyout. Technologies that are not transferred enter the business incubation platform and continue to gain value there. The business incubation platform provides an excellent environment and conditions for the transformation of scientific and technological achievements and for the innovation of science and technology enterprises, including not only capital and hardware facilities but also soft support such as information resources, policies, and laws, helping enterprises apply new technologies and realize their value.
In this process, newly invested capital enters the innovation value chain system, which not only realizes the potential value of the upstream innovation achievements but also achieves commercialization through continuous adjustment to the market. To realize commercialization, products must not only be improved and revised but the business model must also be adjusted to help the products gain market acceptance; these actions directly and indirectly increase the value of innovation achievements. In addition, the state strongly supports scientific and technological personnel in starting their own businesses, and researchers founding companies around their own innovation achievements is another way to commercialize them. Such researchers can effectively combine market demand with R&D demand and improve the application value of innovation achievements, and venture capitalists will invest in and support these entrepreneurs. This path can increase the value of innovation achievements, but it also carries entrepreneurial risk. Generally speaking, the closer to the market, the smaller the potential benefits and the more intense the competition; the farther from the market, the weaker the competition and the greater the potential benefits [29]. Beyond realizing the potential value of innovative achievements, the participation of venture capitalists also expands their market value and helps ensure entrepreneurial success; these actions, too, add to the value of innovation achievements. Throughout the innovation chain, public service platforms provide the various resources needed for innovation activities, and the several kinds of science and technology innovation platforms cooperate with one another to keep innovation activities going. However, the value of innovation achievements cannot be understood simply as the sum of the manpower, material resources, and financial resources on the platforms; it also includes the total value of the multi-stage outputs and of the various innovative behaviors (Figure 3). The development of each link requires the push of the corresponding form of capital. The key to basic research entering applied research lies in the transformation of knowledge innovation achievements, where knowledge capital is the prerequisite for technological innovation; when applied research enters the downstream link of experimental development and commercialization, the key lies in technology adoption, business model innovation, and market innovation combined with market demand, where the financial capital and human capital of enterprises play a major role. Whether it is knowledge innovation in the early stage, business innovation in the later stage, or marketing, the aim is to maximize the value of innovation achievements, and that value accumulates throughout the process, finally realizing value added. At the same time, the social and economic benefits brought by the flow of scientific and technological achievements through the chain shorten the distance between science and technology and the economy, tighten the links between stages, stimulate new social and market demands, and start a new round of innovation.
ANALYSIS OF VALUE FLOW BETWEEN SCIENCE AND TECHNOLOGY INNOVATION PLATFORMS
The commercialization of scientific and technological innovation achievements is a comprehensive system with multiple actors and complex processes. The traditional transformation process generally consists of technology development followed by technology popularization and application: science and technology intermediaries transfer the achievements of universities and research institutes to enterprises for further promotion. This arrangement creates information asymmetry between universities, research institutes, and enterprises, hampers the transformation of scientific and technological achievements and enterprise innovation, and severs technology from demand [30]. The innovation value chain makes the stages of scientific and technological innovation interlock: each link can transfer achievements and match demand directly through the science and technology innovation platform, and each stage and its participants have their own value pursuits. Cooperation among science and technology innovation platforms and mutually beneficial, win-win relationships among industry, universities, and research institutes are therefore of great significance to the ultimate success of innovation activities. The key to innovation activities in the value chain lies in the transition between stages and the push provided by capital. The pure research platform mainly provides knowledge capital for the innovation value chain, which is the source of innovation activities and a driving force of economic growth. New growth theory treats knowledge as an independent variable in the growth model and holds that the economic growth brought by knowledge lies not only in its own value added but also in the fact that knowledge spills over to other factors, such as capital and labor, to raise income [31]. Knowledge is the source of technological progress, and its value flows along the innovation chain to the technological innovation link. As knowledge resources are transferred between the pure research platform and the industrial technology research platform, the goals of knowledge innovation are adjusted according to feedback from the technological innovation stage. Of course, not all knowledge resources used for technological innovation are newly created: the pure research platform and the public service platform continuously transfer value to obtain existing knowledge capital, while the public service platform also adds the new knowledge generated by the pure research platform to its database to keep its resource stock up to date; this process does not generate a value increment but is a simple exchange of value. The task of knowledge innovation is mainly carried out by universities and research institutes. As centers of knowledge and talent cultivation in China, colleges and universities have strong disciplinary knowledge bases and people who hold that knowledge, which allows a point-to-point connection between research content and professional disciplines. After entering the R&D link, they form a pattern of industry-university-research collaborative innovation, constituting an innovation system in which enterprises, governments, and academia communicate with one another [32].
Research institutes, for their part, accurately grasp the direction of China's scientific and technological innovation and have strong knowledge sensitivity and acuity as well as abundant scientific and technological resources to promote innovation effectively. The knowledge capital generated by the pure research platform does not enter the technological innovation stage all at once; as stock capital, it contributes to innovation many times over. Once knowledge capital enters the applied research stage, the industrial technology research platform begins to play its role. In the innovation value chain, the industrial technology research platform is the most dynamic, derivative, and malleable key node: a series of innovative products and innovative knowledge can be derived from it, forming a tree-like, branching innovation value chain network [33]. After taking up the upstream knowledge capital, it begins technological innovation. It should be pointed out that not only the knowledge capital but also the new technologies produced generate spillovers, and incubating a technology is a venture investment that may not succeed, so individual investment is often unable to bear it. The government therefore has a responsibility to participate in knowledge innovation and technological innovation and to provide policy support, capital support, and other resources to promote technological innovation; of course, government investment can only guide and cannot replace enterprises. Because the industrial technology research platform must consider market demand and the execution capability of enterprises on the one hand, and the novelty and practicality of knowledge capital on the other, it requires the participation of universities, research institutes, and enterprises, and each actor on the platform obtains the resources it needs in innovation activities according to its own comparative advantages, effectively avoiding the disconnection between the economy and science and technology [34]. These inputs are eventually reflected in the new technologies produced; part of the value flows to the next link along the innovation chain, while part flows back to the pure research platform and the business incubation platform as information feedback. When the value of innovation achievements flows to the downstream stage, the business incubation platform is added as a catalyst for scientific and technological innovation. New knowledge capital and technology capital attract investment for new projects or science and technology start-ups. At this stage, scientific and technological innovation must not only continue but also take into account market activities, business models, and enterprise capital. Besides material capital, human capital within enterprises is key to the success of innovation: the final realization of commercialization is measured by the sales revenue of new products, so management, innovation, and sales talent largely determine the success or failure of commercializing innovative achievements [35]. Enterprises on the business incubation platform will also improve and innovate on the achievements of the industrial technology research platform to adapt them to the enterprise and the market and realize their potential value.
In this link, innovation and entrepreneurship become an organic whole, the overall value of the enterprise rises, and the value of innovation achievements increases further. At this point the commercialization of scientific and technological innovation is complete, and information about market reactions and results is fed back to the starting point of the innovation value chain, launching a new round of innovation.
CONCLUSION
The innovation value chain is an aggregate of organizational node chains: it treats the process from the source of scientific and technological innovation through large-scale development to final commercialization as a chain that links diverse innovation actors, each with its own needs and interests, through the science and technology innovation platform [33]. Its particular character produces increasing returns and cumulative, circular effects on innovation actors, the innovation environment, and policy support [36]. In this process, the value flow of each link has its corresponding value pursuit and function. The value of innovation achievements flows forward along the innovation chain, promoting the transformation of scientific and technological achievements and accelerating the fulfillment of social and market needs; it also flows in reverse, feeding the demands of the next link back upstream and making upstream R&D and innovation more targeted and better matched to demand. At the same time, value flow in the chain is not single-track: it can flow across links and platforms. Downstream information can be transmitted directly to the pure research platform, bypassing the industrial technology research platform, thus turning value transmission into a network and effectively avoiding the errors caused by purely linear information transmission. However, the market changes rapidly, and the construction of science and technology innovation platforms is still being explored. The science and technology innovation value chain has no fixed structure, and the quantitative evaluation of its value transfer requires systematic observation and follow-up research. Whether the value creation process of the science and technology innovation value chain described in this paper can be applied to the future development of scientific and technological innovation needs more case analysis and further empirical research to test and discuss.
7,912.8
2021-01-01T00:00:00.000
[ "Business", "Engineering", "Economics", "Computer Science" ]
Design and Testing of a Compliant ZTTΘ Positional Adjustment System with Hybrid Amplification
This article presents the design, analysis, and prototype testing of a four-degrees-of-freedom (4-DoFs) spatial pose adjustment system (SPAS) that achieves high-precision positioning in 4 DoFs (Z/Tip/Tilt/Θ). The system employs a piezoelectric-driven amplification mechanism that combines a bridge-lever hybrid amplification mechanism, a double four-bar guide mechanism, and a multi-level lever symmetric rotation mechanism. By integrating these mechanisms, the system achieves low coupling, high stiffness, and a wide stroke range. Analytical modeling and finite element analysis are employed to optimize the geometric parameters. A prototype is fabricated, and its performance is verified through testing. The results indicate that the Z-direction feed microstroke is 327.37 μm, the yaw motion angle around the X and Y axes is 3.462 mrad, and the rotation angle around the Z axis is 12.684 mrad. The x-axis and y-axis motion magnification ratio can reach 7.43. Closed-loop decoupling control experiments for the multiple-input-multiple-output (MIMO) system using inverse kinematics and proportional-integral-derivative feedback controllers were conducted. The results show that the Z-direction positioning accuracy is ±100 nm, the X- and Y-axis yaw motion accuracy is ±2 μrad, and the Z-axis rotation accuracy is ±25 μrad. The ZTTΘ mechanism thus proved feasible and advantageous, demonstrating its potential for precision machining and micro-nano manipulation.
Introduction
With the rapid development of the modern manufacturing industry, the accuracy requirements for ultra-precision machining and inspection keep rising. Submicron- and even nanometer-level alignment systems are essential for applications such as wafer-level manufacturing and inspection, mass transfer and packaging of miniaturized semiconductor products, robotic micro/nanomanipulation, and precision optical device polishing [1][2][3][4]. However, traditional high-precision motion systems with 3 DoFs (XYZ) fail to account for pitch, yaw, and small angular deviations [5]. At the spatial scale, these errors pose critical challenges for the increasing miniaturization and integration of devices [6][7][8]. Moreover, traditional flexure-based nanopositioning stages alone can provide submicron or nanometer accuracy but struggle to meet large travel range requirements [9]. Developing a large-stroke nanopositioning system with multi-DoF levelling and deviation-compensation control is therefore critical for the advanced submicron- and nanoscale manufacturing described above. To satisfy the need for ultra-high-precision spatial pose adjustment, a number of motion systems have been proposed. According to the number of drive modules, these systems can be divided into two categories: (1) Three-branch-chain parallel systems: three points determine a plane, and the plane's position can be precisely controlled through the parallel connection of three chains. Although this structure is simple, the accuracy of a single branch chain directly determines overall performance, so motion stroke (only a few tens of micrometers) must be sacrificed to achieve high-precision branch chains [10][11][12][13][14].
(2) Multi-branch-chain differential systems: two or more non-intersecting lines define a surface. By manipulating two points that move independently along a straight line, the position of the line, and hence of the surface, can be adjusted. This approach enhances accuracy by allowing a controllable differential and enables a larger branch stroke (up to the centimeter level). However, the presence of redundant points makes the motion logic and mechanical structure more complicated, which can result in poor stability of the motion stage [15][16][17][18][19]. A great deal of research on multi-degree-of-freedom motion stages has been carried out. In stage design, Zhang proposed an ingenious sinusoidal corrugated flexure linkage featuring structural symmetry and independent planar motion guidance for the two axes, with a stroke of approximately 130 µm per axis, maximum cross-talk below 2.5%, and a natural frequency of 590 Hz [20]. Zhang also designed a nanopositioning stage employing a self-damping moving magnet actuator (SMMA) for long-stroke operation, supported by flexure guides; it delivers 20 nm resolution within a ±5 mm motion range and maintains tracking errors below 0.1% of trajectory amplitude for 1 Hz sinusoidal and triangular commands [21]. Yang designed a long-stroke nanopositioning stage with annular flexure guides and a classical feed-forward PID (FFPID) controller, achieving a ±5 mm motion range, 20 nm resolution, and 20 nm positioning accuracy at the maximum output position [22]. To further improve stage performance, many novel mechanisms and methods have been proposed. Li proposed Compliant Building Elements (CBE) to create practical flexure layouts, allowing early-stage design flexibility by assembling CBE blocks like LEGO bricks [23]. Niu introduced a corrugated dual-axial mechanism with structural symmetry, independent planar guidance for the two axes, a stroke of around 130 µm per axis, maximum cross-talk below 2.5%, and an operating frequency of approximately 590 Hz [24]. Panas combined cross-pivot flexures to boost the stiffness, load capacity, and range of nanopositioning systems [25]. Al-jodah created a compact, large-range three-degrees-of-freedom (3-DOF) micro/nanopositioning mechanism with leaf springs and voice coil motors (VCMs) [26]. Ling proposed an extended dynamic stiffness modeling approach for concurrent kinetostatic and dynamic analyses of planar flexure-hinge mechanisms with lumped compliance [27]. Novel control and sensing strategies have also been proposed. Omidbeike presented a sensing method that separately measures linear and angular displacements in multi-axis monolithic nanopositioning stages, providing enhanced accuracy [28]. Kuresangsai applied a linear time-varying (LTV) finite impulse response (FIR) prefilter to a flexure-based X-Y micro-positioning platform, reducing settling times from over 6 s to just 0.4 s [29].
However, few existing stages are directly suited to applications such as wafer surface defect detection and Mini/MicroLED chip transfer and packaging, which must balance large travel with high precision. Specifically, three-branch-chain stages offer ultra-high precision and are often used for nanoscale inspection but have very small strokes, while multi-branch-chain stages are suited to large instruments. There is therefore a lack of a spatial pose adjustment system (SPAS) with a hundred-micron motion stroke and hundred-nanometre accuracy to meet the needs of large-stroke, high-precision device fabrication and testing. With accuracy as the priority, a three-branch parallel structure is advantageous for this type of stage, so the key to developing such a pose adjustment system is to design a large-stroke, high-precision motion branch chain. To meet this requirement, this paper designs a ZTTΘ (Z/Tip/Tilt/Θ) SPAS with 4 DoFs. Taking 8-inch wafers and MiniLED backlight boards as the target applications, the device diameter is designed to be 200 mm. Because a smaller height-to-diameter ratio improves equipment integration and stability, the device height is only 75 mm. Considering the need for high precision and large stroke, the SPAS is based on a compliant structure with a simple, compact configuration and no motion friction [30,31]. The stage employs piezoelectric ceramics as actuators. To overcome the small stroke of piezoelectric ceramics, a bridge-lever composite compliant amplification mechanism is proposed. The high-precision motion of the designed SPAS is achieved with a MIMO PID controller. Performance testing shows that the Z-direction feed microstroke is 327.37 µm, the yaw motion angle around the X and Y axes is 3.462 mrad, and the rotation angle around the Z axis is 12.684 mrad; the Z-direction positioning accuracy is ±100 nm, the X- and Y-axis yaw motion accuracy is ±2 µrad, and the Z-axis rotation accuracy is ±25 µrad. This stroke and accuracy performance proves that the design is feasible and advantageous and demonstrates its potential to provide large-stroke, high-precision displacements and operations in precision fields such as wafer inspection and MiniLED chip transfer and packaging. The primary contribution of this work is the development of a compliant 4-DoFs system with hybrid amplification modules that achieves nanoscale positioning in 4 DoFs (Z/Tip/Tilt/Θ) to facilitate spatially accurate alignment. The rest of this paper is organized as follows: Section 2 presents the scheme design and operating principle of the SPAS. Section 3 models the integrated ZTTΘ SPAS. Section 4 covers size optimization and finite element analysis. Validation experiments and performance evaluation are provided in Section 5. Finally, the achievements and future work are summarized in Section 6.
Configurations and Working Principle
2.1. Structure of the ZTTΘ SPAS
As shown in Figure 1a, the SPAS is designed as a precision-guaranteeing device mounted on top of a large-stroke 3-DoFs (XYZ) macro motion stage. As shown in Figure 1b, the ZTTΘ SPAS consists of a Θ rotation module, three identical flexible connection modules, and three identical piezoelectric drive amplification modules. The stage's motion is produced by combining its motion branches, enabling micro feed motion in the vertical direction, two spatial sway motions, and rotational micromotion around the vertical axis. The configuration of the critical components of the SPAS is shown in Figure 2a,b. The stage consists mainly of the Θ rotation module and three parallel Z-axis motion branches. These branches are arranged 120° apart on a circle around the central axis of the positioning stage, with their bases fastened to the Θ rotation module. The Θ rotation module is connected to the flexible connection module, which in turn is connected to the piezoelectric drive amplification module; this module is fixed to the base with bolts. The Θ rotation module is driven by two piezoelectric ceramic actuators, while each piezoelectric drive amplification module is driven by a single piezoelectric ceramic actuator. The ZTTΘ stage presented in this paper provides 4 DoFs: a Z-axis feed micromotion, two spatial yaw motions (roll and pitch), and one rotation about the vertical axis, with an overall diameter of 200 mm and a height of 68 mm.
Design of the ZTT Module
This study employs a hybrid ZTT module integrating dual four-bar displacement guidance mechanisms and lever-bridge hybrid displacement amplification mechanisms. The piezoelectric amplification module of the SPAS is shown in Figure 3a. Although the lever-type amplification mechanism is simple, its non-compact layout makes it prone to lateral parasitic displacement. Conversely, the bridge-type amplification mechanism has a simple, compact structure, a high displacement amplification factor, and good linear output displacement, but its output stiffness is low and its dynamic performance inadequate. The lever-bridge hybrid amplification mechanism, which combines the lever amplification principle with the bridge mechanism's compression-rod (buckling-like) motion principle, retains the simple and compact structure, high displacement amplification factor, and good linear output displacement while improving the stiffness of the output end, thereby ensuring superior dynamic performance. To improve the decoupling ability and output stiffness of the mechanism, a double four-bar displacement guidance mechanism is integrated at the end of the lever-bridge hybrid displacement amplification module. A single parallel four-bar guidance mechanism, using the elastic deformation of straight-beam flexure hinges, enables approximately linear motion of the output end in the vertical direction, but its unavoidable coupling in the horizontal direction significantly compromises positioning accuracy. The double four-bar displacement guidance mechanism has a symmetrical structure with fixed ends, effectively eliminating coupling errors and ensuring minimal horizontal displacement loss. As shown in Figure 3b, the red dashed lines represent the motion and deformation of the designed module.
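For intuition about why the hybrid stage can turn the tens-of-micrometer piezo stroke into a several-hundred-micrometer output, a first-order rigid-body estimate treats the overall ratio as the lever ratio times the bridge ratio (roughly the cotangent of the bridge-arm inclination for small inputs). The Python sketch below uses hypothetical arm lengths, a hypothetical arm angle, and an assumed ~60 µm free piezo stroke; it ignores hinge compliance and load, so it overestimates the stroke relative to the compliance-matrix model developed in the following sections.

```python
import math

def lever_ratio(l_in: float, l_out: float) -> float:
    """Rigid-body lever amplification: output arm length over input arm length."""
    return l_out / l_in

def bridge_ratio(arm_angle_rad: float) -> float:
    """First-order rigid-body bridge amplification for a small input: cot(theta),
    theta being the inclination of the bridge arm to the input direction."""
    return 1.0 / math.tan(arm_angle_rad)

# Hypothetical dimensions (mm) and arm inclination -- placeholders, not the paper's values.
l_in, l_out = 6.0, 15.0              # lever input/output arm lengths
theta = math.radians(20.0)           # bridge arm inclination

ratio = lever_ratio(l_in, l_out) * bridge_ratio(theta)
stroke_um = 60.0 * ratio             # assumed ~60 um free stroke of the piezo actuator
print(f"ideal amplification ~ {ratio:.1f}, ideal output stroke ~ {stroke_um:.0f} um")
```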
Design of the Θ Module
The Θ rotation module of the SPAS, shown in Figure 4a, uses a two-stage lever displacement amplification mechanism in combination with straight-beam flexure hinges. A displacement or force is first input in the horizontal direction; the two-stage lever amplifies it and generates rotational motion about the center point at the output end. Compared with a single-output implementation of the rotating mechanism, using two symmetrical, parallel-connected two-stage lever displacement amplification mechanisms enhances the stability of the Θ rotation module and improves the stiffness of the whole module, benefiting both the horizontal stability and the stiffness of the stage. Applying the initial force or displacement simultaneously causes the output stage to rotate about its center point. However, the design of the connecting rod leaves the output stage under-constrained in the non-rotational directions, causing coupling errors: coupling along the X and Y axes in the horizontal plane, along the Z axis in the vertical direction, and rotational coupling about the X and Y axes in space. As shown in Figure 4b, the blue dashed lines represent the motion and deformation of the designed module. To mitigate the coupling error at the output end of the Θ rotation module and enhance its motion precision, a symmetric straight-beam flexure hinge is incorporated at the terminal output stage. This not only eliminates the coupling error but also enhances the output stiffness and motion accuracy, ensuring the overall dynamic performance of the mechanism and increasing its working bandwidth.
Static Modeling of the ZTTΘ Stage
Because the piezoelectric-driven amplification module is symmetric, its left half is analyzed first. The full coordinate system and the relevant parameters are shown in Figure 5. In the amplification module, C_{A_i} (i = 1-6) denotes the local flexibility matrix of point i of element A_i. When calculating the output compliance matrix, Hinge 2 has no significant impact and can be disregarded at this stage. Flexure Hinges 1, 3, and 4 in the amplification module's branch chain A_1A_3A_4 are therefore connected in series, and the local coordinate systems A_i-xy of Hinges 1, 3, and 4 are referred to the coordinate system A_4-xy.
Converting the flexibility matrices from the frames A_i-xy (i = 1, 3, 4) to the coordinate system A_4-xy involves the translation matrix S(l_i^j) and the rotation matrix R_i^j [32], where l_{A_iA_j,x} and l_{A_iA_j,y} (i = 1-6, j = 1-6) denote the X and Y components of the displacement vector from A_i to A_j. The series flexibility of Flexure Hinges 1, 3, and 4 of the branch chain is first expressed in the coordinate system A_4-xy, and the flexibility matrix of the branch chain A_1A_3A_4 is then expressed in the global coordinate system A-xy. The flexibility matrices of the remaining free-end local coordinate systems A_i-xy (i = 5, 6) of the flexure hinges are likewise transformed into the global coordinate system A-xy. The output flexibility matrix of the left half of the amplification module in the global coordinate system A-xy follows from the parallel connection of the dual four-bar displacement guidance mechanism and the lever-bridge hybrid displacement amplification mechanism. Since the piezoelectric-driven amplifier module is left-right symmetric, rotating the output flexibility matrix of the left half counterclockwise by 180 degrees gives the output flexibility matrix of the right half, and combining the two yields the output flexibility matrix of the complete piezoelectric drive amplification module, with its dual four-bar guidance and lever-bridge hybrid amplification, in the global coordinate system A-xy. The double four-bar displacement guidance mechanisms on both sides of the output stage of the amplification module are connected in parallel with the lever-bridge hybrid displacement amplification mechanism; the stiffness model of the entire module is shown in Figure 5. For the input side, the flexibility matrices of Flexure Hinges 1, 3, and 4 in the local coordinate systems A_i-xy (i = 1, 3, 4) are transformed into the coordinate system A_2-xy, giving in turn the series flexibility matrix of the branch chain A_3A_4 with Flexure Hinges 3 and 4 in A_2-xy, the series flexibility matrix of the branch chain A_1A_3A_4A_5A_6 with Flexure Hinge 2 in A_2-xy, and finally the input flexibility matrix and input stiffness of the piezoelectric drive amplification module. Figure 6 shows the motion diagram of the left half of the piezoelectric drive amplification module. The displacement amplification ratio, defined as the ratio of output displacement to input displacement, is derived from the flexibility matrices and the displacement relationship. Figure 7 shows the coordinate system of the Θ rotation module and the parameter variables of its straight-beam and straight-circular flexure hinges, which are similar to those of the piezoelectric-driven amplification module; here C_{B_i} (i = 1-7) denotes the local flexibility matrix of each point of the module. The branch chain B_1B_2B_3B_4B_5B_6B_7 and the straight-beam flexure hinge, connected in parallel, form the upper half of the Θ rotation module in the global coordinate system B-xy, which yields the output flexibility matrix of the upper half of the module in that frame. Because the Θ rotation module is symmetric, the output flexibility matrix of the upper half can be
rotated 180 degrees counterclockwise to obtain the output flexibility matrix of the lower half, and the two together give the output flexibility matrix of the Θ rotation module in the global coordinate system B-xy. The straight-beam flexure hinges on both sides of the output stage of the Θ rotation module are connected in parallel with the two-stage lever displacement amplification mechanism. The series flexibility matrix of the branch chain and Flexure Hinge 1 is expressed in the coordinate system B_I-xy, from which the input flexibility and input stiffness of the Θ rotation module follow. Figure 8a gives the dimensions of the two-stage lever displacement amplification mechanism within the Θ rotation module; this mechanism has a pronounced amplification effect, producing a substantial output displacement at the terminal stage. Figure 8b shows the simplified analysis diagram of the two-stage lever displacement amplification mechanism, in which the straight-circular flexure hinge at the fixed end acts as a rotating pivot; its deformation is represented by a linear spring moving perpendicular to the rigid link and a rotational spring rotating about the pivot. In the actual calculation, the deformation of the straight-circular flexure hinge is computed from its compliance coefficient matrix, from which the displacement amplification ratio of the mechanism is obtained.
Dynamic Modeling of the ZTTΘ Stage
The Lagrange equation method is used to establish the dynamic differential equations of the ZTT module (marked A) and the Θ module (marked B). The dynamic model of the left half of the piezoelectric drive amplification module is shown in Figure 9a, and that of the upper part of the Θ rotation module in Figure 9b. By simplifying the straight-circular and straight-beam flexure hinges into rotational torsion springs, the system is reduced to a spring-mass system. The kinetic energy of the system comes from the motion of the rigid links in the mechanism, and the potential energy comes from the elastic deformation of the straight-circular and straight-beam flexure hinges. The input displacement coordinates of the system are (q_1, q_2), where q_1 is the displacement in the x-axis direction, q_2 is the displacement in the y-axis direction, and m_k is the mass. The potential energy U of the system is the sum of the elastic potential energy stored in the flexure hinges, and the kinetic energy T is that of the moving links. Substituting them into the Lagrange equation, d/dt(∂L/∂q̇_i) - ∂L/∂q_i = F_i with L = T - U, yields the undamped free vibration frequency of the piezoelectric drive amplification module.
Multi-Objective Optimization
According to the static and dynamic models, the main dimensional parameters of the piezoelectric-driven amplification module are the width b of the straight beam, the dimensions of the straight-circular flexure hinge (r_c, t_c), the dimensions of the straight-beam flexure hinge (l_s, t_s), and the dimensions of the lever-bridge hybrid displacement amplification mechanism (l_AB, l_AC, l_OC, l_OD), as shown in Figure 5.
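In code, the bookkeeping behind this static and dynamic model reduces to three operations: expressing each hinge's local flexibility matrix in a common frame via a rotation-plus-translation congruence transform, summing flexibilities for hinges in series, and summing stiffnesses (inverse flexibilities) for branches in parallel; the resulting input stiffness also gives a crude single-mode frequency estimate. The following is a minimal numerical sketch of that pattern for 3x3 planar (x, y, θ) matrices, with made-up hinge values and an assumed moving mass; it is not the paper's actual matrices or dimensions.

```python
import numpy as np

def to_frame(C_local: np.ndarray, angle: float, dx: float, dy: float) -> np.ndarray:
    """Congruence transform of a 3x3 planar flexibility matrix (x, y, theta) to a
    reference frame rotated by `angle`, with the hinge offset by (dx, dy): C' = T C T^T."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.array([[c, -s, -dy],
                  [s,  c,  dx],
                  [0.0, 0.0, 1.0]])
    return T @ C_local @ T.T

def series(*Cs: np.ndarray) -> np.ndarray:
    """Flexure hinges in series: flexibilities add."""
    return sum(Cs)

def parallel(*Cs: np.ndarray) -> np.ndarray:
    """Branches in parallel: stiffnesses (inverse flexibilities) add."""
    return np.linalg.inv(sum(np.linalg.inv(C) for C in Cs))

# Hypothetical identical hinge flexibility (um/N, um/N, rad/(N*mm)) -- placeholder values only.
C_hinge = np.diag([0.02, 0.30, 1e-4])

branch = series(to_frame(C_hinge, 0.0, 0.0, 0.0),
                to_frame(C_hinge, 0.0, 8.0, 0.0),
                to_frame(C_hinge, np.pi / 2, 8.0, 6.0))
module = parallel(branch, branch)               # left and right halves in parallel

k_in = 1.0 / module[1, 1]                       # stiffness along the drive direction, N/um
m_eff = 0.05                                    # assumed effective moving mass, kg
f_est = np.sqrt(k_in * 1e6 / m_eff) / (2 * np.pi)   # N/um -> N/m before taking the root
print(f"input stiffness ~ {k_in:.2f} N/um, single-mode frequency estimate ~ {f_est:.0f} Hz")
```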
To determine the key dimensions that set the performance of the SPAS, the design is cast as an optimization problem in which the objective is the output displacement, so the objective function is taken to be the amplification ratio given in Equations (28) and (29). The main module dimensions of the SPAS are optimized using a genetic algorithm. (1) ZTT module: Limited by the dimensions of the piezoelectric ceramics, the overall thickness b_a is set to 18 mm, and to ensure the Z-axis stiffness of the piezoelectric drive amplification module, t_a2 is set to 0.8 mm. The remaining seven dimensional parameters (r_a, t_a1, l_a, l_AB, l_AC, l_OC, l_OD) are used as design variables. In the objective, u_out is the output displacement, K_I^A is the input stiffness of the amplification module, and R_A is its amplification ratio, both derived from the static model in Section 3; K_p1 and r_p1,0 are the stiffness and nominal stroke of the selected piezoelectric ceramic, PSt150/10/60 VS15. Taking into account the module's overall size, height, and wire-cutting gap, the ranges of the design variables and the optimal solution obtained are given in Table 1. The resulting displacement amplification ratio is 7.92, and the theoretical maximum output displacement is 323.29 µm. (2) Θ module: As shown in Figure 7, taking into account the dimensions of the piezoelectric ceramics and the compactness of the overall layout, some parameters are fixed: b_b = 12 mm, l_b = 7.8 mm, l_b5 = 7.5 mm, t_b2 = 11 mm, t_b3 = 8 mm, and t_b5 = 12 mm. The remaining six dimensional parameters (r_b, t_b1, l_b1, l_b2, l_b3, l_b4) are used as design variables. Similarly, to achieve a larger output rotation angle, the displacement amplification ratio of the Θ rotation module from the model in Section 3 is selected as the optimization objective. The ranges of the six design variables and the optimal solution obtained are given in Table 2; the resulting amplification ratio is 3.24.
Simulation
The model of the SPAS is built in the 3D design software UG NX10.0, and its static and dynamic characteristics are analyzed with the finite element analysis software ANSYS Workbench 19.0. In FEA, mesh fineness affects simulation accuracy, and the performance of the compliant SPAS hinges on its flexure hinges; to improve accuracy, the mesh size of the flexure hinges is set to 0.5 mm, while to improve speed and stability the mesh size of the rigid bodies is set to 1.5 mm. The material used in this study is AL7075-T6 (elastic modulus E = 71.7 GPa, Poisson's ratio µ = 0.33, density ρ = 2810 kg/m^3, yield strength σ = 503 MPa). The FEA results for the piezoelectric-driven amplification module and the Θ rotation module are shown in Tables 3 and 4, respectively.
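The optimization described above amounts to maximizing the model-predicted amplification ratio over bounded hinge and lever dimensions. The sketch below frames it that way using SciPy's differential_evolution as a stand-in for the paper's genetic algorithm; the objective is a placeholder surrogate (the real objective is the ratio from the static model), and the bounds are hypothetical, not the ranges in Tables 1 and 2.

```python
import numpy as np
from scipy.optimize import differential_evolution

def amplification_ratio(x: np.ndarray) -> float:
    """Placeholder surrogate for the model-derived ratio R_A(x);
    x = (r_a, t_a1, l_a, l_AB, l_AC, l_OC, l_OD) in mm. l_a is unused in this toy surrogate."""
    r_a, t_a1, l_a, l_AB, l_AC, l_OC, l_OD = x
    lever = l_AC / l_AB                              # lever arm contribution
    bridge = l_OC / max(l_OD, 1e-6)                  # bridge arm contribution
    hinge_penalty = 1.0 + 0.5 * (t_a1 / r_a)         # stiffer (thicker) hinges reduce output
    return lever * bridge / hinge_penalty

# Hypothetical bounds (mm) on the seven ZTT-module design variables.
bounds = [(0.5, 2.0),    # r_a
          (0.3, 1.0),    # t_a1
          (4.0, 12.0),   # l_a
          (3.0, 8.0),    # l_AB
          (10.0, 30.0),  # l_AC
          (8.0, 20.0),   # l_OC
          (2.0, 6.0)]    # l_OD

# Maximize the ratio by minimizing its negative with a GA-style population search.
result = differential_evolution(lambda x: -amplification_ratio(x), bounds,
                                seed=1, maxiter=200)
print("best surrogate ratio:", round(-result.fun, 2), "at", np.round(result.x, 2))
```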
As shown in Figure 10a,b, the maximum output displacement of the piezoelectric-driven amplification module is 303.22 µm, the amplification ratio is 7.43, and the maximum bending stress is 176.94 MPa. As shown in Figure 10c,d, the maximum output rotation angle of the Θ rotation module is 12.1789 mrad, the ratio is 2.98, and its maximum bending stress is 200.72 MPa, far below the allowable limit stress of 503 MPa for AL7075-T6. These results indicate that the designed modules will not undergo fatigue failure within the normal working range, thereby fulfilling the design specifications. As shown in Figure 11a, when identical displacement excitation is applied to the three Z-axis motion branches, the terminal motion stage produces an overall Z-axis feed motion, with a simulated output Z-axis displacement of 303.22 µm. In Figure 11b,c, only one of the Z-direction motion branches is subjected to displacement excitation while the other two remain free; the terminal motion stage then yaws around the X and Y axes, with a simulated output swing angle of 3.196 mrad. In Figure 11d, applying symmetrical displacement excitation to the Θ rotation module produces rotational motion around the Z-axis, with a simulated output rotation angle of 12.179 mrad. The simulated coupling errors of the SPAS are listed in Table 5; thanks to the symmetrical mechanism design, they are very small. As shown in Figure 12a, the first natural frequency of the amplification module is 334.5 Hz, characterizing its motion state during routine operation. In Figure 12b, the first natural frequency of the Θ rotation module is 873.39 Hz, characterizing its motion state during routine operation. The mode shapes in Figure 12c,d both exhibit yaw motion: the first mode (Figure 12c) revolves around the Y-axis and the second mode (Figure 12d) around the X-axis, with natural frequencies of 84.265 Hz and 89.105 Hz, respectively. These results show that the designed device has good dynamic response performance.
Experimental Setup
As shown in Figure 13a, a prototype of the 4-DoFs SPAS is fabricated. The system consists of the AL7075-T6 SPAS driven by piezoelectric ceramic actuators (XMT, PSt150/10/60 VS15): three for the ZTT module and two for the Θ module. The measurement system comprises three capacitive displacement sensors (SYMC, NS-DCS14) deployed above the three Z-direction motion branches to detect the SPAS's motion position. The system is mounted on a Winner Optics (WN01AL) stage to mitigate external vibrations caused by environmental factors.
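As a quick sanity check on these margins (taking the quoted 503 MPa yield strength of AL7075-T6 as the allowable stress and ignoring any fatigue derating), the implied static safety factors are about 503/176.94 ≈ 2.8 for the amplification module and 503/200.72 ≈ 2.5 for the Θ rotation module, consistent with the statement that both modules operate well within the material limit.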
The decoupling of the multiple-input-multiple-output (MIMO) control of the system is accomplished using inverse kinematics and a closed-loop PID control strategy. The control strategy of the 4-DoFs SPAS is shown in Figure 13c. The input of the entire control system comprises the reference Z-direction feed micromotion, the yaw motions around the X-axis and Y-axis, and the rotational motion around the Z-axis. After inverse kinematics analysis, the corresponding driving voltage signals d_1(t) and d_2(t) are output through the closed-loop controller, thereby driving the ZTTΘ stage. The SPAS uses the capacitive displacement sensor measurement system to detect the actual displacement of the stage in real time and compares it with the reference signals r_1(t) and r_2(t); subtraction yields the error values e_1(t) and e_2(t). The four axes are then adjusted by their respective PID controllers to achieve closed-loop motion control. Open-Loop Test As shown in Figure 14, the first-order natural frequency obtained by sinusoidal frequency sweeping of the amplification module is 396 Hz, which is close to the 334.5 Hz obtained by simulation. The ZTTΘ workspace encompasses the Z-direction feed micromotion stroke, the yaw motion stroke, and the rotation motion stroke. As shown in Figure 15a, the step voltage signals (0-10 V) applied to the three piezoelectric drive amplification modules are amplified by a factor of 15 by a piezoelectric ceramic voltage driver. The resulting output terminal displacements are measured by the capacitive displacement sensors. In Figure 15b-d, the output displacement stroke of Module 1 is 328.5 µm, that of Module 2 is 326 µm, and that of Module 3 is 327.6 µm; the output displacements of the three modules are almost identical. Using the displacement transformation relationship, the actual Z-direction feed microstroke of the ZTTΘ SPAS is calculated to be 327.37 µm. Additionally, the actual output yaw angle is 3.462 mrad.
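Before the closed-loop results that follow, the sketch below illustrates the per-branch control loop described above for the three Z-direction branches (the Θ branch would be handled in the same way): the reference pose is mapped to branch displacements by the inverse kinematics, and each branch error is corrected by a discrete PID controller. The kinematic map and the gains are hypothetical placeholders, not the parameters of the real system (the actual PID settings are given in Table 6).

```python
import numpy as np

class PID:
    """Discrete PID controller for one motion branch (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical inverse-kinematics map from the reference pose
# (z, theta_x, theta_y, theta_z) to the displacements of the three Z branches.
def inverse_kinematics(pose, radius=0.02):
    z, tx, ty, _ = pose
    return np.array([z + radius * ty, z - radius * tx, z - radius * ty])

dt = 1e-4
controllers = [PID(kp=2.0, ki=50.0, kd=0.0, dt=dt) for _ in range(3)]
reference = inverse_kinematics(np.array([100e-6, 0.0, 0.0, 0.0]))  # 100 µm Z step
measured = np.zeros(3)  # would come from the capacitive displacement sensors

errors = reference - measured
voltages = np.array([c.update(e) for c, e in zip(controllers, errors)])
print(voltages)
```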
Closed-Loop Test The closed-loop motion tracking experiments of the SPAS are designed to assess its performance. The control model is established in MATLAB 2022b/Simulink and compiled into dSPACE for execution. The basic sampling time is 0.1 µs, and the PID controller parameters are specified in Table 6. To evaluate the stage's performance, tracking control experiments are first conducted on conventional Z-direction motion signals (Figure 16a). The reference signal, shown in Table 7, lasts 2 s. The maximum steady-state error after each step is stabilized within 100 nm, resulting in a closed-loop positioning accuracy of 100 nm for the step signal. As shown in Figure 16b, a tracking test is performed on the Z-direction closed-loop sine signal of the positioning stage. The reference signal is shown in Table 7. The maximum steady-state error of the trajectory tracking is kept within ±100 nm, thus ensuring a ZTTΘ positioning accuracy of ±100 nm for the Z-direction feed micromotion of the SPAS. Similarly, as shown in Figure 16c, a tracking test is conducted on the closed-loop sine signal of the positioning stage's yaw motion. The reference signal is shown in Table 7. The maximum steady-state error of the trajectory tracking is kept within ±2 µrad, which establishes the closed-loop positioning accuracy of the SPAS's yaw motion around the X-axis and Y-axis as ±2 µrad. Lastly, as shown in Figure 16d, a tracking test is performed on the closed-loop sine signal of the positioning stage's rotational motion. The reference signal is shown in Table 7. The maximum steady-state error of the trajectory tracking is kept within ±25 µrad, ensuring a ZTTΘ positioning accuracy of ±25 µrad for the SPAS rotating around the Z-axis. Performance Evaluation and Discussions In high-precision operations and manufacturing, such as those for MicroLED chips, accurate alignment of the spatial position during the transfer, packaging, inspection, and repair processes is crucial for ensuring yield. To address this requirement, this paper presents a high-precision 4-DoFs (ZTTΘ) SPAS. An experimental evaluation of the system demonstrates precise positioning in four degrees of freedom in space, as evidenced by the performance parameters reported in Table 8. These results were obtained through a series of trajectory tracking experiments, demonstrating that the ZTTΘ SPAS can effectively adjust its pose over a large stroke with high precision and multiple degrees of freedom, including yaw, lifting, and rotation. This capability makes the ZTTΘ SPAS suitable for use in MicroLED display equipment manufacturing and testing. Furthermore, topology and module optimization theories [33] can be employed to optimize the device's strength and mass, thereby enhancing its dynamic performance. This work contributes to the advancement of high-precision operation and manufacturing, in which precise alignment during the various processes is essential.
Conclusions This paper presents the design, analysis, and prototype development of a 4-DoFs SPAS. The main achievements are summarized as follows: (1) Designed, optimized, and manufactured a ZTT module that combines a bridge-type lever hybrid amplification with a dual four-bar guide mechanism, and a multi-stage lever rotation Θ module. Compared with traditional piezoelectric drive mechanisms, it exhibits low coupling, high stiffness, and large stroke. The experimental results indicate that the Z-direction stroke is 327.37 µm, the tip-tilt value is 3.462 mrad, and the rotation stroke around the Z-axis is 12.684 mrad. The closed-loop positioning accuracy is ±100 nm for the Z direction, ±2 µrad for tip-tilt, and ±25 µrad for rotation around the Z-axis. In conclusion, the relevant theoretical derivations and experimental results fully demonstrate that the developed system can meet advanced submicron- and nanoscale manufacturing requirements. (2) Designed and built a compliant 4-DoFs SPAS driven by piezoelectric ceramics, which realizes four high-precision movements: Z-axis lifting, tip, tilt, and rotation. (3) With the developed system and control strategy, experiments including the open-loop test, the closed-loop performance test, and the accuracy and travel-range tests are successfully carried out in detail.
Figure 1. Design scheme of the ZTTΘ SPAS. (a) SPAS composite macro stage alignment system. (b) Working principle diagram.
Figure 2. Structural design of the ZTTΘ SPAS based on flexure. (a) Overall schematic diagram. (b) Assembly diagram of each module.
Figure 3. Design of the piezoelectric drive amplifier module. (a) Piezoelectric drive amplifier module. (b) Principle of the piezoelectric drive amplifier module.
Figure 4. Design of the Θ rotation module. (a) Oblique view of the Θ rotation module. (b) Principle of the piezoelectric drive amplifier module.
Figure 5. The local coordinate system of the piezoelectric drive amplification module.
Figure 7. The local coordinate system of the piezoelectric drive amplification module.
Figure 10. FEA results. (a) Maximum output displacement of the piezoelectric-driven amplifier module. (b) Maximum bending stress of the amplifier module. (c) Maximum output rotation angle of the Θ rotation module. (d) Maximum bending stress of the Θ rotation module.
Figure 11. FEA results. (a) Maximum output displacement of the Z-axis. (b) Maximum output yaw angle around the X-axis. (c) Maximum output yaw angle around the Y-axis. (d) Maximum output rotational angle around the Z-axis.
Figure 12. FEA results. (a) Natural frequency of the amplifier module. (b) Natural frequency of the Θ rotation module. (c) First frequency of the ZTTΘ stage. (d) Second frequency of the ZTTΘ stage.
Figure 14. Sweep signal experiment of the piezoelectric-driven amplifier module.
Figure 15. Open-loop step signal test diagram of the piezoelectric drive amplifier module. (a) Input step voltage signal of the piezoelectric actuator. (b) Output displacement signal of the Z1 module. (c) Output displacement signal of the Z2 module. (d) Output displacement signal of the Z3 module.
Figure 16. Closed-loop test diagram of the positioning stage. (a) Trace diagram of the step signal of the Z-direction motion. (b) Trace diagram of the sinusoidal signal of the Z-direction motion. (c) Trace diagram of the sinusoidal signal of the yaw motion. (d) Trace diagram of the sinusoidal signal of the rotational motion.
Table 1. The values of the optimization variables of the piezoelectric-driven amplifier module.
Table 2. The values of the optimization variables of the Θ rotation module.
Table 3. The FEA results of the piezoelectric-driven amplifier module.
Table 4. The FEA results of the Θ rotation module.
Table 5. Deformation and coupled error.
Table 8. Performance parameter test results of the ZTTΘ SPAS.
7,775.6
2024-04-30T00:00:00.000
[ "Engineering" ]
Predicting protein-ligand binding residues with deep convolutional neural networks Background Ligand-binding proteins play key roles in many biological processes. Identification of protein-ligand binding residues is important in understanding the biological functions of proteins. Existing computational methods can be roughly categorized as sequence-based or 3D-structure-based methods. All these methods are based on traditional machine learning. In a series of binding residue prediction tasks, 3D-structure-based methods are widely superior to sequence-based methods. However, due to the great number of proteins with known amino acid sequences, sequence-based methods have considerable room for improvement with the development of deep learning. Therefore, prediction of protein-ligand binding residues with deep learning requires study. Results In this study, we propose a new sequence-based approach called DeepCSeqSite for ab initio protein-ligand binding residue prediction. DeepCSeqSite includes a standard edition and an enhanced edition. The classifier of DeepCSeqSite is based on a deep convolutional neural network. Several convolutional layers are stacked on top of each other to extract hierarchical features. The size of the effective context scope is expanded as the number of convolutional layers increases. The long-distance dependencies between residues can be captured by the large effective context scope, and stacking several layers enables the maximum length of dependencies to be precisely controlled. The extracted features are ultimately combined through one-by-one convolution kernels and softmax to predict whether the residues are binding residues. The state-of-the-art ligand-binding method COACH and some of its submethods are selected as baselines. The methods are tested on a set of 151 nonredundant proteins and three extended test sets. Experiments show that the improvement of the Matthews correlation coefficient (MCC) is no less than 0.05. In addition, a training data augmentation method that slightly improves the performance is discussed in this study. Conclusions Without using any templates that include 3D-structure data, DeepCSeqSite significantlyoutperforms existing sequence-based and 3D-structure-based methods, including COACH. Augmentation of the training sets slightly improves the performance. The model, code and datasets are available at https://github.com/yfCuiFaith/DeepCSeqSite. Background Benefiting from the development of massive signature sequencing, protein sequencing is becoming faster and less expensive. By contrast, owing to the technical difficulties and high cost of experimental determination, the structural details of only small parts of proteins are known in terms of protein-ligand interaction. Both biological and therapeutic studies require accurate computational methods for predicting protein-ligand binding residues [1]. The primary structure of a protein directly determines the tertiary structure, and the binding residues of proteins are closely bound with the tertiary structure. These properties of proteins ensure the feasibility of predicting binding residues from amino acid sequences (primary structures) or 3D structures. However, the complex relationship between binding residues and structures is not completely clear. Thus, we have motivation for using machine learning in binding residue prediction, which is based on the unknown complex mappings from structures to binding residues. 
The existing methods for computational prediction of protein-ligand binding residues can be roughly categorized as sequence-based [2][3][4][5] or 3D-structure-based methods [1,[6][7][8][9][10][11]. The fundamental difference between the two types of methods is whether 3D-structure data are used. Some consensus approaches comprehensively consider the results of several methods. These methods can be seen as 3D-structure-based methods if any submethod uses 3D-structure data. Up to now, 3D-structurebased methods have been shown to be widely superior to sequence-based methods in a series of binding residue prediction tasks [1,11]. However, 3D-structure-based methods depend on a large number of 3D-structure templates for matching. The time cost of template matching for a protein can reach several hours in a distributed environment. Furthermore, the number of proteins with known amino acid sequence is three orders of magnitude higher than that of proteins with known 3D structures. The enormous disparity in these quantities leads to difficulties in effectively utilizing 3D-structure information and massive sequence information together, which limits further progress in binding residue prediction. A series of traditional machine learning methods have been used in binding residue prediction. Many computational methods based on support vector machines (SVM) have been proposed for specific types of binding residue prediction [12][13][14][15]. A traditional BP neural network has been used in protein-metal binding residue prediction, but the network has considerable room for improvement [16]. Differing in interpretability from the mentioned methods, a robust method based on a Bayesian classifier has been developed for zinc-binding residue prediction [17]. Many methods based on template matching achieve considerable success at the expense of massive computational complexity [1,10,11]. A representative consensus approach, COACH, combines the prediction results of TM-SITE, S-SITE, COFACTOR, FINDSITE and ConCavity, some of which are 3D-structure-based methods [1,6,7,10,11]. This robust approach to protein-ligand binding residue recognition substantially improves the Matthews correlation coefficient (MCC). These methods have achieved successful results on small datasets. However, the methods would achieve even higher accuracy if massive data could be further utilized. One crucial factor for the available utilization of massive data is the representation capability of classifiers, which has a dominant impact on generalization. Deep neural networks have achieved a series of breakthroughs in image classification, natural language processing and many other fields [18][19][20][21]. In bioinformatics, deep neural networks have been applied in many tasks, including RNA-protein binding residue prediction, protein secondary structure prediction, compound-protein interaction prediction and protein contact map prediction [22][23][24][25]. Various recurrent networks are commonly used in sequence modeling [26,27]. Context dependencies universally existing in sequences can be captured effectively by recurrent networks, and these networks are naturally suitable for variable-length sequences. Nevertheless, recurrent networks depend on the computations of the previous time step, which blocks parallel computing within a sequence. To solve this problem, convolutional neural networks are introduced into neural machine translation (NMT) [28,29]. These architectures are called temporal convolution networks (TCN). 
In contrast to recurrent networks, the computation within a convolutional layer does not depend on the computation of the previous time step, so the calculation of each part is independent and can be parallelized. Convolutional sequence-to-sequence models outperform mature recurrent models on very large benchmark datasets by an order of magnitude in terms of speed and have achieved state-of-the-art results on several public benchmark datasets [29]. Many similarities exist between NMT and binding residue prediction. The performance of binding residue prediction can be improved with progress in NMT. In this study, we propose a new approach, DeepCSeqSite (DCS-SI), for protein-ligand binding residue prediction. The architecture of DCS-SI is inspired by a series of sequence-to-sequence models including ConvS2SNet [29]. DCS-SI includes two editions: stdDCS-SI and enDCS-SI. The encoders of the two editions are the same. The decoder of enDCS-SI evolves from the decoder of stdDCS-SI: it executes forward propagation twice and takes the previous output into consideration to produce more accurate predictions. In DCS-SI, the fully convolutional architecture contributes to improving parallelism and to processing variable-length inputs. Several convolutional layers are stacked on top of each other to extract hierarchical features. The low-level features reflect local information over residues near the target, while the high-level features reflect global information over a long range of the amino acid sequence. Correspondingly, the size of the effective context scope expands as the number of layers increases. The long-distance dependencies between residues can be captured by an effective context scope that is sufficiently large. A simple gating mechanism is adopted to select relevant residues. Templates are not used in DCS-SI. The network in DCS-SI is trained only on sequence information. The state-of-the-art ligand-binding method COACH and some of its submethods are selected as baselines. Experiments show that stdDCS-SI and enDCS-SI significantly outperform the baselines. Datasets The datasets used in this study are collected from the BioLip database and the previous benchmarks [1,11]. Our training sets contain binding residues of fourteen ligands (ADP, ATP, Ca2+, Fe3+, FMN, GDP, HEM, Mg2+, Mn2+, Na+, NAD, PO4^3-, SO4^2-, Zn2+) (Endnote 1). A total of 151 proteins are selected from the previous benchmarks with the fourteen ligands as the benchmark testing set, called SITA. Every protein in the training sets has a sequence identity of less than 40% to the proteins in the validation sets and testing sets [13]. To obtain as much data as possible for training, the pairwise sequence identity is allowed to be 100% in the training sets. We speculate that the augmented training sets (Aug-Train) can drive networks to achieve better generalization performance. Considerable data skew generally exists in protein-ligand binding residue prediction. ADP, ATP, FMN, GDP, HEM and NAD have more binding residues than do metal ions and acid radical ions, which means that the substantial data skew is attributed more to metal ions and acid radical ions. The computational binding residue prediction of metal ions and acid radical ions is still difficult because of their small size and high versatility. To demonstrate the ability of the models to predict the binding residues of metal ions and acid radical ions, we extend SITA with metal ions and acid radical ions.
Every protein in the testing sets has a sequence identity to the proteins in the training sets and the other testing sets of less than 40%. Furthermore, the extended testing sets (SITA-EX1, SITA-EX2 and SITA-EX3) reduce the variance in the tests. A summary of the datasets used in this study is shown in Table 1. Severe data skew exists in the datasets, which restricts the optimization and performance of many methods. Method motivation Each residue in an amino acid sequence plays a specific role in the structure and function of a protein. For a target residue, nearby residues in the tertiary structure plausibly affect whether the target residue is a binding residue for some ligand. Thus, residues near the target residue in the tertiary structure but far from the target residue in the primary structure are critical to binding residue prediction. Most of the existing methods use a sliding window centered at the target residue to generate overlapping segments for every target protein sequence [13,16,30]. The use of sliding windows is a key point in converting several variable-length inputs into segments of equal length. However, even if the distance in the sequence between two residues is very long, their spatial distance can be limited because of protein folding. Thus, residues far from the target residue in the sequence may also have an important impact on the location of the binding residues. To obtain more information, these methods have to increase the window size in the data preprocessing stage. The cost of computation and memory for segmentation is not acceptable when the window size increases to a certain width. On the basis of the inspiration from NMT, protein-ligand binding residue prediction can be seen as a particular form of translation. The main differences are the following two aspects: 1. For NMT, the elements in the destination sequences are peer entities to the elements in the source sequences, but the binding site labels are not peer entities to the residues. 2. While the destination and source sequences typically differ in length for NMT, a one-to-one match between each binding residue label and each residue exists. Despite the differences, binding residue prediction can learn from NMT. The foundation of feature extraction in NMT includes local correlation and long-distance dependency, which are common in amino acid sequences and natural language sentences. Thus, the main idea of feature extraction in NMT is applicable to binding residue prediction. Method outline In the training sets, the binding residues that belong to any selected ligand type are labeled as positive samples, and the rest are labeled as negative samples. A deep convolutional neural network is trained as the classifier of stdDCS-SI or enDCS-SI, whose inputs are entire amino acid sequences. The input sequences are allowed to differ in length. The sequences are divided into several batches during training. In each batch, the sequences are padded to the length of the longest sequence in the batch with dummy residues. Batches are allowed to differ in length after padding. Each protein residue is embedded in a feature space consisting of several features to construct the input feature map for the classifier. For a given protein, every residue is predicted to be a binding residue or non-binding residue in the range of the selected ligand types simultaneously. The representation of dummy residues is removed immediately before the softmax layer. The method outline is shown in Fig. 1.
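As a concrete illustration of the batching scheme just outlined, the following minimal NumPy sketch pads variable-length per-residue feature maps to the longest sequence in a batch and keeps a boolean mask so that the dummy residues can be dropped again before the softmax layer. The helper name pad_batch is ours; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def pad_batch(feature_maps):
    """Pad variable-length (L_i, d) feature maps to the longest sequence in the batch.

    Returns the padded batch plus a boolean mask so that the dummy residues
    can be removed again before the softmax layer.
    """
    d = feature_maps[0].shape[1]
    max_len = max(fm.shape[0] for fm in feature_maps)
    batch = np.zeros((len(feature_maps), max_len, d), dtype=np.float32)
    mask = np.zeros((len(feature_maps), max_len), dtype=bool)
    for i, fm in enumerate(feature_maps):
        batch[i, :fm.shape[0]] = fm
        mask[i, :fm.shape[0]] = True
    return batch, mask

# Example: two proteins of different lengths, each residue described by d = 28 features.
proteins = [np.random.rand(120, 28), np.random.rand(95, 28)]
batch, mask = pad_batch(proteins)
print(batch.shape, mask.sum(axis=1))  # (2, 120, 28) and the true lengths [120, 95]
```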
The details of the method are described in "Architecture" section. Features Seven types of features are used for the protein-ligand binding residue prediction: position-specific score matrix (PSSM), relative solvent accessibility (RSA), secondary structure (SS), dihedral angle (DA), conservation scores (CS), residue type (RT) and position embeddings (PE). PSSM PSSM is the probability of mutating to each type of amino acid at each position. Therefore, PSSM can be interpreted as representing conservation information. Normalized PSSM scores can be calculated as follows: where x is the dimension of the PSSM score and y is the corresponding PSSM feature. For a protein with L residues, the PSSM feature dimension is L * 20. Relative solvent accessibility The RSA is predicted by SOLVE. The real value of RSA is generally converted to a Boolean value indicating whether the residue is buried (RSA <25%) or exposed (RSA >25%). However, the original value is retained so that the network in DCS-SI can learn more abundant features [31]. Secondary structure The secondary structure is predicted by PSSpred. The secondary structure type (alpha-helix, beta-strand and coil) is represented by a real 3D value. Each dimension of the real 3D value is in the range of [0, 1] indicating the possibility of existence of the corresponding type [32]. Dihedral angle A real 2D value specifying the φ/ψ dihedral angles is predicted by ANGLOR [33]. The values of φ and ψ are normalized by Norm(x) = x/360.0. Conservation scores Conservation analysis is a widely used method for detecting ligand-binding residues [34,35]. Ligand-binding residues tend to be conserved in evolution because of their functional importance [2]. The relative entropy (RE) and Jensen-Shannon divergence (JSD) scores of conservation are taken as features in this study. Residue type Some amino acids have a much higher binding frequency for the corresponding ligands than do other amino acids. Twenty amino acid residues and an additional dummy residue are numbered from 0 to 20. Then, the numbers representing residue type are restricted to the range of [0, 1] by dividing by the total number of the types. Position embeddings Position embeddings can carry information about the relative or absolute position of the tokens in a sequence [36]. Several methods have been proposed for position embeddings. Experiments with ConvS2SNet and Transformer show that position embeddings can slightly improve performance, but the difference among several position embedding methods is not clear [29,36]. Therefore, a simple method for position embeddings is adopted in DCS-SI. The absolute positions of the residues are represented as PE i = i/L, where PE i of the i-th residue is limited to range [0, 1], and L is the length of the amino acid sequence. Architecture The effective context scope for the prediction result or hidden layer representation of a target residue is called the input field. The size of the input field is determined by the stacked convolutional layers instead of being explicitly specified. Stacking n convolutional layers with kernel width k and stride = 1 results in an input field of 1 + n(k − 1) elements (including padded elements). The input field can easily be enlarged by stacking more layers, which enables the maximum length of the dependencies to be precisely controlled. The stacked convolutional layers have the ability to process variable-length input without segmentation, which significantly reduces the additional cost. 
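The relationship between depth, kernel width and the effective context scope can be checked with a one-line helper. With the hyperparameters that are reported as locally optimal later in the paper (N = 10 BasicBlocks per stage, k = 5), a single stage already covers 41 residues of context, and chaining the two stages roughly doubles this (ignoring the additional input convolution):

```python
def input_field(num_layers: int, kernel_width: int) -> int:
    """Effective context scope of num_layers stacked stride-1 convolutions."""
    return 1 + num_layers * (kernel_width - 1)

print(input_field(10, 5))  # 41 residues for one stage of 10 BasicBlocks with k = 5
print(input_field(20, 5))  # 81 residues when the two stages are chained
```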
Moreover, deeper networks can be constructed with only a slow growth in the number of parameters. However, many proteins have hundreds or even thousands of residues; thus, deeply stacked convolutional layers or a very large kernel width is required to capture long-distance dependencies. The latter is inadvisable because both the number of padded elements in the input fields and the growth rate of the parameters increase with the kernel width. By contrast, going deeper enables the method to achieve the desired results. stdDCS-SI The architecture of the deep convolutional neural network is shown in Fig. 2, where each m × 1 cell grid represents the output of a convolution kernel and the right-most representation is the input for the softmax. The input to the network consists of m residues embedded in d dimensions. Due to the local correlation among the representations of adjacent residues, 1D convolution along the sequence is applied to the initial feature map and the hidden feature maps. The local correlation is based on the interaction among nearby residues and the covalent bond between adjacent residues. For the encoder network, each residue always has a representation during forward propagation. A group of k × d convolution kernels transforms the initial m × d feature map into m × 1 × 2c, where 2c is the number of output channels of the convolution kernels. Zero elements are padded at both sides of the initial feature map to maintain m. The transformation and padding aim to satisfy the input demands of the following layers and the feature extraction. The main process of the network can be separated into two stages. Each stage contains N BasicBlocks (described in the "BasicBlock" section) that consist of multiple frequently used layers and are designed for cohesiveness and expandability. In each stage, blocks are stacked on top of each other to learn hierarchical features from the input of the bottom block. At the top of each stage, additional layers are added to stabilize the gradients and normalize the outputs. For the decoder network, the representation of each residue is transformed into the distribution over possible labels. Following the two stages, two fully connected layers consisting of one-by-one (1 × 1) convolution kernels are used for information interaction between channels. The numbers of output channels of these 1 × 1 convolution kernels are set to c and 2. The number of elements represented by the output of each block or layer is the same as the number of initial input elements. The first fully connected layer is wrapped in dropout to prevent overfitting [37]. The output of the last fully connected layer is fed to a 2-way softmax classifier, which produces the distribution over the labels of positive and negative samples. The cross entropy between the training data distribution and the model distribution is used in the following cost function: J(θ) = − Σ_{i=1}^{t} log P(y^(i) | x^(i); θ) + γ ||θ||_2^2, where θ represents the weights in DCS-SI, x^(1), ..., x^(t) is a set of t samples, y^(1), ..., y^(t) is the set of corresponding labels with y^(i) ∈ {0, 1}, and γ is the coefficient of the L2 normalization ||θ||_2^2. enDCS-SI We propose enDCS-SI on the basis of stdDCS-SI. Note that the prediction for the other residues is called the context prediction. Although stdDCS-SI outperforms existing methods, the performance can be further improved if the context prediction is taken into consideration explicitly. To achieve this goal, we retained the encoder network and modified the decoder network.
In addition to the output of the encoder network, the new decoder network receives the context prediction as input. A group of k × 2 convolution kernels transforms the context prediction into m × 1 × 2c, where 2c is the number of output channels of the convolution kernels. The following process consists of two parallel stages with M blocks and additional layers (in this study, we use M = 2). To extract the features from the left (right) context prediction, we remove 1 element from the end (start) of the context prediction. Then, the input of each convolutional layer is padded by k elements on the left (right) side. The extracted information of the left and right adjacent predictions is directly added to the output of the encoder, where the three tensors have the same shape. ConvS2SNet directly uses the labels as the context prediction during training. Therefore, the forward propagation in training operates in parallel along the sequence. However, no label exists for the input samples during testing. Thus, the prediction for each element is processed serially to generate the context prediction for the next element. To overcome this serialization in testing, we let enDCS-SI execute forward propagation in the decoder network twice. The first forward propagation is similar to that of stdDCS-SI, but the context prediction for enDCS-SI is fed with a zero tensor. The output of the first forward propagation is used as the context prediction for enDCS-SI in the second forward propagation. While training enDCS-SI, the context prediction is also replaced with the labels. All the weights in stdDCS-SI are loaded for enDCS-SI. The rest of the weights of enDCS-SI are initialized. The weights of the encoder network are fixed because the encoding processes of stdDCS-SI and enDCS-SI are the same. The architecture of enDCS-SI is described in Fig. 3. BasicBlock The input of BasicBlock is processed in the order LN-GLU-Conv. The output of the l-th block is designated as s^l = (s_1, ..., s_m) ∈ R^(m×1×2c), where m is the length of the input sequences (Endnote 2) and c is the number of input channels of the convolutional layer in each block. The output of the (l−1)-th block is the input to the l-th block. The input of each k × 1 convolution kernel is an m × 1 × c feature map consisting of m input elements mapped to c channels. Before convolution, both ends of each channel are zero-padded with k/2 elements to maintain the height of the feature map, where the height is m. A convolutional layer with 2c output channels transforms the input of the convolution X ∈ R^(m×1×c) into the output of the convolution Y ∈ R^(m×1×2c) to satisfy the input requirement of the gated linear units (GLU) of the next possible block and to make the input size and output size of the block consistent [38]. Y corresponds to [A B] ∈ R^(m×1×2c), where A, B ∈ R^(m×1×c) are the inputs to the GLU. A simple gating mechanism over [A B] is implemented as g([A B]) = A ⊗ σ(B), where σ represents the sigmoid function and ⊗ denotes element-wise multiplication. The output of the GLU, g([A B]) ∈ R^(m×1×c), is one-half the size of Y and is the same as the input size of the convolution in BasicBlock. The GLU can select the relevant context for the target residue by means of the activated gating unit σ(B). The gradient of the GLU has a path without downscaling, which contributes to the flow of the gradient and is an important reason for the choice of this activation function. The vanishing gradient problem is considered before going deeper.
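A minimal sketch of this gating mechanism is given below, assuming a TensorFlow-style (batch, length, channels) tensor layout; the split into A and B and the element-wise gate follow the description above, while the variable names are ours.

```python
import tensorflow as tf

def glu(y: tf.Tensor) -> tf.Tensor:
    """Gated linear unit: split the 2c channels into A and B and gate A with sigmoid(B)."""
    a, b = tf.split(y, num_or_size_splits=2, axis=-1)
    return a * tf.sigmoid(b)

# Example: a feature map of m = 7 residues with 2c = 512 channels shrinks to c = 256.
y = tf.random.normal([1, 7, 512])
print(glu(y).shape)  # (1, 7, 256)
```

Inside each BasicBlock this gate sits between the layer normalization and the convolution, and the residual connection discussed next wraps around the whole block.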
Hence, residual connections from the input of the block to the output of the block are introduced to prevent the vanishing gradient [20]. The input of a block must be normalized before convolution because the input is the sum of the outputs of several previous blocks. Without normalization, the gradients behave unpredictably during training. Therefore, a LayerNormalization (LN) layer is set at the beginning of the block to provide a stable gradient, which is also conducive to accelerating the learning speed [39]. The function of BasicBlock is summarized in Eq. (4), where W^l represents the weights of the convolution in the l-th block, s^l_i is the feature of the i-th element represented in the l-th block, k is the width of the convolution kernels, and the subscript LN means that s^(l-1)_(i-k/2), ..., s^(l-1)_(i+k/2) have been normalized by LN. The details are described in Fig. 4. Evaluation The main evaluation metrics for the binding residue prediction results include the Matthews correlation coefficient, MCC = (TP × TN - FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)), where TP is the number of binding residues predicted correctly, FP is the number of non-binding residues predicted as binding residues, TN is the number of non-binding residues predicted correctly and FN is the number of binding residues predicted as non-binding residues. Optimization For the hyperparameter choice, we focus on the number of BasicBlocks N and the kernel width k in the BasicBlocks. N and k both have a decisive effect on the parameter space and on the maximum length of the dependencies. Thus, N and k are closely related to the generalization and are adjusted separately to obtain the local optimum. When adjusting N, the kernel size of each BasicBlock is fixed to 3 × 1 (k = 3). When adjusting k, N is fixed to 10. The number of output channels of each BasicBlock is set to 512 (c = 256) in this study. Experiments show that the network achieves the locally optimal generalization on the validation sets when N = 10 and k = 5 (Endnote 3). The details are shown in Tables 2 and 3 (Table 2 reports the effect of depth on the validation sets). Experiments indicate that DCS-SI can be optimized effectively on the training sets and achieve good generalization on the test sets without any sampling. Mini-batches are prone to contain only negative samples if the samples are grouped via inappropriate methods. This problem is unlikely to occur in our mini-batches because an amino acid sequence is treated as a unit during grouping. The severe data skew can be overcome as long as the proportion of positive samples in every mini-batch is close to the actual level. The cost function is minimized through mini-batch gradient descent. With zero-padding, the feature maps of the proteins in a batch are filled to the same size to simplify the programming implementation. The coefficient γ of the L2 norm is 0.2, and the dropout ratio is set to 0.5. All DCS-SI models are implemented with TensorFlow. The training process consists of three learning strategies to suit the different training stages. The learning rate of each stage decreases exponentially after a specified number of iterations. The gradient may be very steep in the early stage because of the unpredictable error surface and the weight initialization. Hence, to preheat the network, the initial learning rate of the first stage is set to a value that can adapt to a steep gradient. Due to the considerable data skew, the training algorithm tends to fall into a local minimum where the network predicts all inputs as negative examples.
A conservative learning rate is not sufficient to escape from this type of local minimum. Therefore, the initial learning rate of the second stage can be increased appropriately to search better minimums and further reduce the time cost of training. A robust strategy is required at the end of training to avoid the strong sway phenomenon. The details of the learning strategies are available in our software package. The effect of the softmax threshold DCS-SI tends to predict residues as non-binding residues because the proportion of positive and negative samples in each batch is maintained at approximately the natural proportion. For the binary classification model, the threshold of positive and negative samples has a nonnegligible impact on performance. As shown in Table 4, despite losing some precision, MCC and recall increase with the decreasing threshold, where the threshold is the minimum probability required for a sample to be predicted as positive. When the threshold = 0.4, the MCC achieves local optimization. Comparison with other methods stdDCS-SI and the baselines are tested on SITA and three extended testing sets. The existing 3D-structure-based methods within the baselines (TM-SI, COF and COA) outperform the sequence-based method S-SI on the testing sets. stdDCS-SI is far superior to all the baselines. The improvements of MCC and precision are no less than 0.05 and 15%, respectively. One possible reason for the moderate recall of stdDCS-SI is that the low percentage of binding residues in the training sets leads to prudent prediction of stdDCS-SI. Improving the recall of stdDCS-SI is a topic for future research. The details are described in Table 5, where the hyperparameters are locally best adjusted for stdDCS-SI (k = 5, N = 10 and threshold = 0.4). All the baselines used in the experiments are included in the I-TASSER Suite [31]. All the features used in this study are obtained from sequence or evolution information through computational methods. However, noise is introduced by the predictions of some features, including secondary structures and dihedral angles. The performance of stdDCS-SI will improve if these features are more accurate. Comparison of stdDCS-SI and enDCS-SI The residues adjacent to binding residues have a higher probability of binding than do the other residues. stdDCS-SI does not explicitly consider the aggregation of binding residues. The consideration of aggregation is implicitly included in the transformation of the hidden representation, which is one reason for the good performance of stdDCS-SI. Furthermore, enDCS-SI predicts the binding residues with aggregation explicitly. The decoder network of enDCS-SI can extract useful information from the context prediction. As shown in Table 6, the MCC for each testing set is improved 0.01 ∼ 0.02 by enDCS-SI. Although enDCS-SI requires more time to execute the additional forward propagation in its decoder network, the total time cost of enDCS-SI is not significantly increased. During testing, the predictions for every residue can be executed in parallel. Only the two forward propagations are processed serially. The advantage of enDCS-SI is more prominent if the input amino acid sequences are long and the machines have sufficient computational capacity. The effect of data augmentation Data augmentation typically contributes to the generalization of deep neural networks. To achieve better generalization, we use redundant proteins to obtain the augmented training sets (Aug-Train). 
The pairwise sequence identity is allowed to be 100% in Aug-Train, which contains at least nine times as many proteins as the original training sets. As shown in Table 7, the model trained on Aug-Train has slightly better generalization performance on the testing sets. However, the computational cost has increased several times. Relative to the cost, the improvement from data augmentation is far less than expected. This counterintuitive result indicates that proteins with a high sequence identity contribute little to the generalization of the network. Therefore, data augmentation based on high redundancy is not suitable as the main optimization method in this study. Discussion The effective utilization of data contributes to the improvement. Traditional classifiers are used in many existing methods, where the classifiers include SVM and traditional artificial neural networks (ANN). The input features for these classifiers are designed manually, and transformations in these classifiers focus on how to separate the input samples. Further feature extraction is inadequate, which limits the representation and generalization of these classifiers. Deep convolutional neural networks take advantage of massive sequence information. The hierarchical structure has the ability to extract low-level features and to organize low-level features as high-level features. The representation ability of the hierarchical features improves with the increase in layers, which requires sufficient data to ensure generalization. Currently, massive sequence information satisfies this requirement. In addition to the representation ability, the hierarchical structure provides the ability to capture long-distance dependencies. Without segmentation, the maximum distance of dependencies is not limited to the window size. Long-distance dependencies can be reflected in high-level features with a sufficiently large input field. Most traditional machine learning methods are sensitive to data skew, which fundamentally affects the generalization. The number of binding residues is far less than that of non-binding residues in our datasets, especially for metal ions and acid radical ions. The proportion of binding residues in the datasets is no more than 4%. We have attempted to replace the network in DCS-SI with SVMs. However, SVMs make the normal convergence on the unsampled training sets difficult. Even if the SVMs converge normally, their generalization is challenging. By contrast, the representation of DCS-SI is sufficiently strong to capture effective features for fitting and generalization without sampling. Training without sampling allows the network to learn more valid samples, which also contributes to the improvement. DCS-SI is better than the baselines in terms of predicting the binding residues of metal ions and acid radical ions. As shown in Table 5, the performance of the baselines decreases when metal ions and acid radical ions are added to SITA. The decrease in MCC is 0.03 ∼ 0.04. Regardless, the performance of DCS-SI remains close to its original level, and the MCC of DCS-SI decreases by no more than 0.02. The contrast indicates that the superiority in predicting the binding residues of metal ions and acid radical ions is a direct source of the improvement. Conclusion We propose a sequence-based method called DeepC-SeqSite (DCS-SI), which introduces deep convolutional neural networks for protein-ligand binding residue prediction. The convolutional architecture effectively improves the predictive performance. 
The highlights of DCS-SI are as follows: 1. The convolutional architecture in DCS-SI provides the ability to process variable-length inputs. 2. The hierarchical structure of the architecture enables DCS-SI to capture the long-distance dependencies between the residues, and the maximum length of the dependencies can be precisely controlled. 3. Augmentation of the training sets slightly improves the performance, but the computational cost of training increases several times. 4. Without using any templates that include 3D-structure data, DCS-SI significantly outperforms existing sequence-based and 3D-structure-based methods, including COACH. In future work, we plan to capture long-distance residue correlations using various attention mechanisms. Furthermore, the application of the finite 3D-structure data to deep convolutional neural networks may effectively improve protein-ligand binding residue prediction performance. Generative adversarial networks are also worth exploring to mitigate the severe deficiency of 3D-structure data relative to sequence data [40]. Endnotes: (1) HEM contains HEM and HEC. (2) As mentioned in "Method outline", the input sequences have been padded. (3) Due to the constraint of resources and cost, deeper networks have not been tested.
7,834.4
2019-02-26T00:00:00.000
[ "Computer Science", "Biology" ]
Comparison between SAR Soil Moisture Estimates and Hydrological Model Simulations over the Scrivia Test Site In this paper, the results of a comparison between the soil moisture content (SMC) estimated from C-band SAR, the SMC simulated by a hydrological model, and the SMC measured on ground are presented. The study was carried out in an agricultural test site located in North-west Italy, in the Scrivia river basin. The hydrological model used for the simulations consists of a one-layer soil water balance model, which was found to be able to partially reproduce the soil moisture variability, retaining at the same time simplicity and effectiveness in describing the topsoil. SMC estimates were derived from the application of a retrieval algorithm, based on an Artificial Neural Network approach, to a time series of ENVISAT/ASAR images acquired over the Scrivia test site. The core of the algorithm was represented by a set of ANNs able to deal with the different SAR configurations in terms of polarizations and available ancillary data. In the case of crop-covered soils, the effect of vegetation was accounted for using NDVI information or, if available, the cross-polarized channel. The algorithm showed some ability in retrieving SMC, with RMSE generally <0.04 m3/m3 and very low bias (i.e., <0.01 m3/m3), except for the case of VV-polarized SAR images: in this case, the obtained RMSE was somewhat higher than 0.04 m3/m3 (≤0.058 m3/m3). The algorithm was implemented within the framework of an ESA project concerning the development of an operative algorithm for the SMC retrieval from Sentinel-1 data. The algorithm should take into account the GMES requirements of SMC accuracy (≤5% in volume), spatial resolution (≤1 km) and timeliness (3 h from observation). The SMC estimated by the SAR algorithm, the SMC estimated by the hydrological model, and the SMC measured on ground were found to be in good agreement. The hydrological model simulations were performed at two soil depths, 30 and 5 cm, and showed that the 30 cm simulations indicated, as expected, SMC values higher than the satellite estimates, with RMSE higher than 0.08 m3/m3. In contrast, in the 5 cm simulations, the agreement between hydrological simulations, satellite estimates and ground measurements could be considered satisfactory, at least in this preliminary comparison, showing an RMSE of 0.054 m3/m3 and 0.051 m3/m3 for the comparison with ground measurements and SAR estimates, respectively. Introduction Soil moisture content (SMC), along with its temporal and spatial distribution, is widely considered a key variable in numerous environmental disciplines, especially in climatology, meteorology, hydrology and agriculture. For hydrological and agricultural purposes, the SMC plays a fundamental role, since it controls the water available for vegetation growth [1,2], as well as the recharge of deep aquifers [3]. In meteorology, the SMC has a great impact on the energy transfer from the surface to the atmosphere by regulating evapotranspiration [4]. Moreover, timely and precise knowledge of SMC has a significant impact on various risk management applications, such as drought and flood prediction and management [5].
Due to the high variability of SMC in time and space, proper estimation of this variable is quite challenging. Ground measurements and remote sensing methods can be considered powerful tools for SMC quantification. Ground measurements, such as those obtained by using calibrated probes (e.g., those based on Time Domain Reflectometry (TDR) techniques), can provide reliable point-scale measurements and, in the case of distributed sensors, can also help in understanding the soil moisture patterns across scales [6,7]. However, when the catchment or basin scale is considered, the information needs to be spatially distributed and, in this case, ground measurements are not suitable, since their extension to a larger scale is very expensive and time-consuming, and thus not affordable from an economic and manpower point of view. On the other hand, microwave remote sensing techniques allow detecting SMC at the basin scale. Regarding the spatial resolution, the SMC estimates from microwave remote sensing can span from tens of meters up to 50 km, whereas, at the finest spatial resolutions, the highest achievable temporal resolution corresponds to monthly or bimonthly acquisitions. Low spatial resolution estimates can instead be available worldwide on a daily basis. Upcoming sensors such as Sentinel-1 and SMAP will represent a further step towards overcoming these limitations. Sentinel-1 will work at C-band with a rather high spatial resolution of 5 m × 20 m and a temporal repetition frequency of 5-6 days over the European continent and 12 days for global acquisitions. Moreover, recent studies based on Sentinel-1-like data indicated that the improved radiometric resolution of Sentinel-1 may also produce a reduction in the retrieval errors on SMC [14]. It is also worthwhile mentioning the NASA Soil Moisture Active Passive (SMAP) mission, which will offer the uniqueness of simultaneous radar and radiometric observations at L-band, with a ground resolution of around 1 km and a temporal resolution of 3 days [15]. It should be noted that all microwave sensors are able to estimate SMC referring to the first few centimeters of soil only. One proposed solution to improve the spatial and temporal resolution of the available SMC information and to simulate SMC for deeper soil layers is the assimilation of SMC, derived from remote sensing data, into hydrological and land surface models [5,16]. The main aim of this procedure is to update and/or calibrate intermediate or final states of the model variables, thus obtaining an improved estimation of water discharge and/or atmospheric drivers as a major output. Notable improvements have been made in the model assimilation schemes, especially in view of assimilating long time series of SMC estimates. Reichle and Koster [17] assimilated the Global Soil Moisture Data Bank into the NASA catchment land surface model, reaching an improvement in the annual cycle of surface and root-zone SMC in comparison with ground data. The temporal behavior also showed a smaller but significant improvement in the correlation with ground measurements. Crow and Ryu [18] proposed a new algorithm to improve the forecast of run-off through the sequential assimilation of soil moisture values. The work demonstrated that the assimilation can improve the retrieval of both pre-storm soil moisture conditions and storm-scale rainfall accumulations. Draper et al.
[19] have focused their work on the assimilation of available datasets from passive (AMSR-E) and active (ASCAT) sensors into the NASA catchment land surface model. The impact of assimilating each dataset on the modeled soil moisture skill was evaluated using in-situ soil moisture observations from the SCAN/SNOTEL network in the US and the Murrumbidgee Soil Moisture Monitoring Network in southeast Australia. Their research demonstrated that the combined use of SMC estimates from active and passive sensors produced an increase in the retrieval accuracy for each land cover class, with significant improvements for both root-zone and surface soil moisture over croplands, grasslands, and mixed cover. A necessary step before data assimilation is the comparison between the SMC estimated by models and by remote sensing data, in order to verify the compatibility between these two sources of information [16][17][18][19][20]. This step is essential to better understand whether the back-propagated SMC simulated by the model can be compatible with remote sensing estimates. Moreover, at the local scale, some differences due to human interventions, such as tillage activities, need to be properly evaluated in both model and remote sensing estimates. In this view, in fact, Pellenq [21] indicated that it is essential to accurately understand all the processes involved in the soil moisture variability as well as their scale interactions. In the study of Mattia [22], hydrological models were used to provide a priori information in the retrieval process of SMC from remotely sensed data to help disentangle the effect of other variables (roughness and vegetation). Vischel [23] proposed a comparison of two independent methods for SMC estimation on a regional-size catchment in South Africa (Liebenbergsvlei, 4,625 km2). The first estimates were derived from the physically-based distributed hydrological model TOPKAPI [24], while the second set of estimates was derived from the scatterometer on board the European Remote Sensing satellite ERS. The analysis, carried out over two selected seasons of 8 months, showed a good correspondence between the modeled and remotely sensed soil moisture, with determination coefficients, R2, lying between 0.68 and 0.92. In [25], an extensive comparison of meteorological models, such as MM5 and Noah, with simulated and real SMC estimates from ASAR data was proposed, with a focused analysis on the related uncertainties. In this paper, the temporal evolutions of the SMC measured on ground and of the SMC derived from both SAR data and a hydrological model are compared to each other, in order to mainly address the temporal compatibility of the two estimated SMC values. This multiple correlation between the SMC estimated through SAR data, the SMC obtained from the hydrological model, and the SMC measured on ground was carried out with the double purpose of, on the one hand, testing the ability of these two approaches in simulating the real SMC and, on the other hand, checking the possibility of using a rather simple hydrological model for spatially and temporally extrapolating SMC whenever SAR data are missing. The paper is organized as follows. In Section 2, the test site and the available datasets are described. Section 3 describes the retrieval process used to estimate SMC from SAR images, while Section 4 introduces the proposed hydrological model. Comparison results are discussed in Section 5. Section 6 draws conclusions, possible applications and future works.
Test Site and Available Data Sets The investigation was carried out on the Scrivia test site, which is located in North-West Italy (central coordinates: 45°N, 8.80°E) (Figure 1). It is a flat agricultural plain of about 300 km2, situated close to the confluence of the Scrivia and Po rivers. The site is characterized by large homogeneous agricultural fields of wheat, corn, sugar beet, and potatoes [26]. The weather is generally cloudy and rainy in spring and fall, with average SMC > 0.30-0.35 m3/m3, and sunny and dry in summer, with average SMC < 0.15-0.20 m3/m3. According to the crop calendar of this area, in fall (October and November) most fields were bare, with SMC > 0.20-0.25 m3/m3, whereas in spring (March, April) almost half of the agricultural area was covered by growing wheat. The other half consisted of very rough bare fields, waiting for the seeding of corn. In spring, SMC was usually rather high (>0.30 m3/m3) due to the frequent rainfall. In May, corn was sown in very smooth fields. These fields were irrigated and therefore their SMC was highly variable. In June-July, the SMC was usually very low (0.10-0.15 m3/m3) due to the absence of rainfall, except in the irrigated fields. Wheat was in the ripening phase in June and harvested at the beginning of July. ENVISAT/ASAR images were mainly collected from 2003 to 2009, in both HH/HV and VV polarizations and at an incidence angle of 23°. Table 1 lists the available ENVISAT/ASAR images and their configurations. Simultaneously with the satellite acquisitions, ground campaigns were carried out in the sub-area of "Castelnuovo Scrivia". The ground measurements, mainly collected on 23 "reference" fields, involved all the significant vegetation and soil parameters, such as plant density, leaf and stalk dimensions, the number of leaves per plant, plant water content, SMC, and surface roughness. At least 5-6 samples of SMC (measured with TDR probes, which measure an average SMC over the top 10-15 cm of soil, depending on the soil density) and vegetation were collected for each field considered, while surface roughness was measured with a needle profilometer along and across the rows [26]. Retrieval Approach for Estimating SMC from SAR Images The algorithm used for estimating SMC has already been described in [26,27], and it is based on an artificial neural network (ANN) approach. The ANN is a feedforward multilayer perceptron (MLP) with two hidden layers of ten neurons each [28,29]. The algorithm was implemented within the framework of an ESA project (4000103855/11/NL/MP/fk) concerning the development of an operative algorithm for the SMC retrieval from Sentinel-1 data.
Studies carried out in the past pointed out that the main constraint for obtaining a good accuracy with ANN approaches is the "robustness" of the training process, which has to be representative of a variety of surface conditions as wide as possible. In order to meet these requirements, the dataset implemented for the ANN training was obtained by combining experimental satellite measurements of backscattering coefficients (σ°) and corresponding ground parameters, derived from the archives available at IFAC and EURAC. Since these datasets were not sufficiently wide for training the ANN and completely constraining the neurons and weights, data simulated using electromagnetic forward models were included in the training set. The backscattering of bare rough surfaces was obtained using the Advanced Integral Equation Model (AIEM) and the Oh model [30][31][32], while the contribution of light vegetation was accounted for by using the "Water Cloud Model" [33][34][35], deriving the information on vegetation water content from the NDVI through a semi-empirical relationship. Minimum and maximum values of the soil parameters measured during the experimental campaigns (soil moisture and surface roughness) were considered in order to define the range of variability of each soil parameter. The input parameters were the incidence angle (between 20° and 50°), the soil height standard deviation (Hstd, between 1 and 3 cm), the correlation length (Lc, between 4 and 8 cm), the dielectric constant (derived from SMC values between 5% and 45%), and NDVI (between 0.2 and 0.8). Since the relationship between Hstd and Lc is rather complicated and it is difficult to obtain reliable measurements of the Lc parameter, we decided to keep these two quantities independent, associating one random variable with each of them. The consistency between experimental data and model simulations was verified before including the simulated data (more than 10,000 samples) in the training set. The ANN training was carried out by considering the simulated backscattering at the various polarizations and the incidence angle as inputs of the ANN, and the soil parameters as outputs. After training, the ANNs were tested on a different dataset that was obtained by re-iterating the model simulations [27]. Six ANNs were defined and trained specifically, in order to cover all the possible combinations of input data (i.e., 1. VV polarization only, without NDVI; 2. HH polarization only, without NDVI; 3. VV polarization and NDVI; 4. HH polarization and NDVI; 5. VV and VH polarizations; 6. HH and HV polarizations). If available, the cross-polarized channel was considered instead of NDVI to account for the effect of dense vegetation cover. The training was carried out by considering the EO data (measured or modeled) as ANN inputs and the SMC as output.
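To make the retrieval setup above concrete, the following is a minimal sketch of a two-hidden-layer (10 + 10 neuron) MLP regressor mapping backscatter, incidence angle and NDVI to SMC. It uses scikit-learn rather than the authors' implementation, and the synthetic training table and numerical ranges are illustrative placeholders, not the IFAC/EURAC archive or the AIEM/Oh/Water Cloud simulations.

```python
# Minimal sketch of an MLP-based SMC retrieval, assuming a training table with
# simulated/measured backscatter and ancillary inputs; not the authors' code.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical training set: columns are sigma0_VV [dB], sigma0_VH [dB],
# incidence angle [deg], NDVI [-]; target is volumetric SMC [m3/m3].
X_train = np.column_stack([
    rng.uniform(-20, -5, 5000),   # sigma0_VV
    rng.uniform(-28, -12, 5000),  # sigma0_VH
    rng.uniform(20, 50, 5000),    # incidence angle
    rng.uniform(0.2, 0.8, 5000),  # NDVI
])
y_train = rng.uniform(0.05, 0.45, 5000)  # placeholder SMC values

# Two hidden layers of ten neurons each, as described in the text.
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10, 10), activation="tanh",
                 solver="adam", max_iter=2000, random_state=0),
)
ann.fit(X_train, y_train)

# Retrieval on a new acquisition (one pixel / field average).
x_new = np.array([[-12.5, -19.0, 23.0, 0.35]])
smc_estimate = ann.predict(x_new)[0]
print(f"Retrieved SMC: {smc_estimate:.3f} m3/m3")
```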
After training, the ANNs were tested on a different dataset, obtained by re-iterating the model simulations. The most favorable results were obtained when co- and cross-polarizations were available, showing a determination coefficient (R²) equal to 0.80, and RMSE < 0.04 m³/m³ [27]. The ANN algorithm was tested and validated in six main test sites (four in Italy, one in Australia, and one in Spain), where SAR images and simultaneous ground truth data had been collected for several years. Scrivia, Matera, and the Spanish sites were agricultural areas, whereas Cordevole and Alto Adige were mountainous sites. The Australian site, which was chosen in order to test the algorithm in meteorological and climatic conditions far from the Italian sites, was characterized by natural pastures and agricultural fields. Detailed descriptions of these areas are given in [26,27]. By using the ANN algorithm, a series of SMC maps of the Scrivia area was generated from the available ENVISAT/ASAR images of Table 1. The 11 derived maps are shown in Figure 2. Although SMC data measured on ground are available for only a portion of the image, we can note that the variations of SMC are in agreement with the season, showing generally lower values in summer and higher values in fall and spring, when rainfall is significant (average monthly rainfall > 100 mm, especially in spring). The areas where the estimate of SMC is not feasible, i.e., urban areas, water bodies, forests, and dense vegetation, were masked with the help of a land use map, combined with a threshold derived from NDVI data. The latter allowed the masking of dense vegetation, i.e., areas covered by agricultural or natural herbaceous vegetation dense enough to hamper the SMC retrieval (which of course depended on the growing cycle and changed in time). The red circle marks the area where ground measurements were collected. In Table 2, a comparison between SMC values measured on ground and estimated from SAR data is shown. The SMC values (both estimated and measured) were averaged over a portion of the image (marked with a red circle) corresponding to the area where ground truth data were collected. The statistical parameters of the regression between these two datasets, although made up of only a few points, are the following: slope = 0.964, R² = 0.91, RMSE = 0.023 m³/m³ (p < 0.05). The aim of Table 2 was to support the SMC maps of Figure 2, providing an average seasonal trend of SMC. However, looking at a field-by-field comparison, represented in the diagram of Figure 3, the following regression line between estimated SMC (SMC SAR, in m³/m³) and measured SMC (SMC meas, in m³/m³) was obtained: SMC SAR = 0.85 SMC meas + 3.2 (R² = 0.74, RMSE = 0.036 m³/m³, and p < 0.05).

Description of the Hydrological Model
In this section, the results of a comparison between SMC SAR estimates and some hydrological model simulations are presented with reference to the test site of Scrivia.
We used the simple daily-step model described in [36], based on a one-layer soil water balance, which was found to be able to reasonably reproduce soil moisture variability and has the advantage of describing the top soil layer with simple formulations. The model considers a single, homogeneous, well-mixed soil layer for which the daily water balance is computed accounting for precipitation, runoff, gravity-driven infiltration and actual evapotranspiration. The model ignores the effect of groundwater, which is usually acceptable for not extremely shallow soils. This simplification should not be applied whenever topsoil moisture is controlled by capillary rise from the water table. In many cases, however, including those considered in the present paper, it may be safely assumed that the water table is sufficiently deep to exclude any significant effect on topsoil moisture. We provide hereafter a short description of the model, extracted from Appendix A in [36]. The model computes soil volumetric water content variations (1/day) at a daily time step from the balance of precipitation, runoff, gravity-driven drainage and actual evapotranspiration. In the following, θ FC = soil volumetric water content at field capacity (−), θ WP = soil volumetric water content at wilting point (−), PET = potential evapotranspiration (mm/day), L = soil thickness (mm), θ = soil volumetric water content (−), θ r = residual soil volumetric water content (−), I ex = infiltration excess, AET is the actual evapotranspiration, and K(θ) is the saturation-dependent hydraulic conductivity. The model assumes that drainage of the topsoil follows gravity only. Moreover, drainage is not allowed to exceed θ − θ FC during one time step. Actual evapotranspiration, AET (mm/day), is obtained by scaling PET with a factor β that accounts for the reduction of evapotranspiration as soil water content decreases; in the present formulation, we adopt the SWAT model formulation [37]. The saturation-dependent hydraulic conductivity K(θ) follows the well-known Mualem-Van Genuchten model [38,39] with tortuosity parameter τ = 0.5, K(Θ) = K sat Θ^0.5 [1 − (1 − Θ^(n/(n−1)))^((n−1)/n)]², where Θ = (θ − θ r)/(θ s − θ r) is the effective saturation, K sat = saturated hydraulic conductivity (mm/day), and n = exponent in the Van Genuchten soil water retention curve model. Infiltration excess is computed from the storage capacity of the soil surface, Sc (in the present case, a value of 10 mm was assumed by default), and from an infiltration capacity given by the geometric mean of K sat and K(Θ), which needs to be higher than K(Θ) in order to allow infiltration when the soil is in dry conditions. Water in excess of θ s is computed as: S ex = max(0, P − I ex − AET − ΔθL − K(Θ)). Runoff (RO) is computed as: RO = S ex + I ex. Infiltration (F) is given by: F = max(0, P − AET − ΔθL − RO). Day by day, soil water content is updated on the basis of the above calculations. Besides the parameters representing physical characteristics of the soil, which can in principle be determined by experimental measurements, the model requires as input the L parameter (soil thickness). The minimum set of inputs includes precipitation, mean, minimum and maximum temperature at daily steps, and an indication of soil texture. Potential evapotranspiration is estimated using the well-known Hargreaves-Samani formula [40]. The hydraulic behavior of the top soil layer is described using parameters estimated on the basis of soil texture.
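As a rough illustration of the daily balance described above, the following sketch advances a single-layer soil store by one day. The Mualem-Van Genuchten conductivity uses the standard form with τ = 0.5; the β stress function, the infiltration-capacity rule and the ordering of the updates are simplified assumptions and do not reproduce the exact formulation of [36,37].

```python
# Hedged sketch of one daily step of a one-layer soil water balance of the kind
# described above. The beta(theta) stress function and the infiltration-capacity
# rule are simplified placeholders, not the exact formulation of [36,37].
import numpy as np

def mualem_van_genuchten_K(theta, theta_r, theta_s, K_sat, n, tau=0.5):
    """Saturation-dependent hydraulic conductivity, mm/day."""
    Theta = np.clip((theta - theta_r) / (theta_s - theta_r), 1e-6, 1.0)
    m = 1.0 - 1.0 / n
    return K_sat * Theta**tau * (1.0 - (1.0 - Theta**(1.0 / m))**m) ** 2

def daily_step(theta, P, PET, params):
    """Advance soil water content by one day; fluxes in mm/day, theta in m3/m3."""
    p = params
    K = mualem_van_genuchten_K(theta, p["theta_r"], p["theta_s"], p["K_sat"], p["n"])
    # Evapotranspiration reduction between wilting point and field capacity
    # (placeholder linear form standing in for the SWAT beta function).
    beta = np.clip((theta - p["theta_WP"]) / (p["theta_FC"] - p["theta_WP"]), 0.0, 1.0)
    AET = beta * PET
    # Infiltration excess: rain above surface storage plus infiltration capacity.
    infil_capacity = np.sqrt(p["K_sat"] * K)          # geometric mean, as in the text
    I_ex = max(0.0, P - p["Sc"] - infil_capacity)
    # Gravity drainage, limited so it does not exceed (theta - theta_FC) * L.
    drainage = min(K, max(0.0, (theta - p["theta_FC"]) * p["L"]))
    d_theta = (P - I_ex - AET - drainage) / p["L"]
    theta_new = theta + d_theta
    # Saturation excess, runoff (S_ex and RO as in the expressions above).
    S_ex = max(0.0, (theta_new - p["theta_s"]) * p["L"])
    theta_new = min(theta_new, p["theta_s"])
    RO = S_ex + I_ex
    return theta_new, AET, RO

params = dict(theta_r=0.05, theta_s=0.45, theta_FC=0.30, theta_WP=0.10,
              K_sat=300.0, n=1.6, Sc=10.0, L=50.0)   # L = 5 cm topsoil, in mm
theta = 0.25
theta, AET, RO = daily_step(theta, P=12.0, PET=3.5, params=params)
print(f"theta = {theta:.3f}, AET = {AET:.2f} mm, RO = {RO:.2f} mm")
```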
Soils in the test area are predominantly loamy-sands (sand 51%, clay 13%, silt 36%) with a mean bulk density of 1.18 kg/L, according to the soil map of Regione Piemonte (www.regione.piemonte.it)that was used for this study.Data on precipitation and temperature were obtained from the Agenzia Regionale per la Protezione Ambientale (ARPA) Piemonte station of Castelnuovo Scrivia for the period of interest.Taking precipitation and temperature data from a single station implies ignoring the spatial variability of these parameters, and assuming that the station is representative of the whole area.It is well known that precipitation and temperature vary significantly in space, and such assumption would not be suitable when modeling large catchments, especially with complex topography.For the purposes of our analysis, however, this statement can be considered acceptable, as the spatial extent considered is rather limited, and the local topography is very simple.On the other hand, including data from other, far away measurement stations would introduce extrapolation errors that are not desirable in this context. Knowing soil properties, hydraulic parameters can be indirectly estimated using pedotransfer rules or expert systems, such as the popular artificial-neural-network-based ROSETTA (http://cals.arizona.edu/research/rosetta/).After estimating hydraulic properties and the respective standard errors, the ensemble of soil moisture time series, corresponding to all possible combinations of the mean values and values at the extremes of the range for the parameters, may be easily derived.Results of the ensemble of model predictions, measurements, and earth-observation-based estimates will be compared. Comparison of Results and Discussion For the test site of Scrivia, 11 processed satellite images (see Table 1) were available, as well as corresponding ground measurements of soil moisture for the same days.The comparison with the hydrological model simulations is not a validation stricto sensu, but rather a "soft" validation or additional test that served to complement the comparison with ground data.The model provided a continuous simulation of soil moisture that was shown to be in general agreement with the observations.Available SMC measurements could also be used to calibrate the model.Once the model was calibrated for the sites on which data were available, the predicted SMC was compared with the SAR simulated data.The SMC values, both measured and simulated, have been averaged over the area where ground measurements were gathered (see Figure 2).Since the model referred to the average SMC within the soil layer, while SAR SMC products reflected only shallow soil conditions, a careful examination of the two variables needed to be performed. 
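The ensemble construction described above can be sketched as follows: each hydraulic parameter is set to its mean and to the extremes of its range (here mean ± one standard error), and one soil moisture series is simulated per combination, reusing the daily_step sketch given earlier. Parameter values, error ranges and the forcing series are illustrative placeholders, not the ROSETTA output for the Scrivia soils.

```python
# Minimal sketch of the parameter-ensemble idea described above: hydraulic
# parameters (e.g., from ROSETTA) are varied between mean and extreme values,
# and one soil-moisture time series is simulated per combination. Values and
# standard errors below are illustrative placeholders.
import itertools
import numpy as np

# (mean, standard error) for the parameters varied in the ensemble.
param_stats = {"theta_s": (0.45, 0.02), "K_sat": (300.0, 80.0), "n": (1.6, 0.1)}
fixed = dict(theta_r=0.05, theta_FC=0.30, theta_WP=0.10, Sc=10.0, L=50.0)

levels = {k: (m - s, m, m + s) for k, (m, s) in param_stats.items()}
combos = [dict(zip(levels, values)) for values in itertools.product(*levels.values())]

rng = np.random.default_rng(1)
P_series = rng.gamma(0.4, 10.0, 240)      # placeholder daily rainfall, mm
PET_series = np.full(240, 3.0)            # placeholder daily PET, mm

ensemble = []
for combo in combos:                      # 3**3 = 27 members here
    params = {**fixed, **combo}
    theta = 0.25
    series = []
    for P, PET in zip(P_series, PET_series):
        theta, _, _ = daily_step(theta, P, PET, params)   # from the sketch above
        series.append(theta)
    ensemble.append(series)

ensemble = np.array(ensemble)
mean_ts = ensemble.mean(axis=0)
lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)     # 95% band, as in Figure 4
```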
The comparison was carried out by running the hydrological model with parameters for all soil textural classes present in the study area, and by considering the variability of the soil hydraulic parameters estimated by ROSETTA. The simulations considered two soil depths, 30 and 5 cm, as depicted in Figure 4, which shows the average and the 95% confidence interval of the resulting simulation ensemble for each depth. As clearly appears from the figure, the 30-cm simulations give SMC values higher than the satellite estimates, which naturally refer only to the upper layer of soil. In the 5-cm simulations, instead, the agreement between hydrological simulations, satellite estimates and ground measurements may be considered satisfactory. A comparison between the available data was carried out and is shown in Figure 5a, where the SMC estimated with the hydrological model (SMC mod) is directly compared to the SMC measured on ground (SMC meas). Subsequently, a comparison between SMC mod and the SMC estimated from SAR data (SMC SAR) was also carried out (Figure 5b). The obtained regression lines for the two diagrams are the following:
• Hydrological model vs. ground measurements: SMC mod = 0.84 SMC meas + 0.044 (R² = 0.55)
• SAR vs. hydrological model: SMC SAR = 0.638 SMC mod + 0.08 (R² = 0.54)
Considering the low number of available measurements, these correlations were found to be significant at the 95% confidence level. In Table 3, R², slope, RMSE, and p of all the regressions carried out between SMC measured on ground, estimated from SAR data, and simulated by the model at two depths (5 and 30 cm) are shown. We can note that the best correlation was obtained by directly comparing SMC SAR and SMC meas, and the worst, at least in terms of RMSE, between SMC SAR and SMC mod at 30 cm. It can be observed that the SMC is better approximated by the hydrological model at 5 cm. The RMSE values range from 0.051 to 0.054 m³/m³ for the comparison of the SMC estimated with the hydrological model (at 5 cm depth) against the SMC measured on ground and the SMC estimated from SAR data, respectively. Although high spatial resolution products, such as SAR images, usually have a long revisit time, thus hampering their use for simulating soil moisture dynamics, they can be valuable for testing hydrological models and, in particular, hydrological patterns as well as basic assumptions of the model itself. In this view, the test of the soil moisture product against independent hydrological simulations can be considered an interesting result.

Conclusions and Future Works
It is well known that microwave remote sensing techniques can provide rather accurate estimates of soil moisture content (SMC). However, the SMC obtained in this way only refers to the first centimeter layer of the soil, thus limiting its assimilation into hydrological models.
In this paper, a comparison between SMC obtained from SAR images, through an inversion algorithm based on an Artificial Neural Network (ANN) approach, and SMC estimated from a hydrological model was performed. The outputs of the two approaches were subsequently compared with field-measured SMC. The hydrological model estimated SMC at two different depths: 30 cm and 5 cm. The first output tended to overestimate the SMC values obtained from SAR images, which, as expected, represent a shallow SMC. The result of the hydrological model for the first 5 cm depth was instead much more in agreement with satellite data. The RMSE values of these comparisons were 0.052 m³/m³ for the SMC estimated from the hydrological model and 0.023 m³/m³ for the SMC estimated from SAR data. It is highlighted that products derived from high-temporal-frequency satellite images at low spatial resolution have already been used for the assessment of the temporal dynamics of soil moisture. On the other hand, high spatial resolution products, such as those considered in this work, which have a lower temporal frequency (and consequently are of limited importance with respect to soil moisture dynamics), may be extremely valuable for testing hydrological patterns and basic assumptions of the models, such as hydrological connectivity and similarity. For these reasons, a deeper investigation of the reliability and compatibility of the soil moisture products derived from SAR images, by using independently derived hydrological simulations, has an important role in hydrological research. A further comparison between SAR SMC estimates and hydrological model simulations over the Scrivia test site was carried out. The hydrological model reproduced similar values of SMC as compared to the ANN algorithm outputs and ground measurements, provided that the soil layer considered was of the order of only a few centimeters. The accuracies found for the model simulations, the SAR estimates, and the ground measurements indicate that most of them are within the required accuracy for satellite soil moisture products, which, in the case of GMES Sentinel-1, is ≤0.05 m³/m³. This result supports the idea that the model simulations may be used as a substitute for missing SAR data in dense temporal series, or for extending point-scale measurements of SMC to a larger, spatially distributed scale. The comparison conducted in this research can be considered a preliminary exercise, while comparisons with more complex spatially-explicit models should be expanded during future research.
Figure 1. Map of Northern Italy. The red star represents the Scrivia test area.
Figure 2. Soil moisture content (SMC) maps (m³/m³) obtained through the Artificial Neural Network (ANN) algorithm by using ENVISAT/ASAR images collected over the Scrivia area (central coordinates: 45°N, 8.80°E). Masked areas are: white = urban, magenta = forests, dark green = dense vegetation, blue = open water. The dimensions of the images are 20 × 20 km. The red circle marks the area where ground measurements were collected.
Figure 3. SMC estimated by the algorithm (SMC SAR, in m³/m³) on all the available fields of the Scrivia area as a function of the SMC measured on ground (SMC meas, in m³/m³). The continuous line represents the regression equation of the dataset.
Figure 5.
(a) SMC estimated with the hydrological model (SMC mod, in m³/m³) compared to SMC measured on ground (SMC meas, in m³/m³). (b) SMC estimated from SAR (SMC SAR, in m³/m³) compared to SMC estimated by the hydrological model (SMC mod, in m³/m³).
Table 1. ENVISAT/ASAR acquisitions over the Scrivia test site (APP: Alternate Polarization Precision Image, IMP: Image Mode Precision, IMS: Image Mode Single Look Complex).
Table 2. Comparison between the average SMC values estimated from the backscatter of the images of Figure 2 (SMC SAR, in m³/m³) and the corresponding SMC values measured on ground (SMC meas, in m³/m³), averaged on 23 fields.
Table 3. Statistical parameters (R², slope, RMSE in m³/m³, and p) of all the performed regression equations between SMC estimated from SAR data (SMC SAR) and from the hydrological model (SMC mod) at two depths (5 and 30 cm), and the SMC measured on ground (SMC meas).
7,022.4
2013-10-11T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Zero Watermarking Algorithm for Vector Geographic Data Based on the Number of Neighboring Features : Zero watermarking is an important part of copyright protection of vector geographic data. However, how to improve the robustness of zero watermarking is still a critical challenge, especially in resisting attacks with significant distortion. We proposed a zero watermarking method for vector geographic data based on the number of neighboring features. The method makes full use of spatial characteristics of vector geographic data, including topological characteristics and statistical characteristics. First, the number of first-order neighboring features (NFNF) and the number of second-order neighboring features (NSNF) of every feature in vector geographic data are counted. Then, the watermark bit is determined by the NFNF value, and the watermark index is determined by the NSNF value. Finally, combine the watermark bits and the watermark indices to construct a watermark. Experiments verify the theoretical achievements and good robustness of this method. Simulation results also demonstrate that the normalized coefficient of the method is always kept at 1.00 under the attacks that distort data significantly, which has the superior performance in comparison to other methods. Introduction Vector geographic data is one of the most important production materials in information society [1]. It is an inevitable requirement to ensure the security of vector geographic data in order to develop geographic information systems (GIS) industries. As a frontier technology for information security, digital watermarking plays a critical role in copyright protection and content authentication of vector geographic data [2][3][4][5][6]. Particularly in terms of copyright protection, zero watermarking has gained more and more attention. It is a kind of watermarking technology that does not cause any modification to the host data [7]. Zero watermarking constructs watermark by means of quantifying the characteristics of the data and then registers the watermark and additional information to a third-party intellectual property rights (IPR) repository. Therefore, compared with the traditional embedding watermarking [8], zero watermarking has no damage to data accuracy, which can be applied for vector geographic data with high-precision requirements. It balances the contradiction between the invisibility and the robustness of watermarking [9]. The robustness of watermarking refers to the ability to detect the watermark information from watermarked data after being attacked [10,11]. However, there are many kinds of attacks for vector geographic data, such as geometrical attacks, vertex attacks, and object attacks [12,13]. These attacks will damage data from different perspectives, thereby affecting the synchronization of watermark information. It puts forward higher demands for the robustness of zero watermarking. Thus, how to improve the robustness is a hotspot in the current research of zero watermarking for vector geographic data. The existing zero watermarking methods for vector geographic data can be divided into two types. The first type is the method based on attribute characteristics [14][15][16][17]. It quantifies the descriptive information of vector geographic data (such as what, why, and when) to construct a zero watermark. For example, scholars select element coding [14], stroke width [15], color [16], and map symbols [17] as the attribute characteristic. 
If the attribute information is of a numeric type, it can be directly quantified to watermark information. If it is of a text type, some approaches of converting text to numbers need to be used, such as encoding and statistics. Generally, this type of method has strong robustness and can perfectly resist geometrical attacks, including rotation, scaling, and translation (RST) attacks. This is because RST attacks only change coordinates, not attributes. However, the method has high requirements for the integrity of the attributes. Only the vector geographic data whose attributes meet certain conditions can implement watermark embedding. Moreover, the attribute information of vector geographic data differs from the production stage and the application scenario. Therefore, this type of method has significant application limitations. The second type is the method based on spatial characteristics [8,9,[18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]. It quantifies the coordinates of vector geographic data to construct the zero watermark. There are two ways to use coordinates. One is to use the coordinate directly. For example, literature [21] compares the coordinates of vertices to obtain a Boolean sequence and then quantify the sequence to a watermark that contains only zero and one. Literatures [9,18,24] count the number of vertices that meet certain spatial location conditions. Another is to use the coordinate indirectly. For example, scholars employ the angle [8,19,20,[29][30][31][32], distance [25,26], distance ratio [22,27], and topology [28] to construct a watermark, respectively. Compared with attribute characteristics, spatial characteristics are the basis of vector geographic data. Therefore, the method based on spatial characteristics solves the application limitation of the first type. Besides, most of them can resist common geometrical attacks. However, the watermark synchronization of this method is strongly coupled with the coordinates, so it is difficult to resist the attacks with significant distortion, such as non-uniform scaling and projection transformation attacks. Through the above analysis, it is found that the method based on attribute characteristics is robust to attacks related to coordinates but has many limitations in practical applications. The method based on spatial characteristics solves the issues of the former and can resist common geometrical attacks, but it is difficult to resist attacks that have significant distortion on the data. Therefore, how to find the characteristics that can completely resist data deformation is a critical scientific problem. To resolve the above problem, we propose a zero watermarking algorithm based on the number of neighboring features for vector geographic data. It changes the target of statistics from vertices to features and is fully integrated with topological relationships between features. The remainder of this paper is organized as follows. Section 2 gives the basic ideas and details of the algorithm. The experimental design is described in Section 3. Then Section 4 provides corresponding experimental results and analyses, and Section 5 gives some discussions. Finally, Section 6 draws the conclusion. Methodology The proposed method is based on the number of neighboring features of vector geographic data. Specifically, the polygon data is used as the watermarking target in this paper. For a polygon, it has first-order neighbors, that is, the polygons that are neighbors directly to the polygon. 
At the same time, it also has second-order neighbors that refer to neighbors of neighbors. The key to the method is how to use the number of neighboring features. Firstly, the number of first-order neighboring features (referred to as NFNF hereafter) and the number of second-order neighboring features (referred to as NSNF hereafter) are counted for every feature. Then, NFNF is quantified as the watermark bit, and NSNF is quantified to the watermark index. The watermark index represents the mapping relationship between the feature and the watermark bit. Finally, the watermark is constructed by combining the watermark indices and the watermark bits. Figure 1 shows the basic idea of the algorithm. The final constructed watermark can be encrypted with a secret key to enhance the security of the method. For example, the literature [28] uses a logistic mapping model to encrypt the watermark information. That is, the proposed method can be combined with the encryption method. This paper focuses on the procedure of constructing the zero watermark, so we do not integrate any encryption operations in the method. Neighboring Features The neighboring relationships can be divided into three types for polygons: (1) Overlapping neighbors: polygons that have all or part of their areas overlapping; (2) Edge neighbors: polygons that have common or touching boundaries; (3) Node neighbors: polygons that touch at a point (https://desktop.arcgis.com/en/arcmap/latest/tools/analysistoolbox/how-polygonneighbors-analysis-works.htm). Polygon A and polygon B demonstrate these three types of neighboring relationships in Figure 2. In the proposed method, if two features belong to any of the three types, they will be regarded as neighbors. Figure 3a shows the statistics of NFNF values, and the number in each polygon is its corresponding NFNF value. Similarly, Figure 3b shows the statistics of NSNF values. For a polygon, its NSNF is the sum of the NFNFs of all its neighbors and is calculated by the following equation. where n is the NFNF of a polygon and NFNF i means the NFNF of its i-th neighbor. In Figure 3, take polygon A in the upper left corner as an example. From Figure 3a, it is easy to see that polygon A has two neighbors: B and C, so the NFNF of polygon A is 2. Then, polygon B and polygon C have five and four neighbors separately, so the NSNF of polygon A is 9, as shown in Figure 3b. Clearly, the NSNF value is larger than the NFNF value for a polygon. Therefore, using the NSNF to quantify the watermark index can make the distribution of the watermark index more uniform. The Determination of the Watermark Bit In this paper, the watermark is a binary sequence with a fixed length [27]. Every watermark bit is zero or one. The watermark bit of a polygon is determined by quantizing its NFNF. The quantization is a process of mapping input values from a large set to output values in a smaller set. We use the following equation to quantify the NFNF. where Mod is the modulo operation. The Determination of the Watermark Index Similar to the determination of the watermark bit, the watermark index is obtained by quantizing the NSNF. Set the length of the watermark to N, so the watermark index of a polygon ranges from 1 to N. It can be calculated by the following equation. where Rand is a random number generator that creates uniformly distributed pseudorandom integers between 1 and N. The first parameter of Rand is the seed or start value of the random number generator. 
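Since Equations (1)-(3) are not reproduced in the text above, the following sketch states them as assumptions consistent with the description: NSNF is the sum of the neighbors' NFNFs, the watermark bit is taken as Mod(NFNF, 2), and the watermark index is a pseudorandom integer in [1, N] seeded by NSNF. The toy adjacency list reproduces the polygon A example of Figure 3.

```python
# Hedged sketch of the NFNF/NSNF statistics and their quantization to watermark
# bits and indices. Equations (1)-(3) are not reproduced in the text above, so
# the modulo-2 bit and the NSNF-seeded random index are assumptions consistent
# with the description (bit in {0,1} via Mod; index in [1, N] via a seeded RNG).
import random

def nfnf(neighbors, fid):
    """Number of first-order neighboring features of feature `fid`."""
    return len(neighbors[fid])

def nsnf(neighbors, fid):
    """Sum of the NFNFs of all first-order neighbors (neighbors of neighbors)."""
    return sum(nfnf(neighbors, nb) for nb in neighbors[fid])

def watermark_bit(nfnf_value):
    return nfnf_value % 2                       # assumed Mod(NFNF, 2)

def watermark_index(nsnf_value, N):
    rng = random.Random(nsnf_value)             # NSNF used as the seed
    return rng.randint(1, N)                    # uniform integer in [1, N]

# Toy adjacency list: feature id -> ids of its first-order neighbors
# (e.g., polygon A with neighbors B and C, as in Figure 3).
neighbors = {"A": ["B", "C"],
             "B": ["A", "C", "D", "E", "F"],
             "C": ["A", "B", "D", "G"],
             "D": ["B", "C"], "E": ["B"], "F": ["B"], "G": ["C"]}

N = 32
for fid in neighbors:
    b = watermark_bit(nfnf(neighbors, fid))
    i = watermark_index(nsnf(neighbors, fid), N)
    print(fid, "NFNF:", nfnf(neighbors, fid), "NSNF:", nsnf(neighbors, fid),
          "bit:", b, "index:", i)
```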
The watermark index of a polygon presents the position of its watermark bit in the final zero watermark. There will be a situation that the watermark indices of multiple polygons are the same. Then, the majority voting mechanism [12] is employed to determine the watermark bit on the watermark index. For a watermark index, if the number of watermark bit 1 is larger than that of watermark bit 0, the watermark bit will be recorded as 1. Otherwise, the watermark bit will be recorded as 0. Watermark Generation and Extraction In the proposed method, the watermark generation and the watermark extraction are exactly the same processes. However, they happen at different times. The watermark generation occurs in the initial stage of the algorithm to generate a watermark. However, the watermark extraction is executed when copyright infringement needs to be confirmed, which generates a watermark from suspicious data and identifies the watermark by comparing it with the watermark registered in the IPR repository. Denote the polygon data as P = {P 1 , P 2 , . . . , P M }, P i presents the polygon in i-th position by storage, and M is the total number of polygons. Firstly, count the NFNF and NSNF of P i . Then, get the watermark bit and the watermark index of P i according to Equations (2) and (3). The set of watermark bits is denoted by WB = {WB 1 , WB 2 , . . . , WB M }, and the set of watermark indices is denoted by WI = {WI 1 , WI 2 , . . . , WI M }. Next, introduce a set that all the elements are 0, denoted by S = {S 1 , S 2 , . . . , S N }. Reassign the value of S i by the following equation: Finally, construct the zero watermark by Equation (5) and denote it as W = {W 1 , W 2 , . . . , W N }. Datasets To evaluate the performance of the proposed method, we choose the administrative divisions of an area in China as our datasets. The data format of the datasets is shapefile, which is a common geospatial vector data format for GIS applications. Its projected coordinate system is Gauss Kruger, in which the datum is Beijing 1954 and the unit is the meter (m). The bounds in the x-axis and y-axis are [−1.6 × 10 5 , 5.3 × 10 4 ] and [3.8 × 10 6 , 4.0 × 10 6 ], separately. As shown in Figure 4, there are 729 features and 152701 vertices in the experimental data. Experiment Design and Implementation This section is to set up attack experiments with different types and intensities to verify the robustness of the proposed method. The following attack types are selected, as shown in Table 1. Some attacks will cause significant distortion in the data, such as non-uniform scaling in geometrical attacks, simplification in vertex attacks, and most projection transformation attacks. If an algorithm can resist the above attacks, it is proved that the algorithm solves the scientific problem of this article. That is, the algorithm finds the characteristics that can completely resist data deformation. The rest of the attacks are also common attacks in geographic analysis, and a qualified digital watermarking algorithm should be able to resist them. Detailed settings for every type of attack will be given later. Meanwhile, two algorithms are selected for comparison, referred to as Wang [24] and Li [28] separately. They are the representatives of the method based on spatial characteristics. The former uses coordinate directly. It constructs concentric rings and then quantifies the number of vertices in each ring to obtain a zero watermark. The latter uses coordinates indirectly. 
Firstly, it builds the graphical complexity index of the polygon, then calculates the spatial correlation coefficient based on the graphical complexity index. Finally, it quantizes the coefficients to obtain a zero watermark.

Geometrical Attacks
Generally, geometrical attacks include RST attacks. In particular, scaling attacks can be categorized into uniform scaling and non-uniform scaling. Uniform scaling means that the scaling factors in the x and y directions, denoted as Sx and Sy, are the same, while Sx is not equal to Sy in non-uniform scaling. In rotation attacks, the data is rotated clockwise about the data center by a rotation angle from 0° to 360° with a step of 60°. In scaling attacks, the data is scaled with respect to the data center by the scale factors 0.4, 0.6, 0.8, 2, 3, and 4. For translation attacks, the data is translated by a distance, denoted as the translation distance, at the same time in the x and y directions. The translation distance ranges from 0 m to 600 m with a gap of 100 m. Some results after attacks are shown in Figure 5. One can see that non-uniform scaling in Figure 5d makes the shape of the features deformed.

Vertex Attacks
In this paper, vertex attacks include interpolation and simplification, which change the number of vertices in the data. In interpolation attacks, we use linear interpolation to insert vertices between adjacent vertices. When the distance between the x-coordinates or y-coordinates of adjacent vertices is greater than the defined tolerance (referred to as the interpolation tolerance), a new vertex is inserted between the adjacent vertices. The interpolation tolerance ranges from 0 m to 600 m with a step of 100 m. On the contrary, simplification attacks remove vertices from the data, for which the Douglas-Peucker algorithm [12] is used. The simplification tolerance also ranges from 0 m to 600 m with a step of 100 m. Figure 6 shows the results after interpolation and simplification when the tolerance is 600 m, and the panels on the right are partial enlargements of those on the left. It is easy to see that the shape of the features in Figure 6b is greatly distorted in comparison to Figure 6a.

Object Attacks
Object attacks in this paper include object addition and object deletion. The feature, as the minimum operating unit, is added to or deleted from the data. Therefore, the number of features changes, but the shape of the features does not. In the experiments, features are added or deleted sequentially from the bottom of the original data. Both the addition ratio and the deletion ratio range from 0% to 30% of the number of original features, with a step of 5%. Some visualization results after object attacks are shown in Figure 7.

Projection Transformation Attacks
Projection transformation is a process of projecting spatial data from one coordinate system to another [33]. Since the experimental data in this paper have a projected coordinate system, two types of projection transformation attacks are set. The first is from the projected coordinate system to geographic coordinate systems, for which three geographic coordinate systems are selected: Beijing 1954, Xian 1980, and China Geodetic Coordinate System (CGCS) 2000, numbered 1, 2, and 3. The second is from the projected coordinate system to other projected coordinate systems. We also choose three projected coordinate systems: cylindrical equal area, Lambert conformal conic, and azimuthal equidistant, numbered 4, 5, and 6. The results after projection transformation attacks are shown in Figure 8 (a sketch of how the geometrical attacks act on the vertex coordinates is given below).
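The geometrical attacks listed above amount to affine transformations of the vertex coordinates; a brief sketch with the same parameter ranges follows. It is illustrative only and is not the authors' test code, but it makes explicit why such attacks cannot change the neighboring relationships the proposed watermark is built on.

```python
# Illustrative sketch of the geometrical attacks above applied to vertex arrays.
# Parameters mirror the experiment settings; this is not the authors' test code.
import numpy as np

def rotate(xy, angle_deg, center):
    """Clockwise rotation about a center point."""
    a = np.deg2rad(-angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return (xy - center) @ R.T + center

def scale(xy, sx, sy, center):
    """(Non-)uniform scaling about a center point."""
    return (xy - center) * np.array([sx, sy]) + center

def translate(xy, dx, dy):
    return xy + np.array([dx, dy])

vertices = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
center = vertices.mean(axis=0)

attacked = translate(scale(rotate(vertices, 60, center), 3.0, 1.0, center), 300, 300)
# All three transforms are affine and invertible, so they change coordinates but
# never change which features touch which: NFNF and NSNF stay the same, which is
# why the NC value of the topology-based watermark remains 1.00 in Figures 9-12.
```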
It is easy to see that except for Figure 8e, the shape of features is distorted substantially. Evaluation After extracting the zero watermark W from the suspicious data, we need to compare it with the zero watermark W that registered in the IPR repository. Normalized correlation (NC) is employed to evaluate the quality of the extracted results, and it is one of the most common evaluation methods [34]. The mathematical equation of NC is as follows: where N is the length of the watermark. NC ranges from 0 to 1, and the larger the NC, the greater the correlation between the extracted watermark and the registered watermark. In addition, the threshold of NC is introduced, which is an empirical value. When the NC is greater than the threshold, the watermark detection succeeds. Otherwise, it is considered a failure. In the proposed method, we set the length of watermark N to 32 and the threshold of NC to 0.75. Figures 9-12 show the results of geometrical attacks. Overall, three algorithms perform equally well in rotation, uniform scaling and translation attacks, where their NC values are all 1.00. However, in non-uniform scaling attacks, the differences between the three algorithms are obvious. As shown in Figure 11, Sx increases from 0.4 to 4, while Sy remains at 1. Therefore, from (0.4,1) to (1,1), the shape of features recovers from the flattened state in the x direction. However, from (1,1) to (4,1), the shape is elongated in the x direction. It can be observed that the NC values of the proposed method always keep 1.00, while that of Wang and Li increases and then decreases. Especially when the scaling factor is (0.4,1), (3,1) and (4,1), the NC values of Li fall below the threshold value, 0.75. Thus, it is proved that the proposed method can resist geometrical attacks completely. The Results of Object Attacks The results of object attacks are shown in Figures 15 and 16. The NC values of the three methods all change with different addition and deletion ratios. In object addition attacks, both the proposed method and Wang tend to decrease first and then increase, in which their lowest values are 0.87 and 0.97, separately, and Wang performs better. However, Li's NC value keeps decreasing, especially when the addition ratio is 30%, the NC value drops to 0.76, which is slightly higher than the threshold. In object deletion attacks, the three algorithms all show an overall downward trend. Wang's ups and downs are large, the proposed method is relatively flat, but both are above the threshold. Li also keeps dropping, but when the deletion ratio exceeds 20%, the NC value falls below the threshold. Overall, the performance ranking of the three is that Wang is better than the proposed algorithm, and the proposed algorithm is better than Li. Since the NC values of the proposed method are always above the threshold, the method can resist object attacks. Figure 17 shows the results of projection transformation attacks. Overall, the NC value of the proposed method maintains at 1.00, while that of Wang and Li is less than 1.00 except when the projection transformation number is 5. As mentioned in Section 3.2.4, projection transformation number 5 refers to projecting the data to Lambert conformal conic projection, where the shape of features does not change much. However, the shape of the features is distorted in other projection transformation attacks. The minimum NC values of Wang and Li are 0.92 and 0.73, respectively, when the projection transformation numbers are 4 and 6. 
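The NC measure used throughout these results can be sketched as follows. Since Equation (6) is not reproduced in the text, a standard normalized correlation for binary sequences is assumed here, and the paper's exact formulation may differ in detail; the 0.75 threshold is the one stated above.

```python
# Hedged sketch of the NC evaluation used above. Equation (6) is not reproduced
# in the text, so a standard normalized correlation for binary watermarks is
# assumed here; the original formulation may differ in detail.
import numpy as np

def normalized_correlation(w_registered, w_extracted):
    w1 = np.asarray(w_registered, dtype=float)
    w2 = np.asarray(w_extracted, dtype=float)
    denom = np.sqrt((w1**2).sum() * (w2**2).sum())
    return float((w1 * w2).sum() / denom) if denom > 0 else 0.0

N = 32
rng = np.random.default_rng(42)
w_reg = rng.integers(0, 2, N)          # watermark registered in the IPR repository
w_ext = w_reg.copy()
w_ext[:4] ^= 1                         # flip a few bits to mimic a mild attack

nc = normalized_correlation(w_reg, w_ext)
print(f"NC = {nc:.2f}, detection {'succeeds' if nc > 0.75 else 'fails'}")
```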
Therefore, the proposed algorithm performs best, which shows that it can resist projection transformation attacks completely. Discussions The above experimental results suggest that the proposed method has strong robustness under various attacks, especially for attacks with significant distortion, such as non-uniform scaling attacks, simplification attacks, and projection transformation attacks. Further discussions will be given from three perspectives to understand the characteristics of the proposed method better. Local Characteristics NFNF and NSNF are the foundation of the proposed method to construct the zero watermark. Based on neighbors, the former determines the watermark bit. Based on neighbors of neighbors, the latter determines the watermark index. Therefore, both of them reflect the local characteristics of the vector geographic data. This is why the proposed method can resist object attacks. When a feature is added or deleted in vector geographic data, this only affects one local part of the data, while some parts can be retained without being damaged. That is, the watermark in these unattacked parts is preserved and can be detected successfully. Meanwhile, NFNF and NSNF are based on statistics of features, not vertices. Compared with vertices, the characteristics of the two are not so particularly local, but they are just right. This is the core reason why the proposed method can resist vertex attacks. In vertex attacks, the interpolation and simplification of vertices do not affect the neighboring relationship between features in vector geographic data, which is verified by Figures 13 and 14. In the two figures, with the addition and deletion of vertices, the NC value of the proposed method is always kept at 1.00. Thus, the two local characteristics, NFNF and NSNF, enhance the robustness of the proposed algorithm. Applied to Polyline Data The algorithm in this paper is used for polygon data because it is based on the number of neighboring polygons. However, only with some simple modifications, the method can also be applied to polyline data. The key idea is to change the topological relationship from the adjacency to the intersection when counting the number of neighboring features in polyline data. The main procedure of the modified algorithm is as follows: (1) Similar to NFNF and NSNF, the number of the first-order intersecting features (denoted by NFIF) and the number of the second-order intersecting features (denoted by NSIF) are counted for every polyline feature. (2) NFIF is quantified to the watermark bit, and NSIF is quantified to the watermark index. (3) According to the majority voting mechanism, a zero watermark is constructed by combining the watermark bits and the watermark indices. A demonstration of NFIF and NSIF of a polyline data is given below, as shown in Figure 18. The polyline data contains twelve polyline features rendered with different colors. If two features have one or more common points, they are regarded as the intersecting features. Figure 18a shows the statistics of NFIF values, and Figure 18b shows the statistics of NSIF values. The labels near the features are their corresponding NFIF values or NSIF values. For example, for polyline A in the upper left corner, its NFIF and NSIF are 1 and 3, respectively. Furthermore, unlike polyline data and polygon data, there is no similar intersecting or neighboring relationship for points in point data. Therefore, the proposed method is not suitable for point data. 
Converting points to polylines or polygons seems like a feasible approach to making it possible, for example by constructing a Voronoi diagram from the points. This will be further investigated in our future works.

The Watermark Uniqueness
The watermark uniqueness is one of the most important indicators in zero watermarking. It is determined by the characteristics of the data used to construct the watermark. A good zero watermarking algorithm needs to ensure that the watermarks constructed from different data are very different. To verify the watermark uniqueness of the proposed method, we selected six test datasets that are all administrative division maps, denoted Data 1-6, as shown in Figure 19. First, watermarks are constructed from the six test datasets using the proposed method. Then the NC values between the six watermarks and the watermark generated from the experimental data in Section 3.1 are calculated, respectively. If the NC values are less than the threshold, the method has good watermark uniqueness; otherwise, the method is not qualified. Finally, considering that the watermark length will affect the watermark uniqueness, we chose different watermark lengths for the experiments: 32, 64, 128, and 256. Therefore, four groups of experimental results are produced, each with six NC values, as shown in Figure 20. It can be observed in Figure 20 that all the NC values are less than the threshold for the six test datasets under the four different watermark lengths. It also roughly shows a trend that the longer the watermark length, the lower the NC value for the data. In detail, the maximum of the NC values is 0.71, and most of them are concentrated around 0.55 and 0.65, below the threshold of 0.75. This proves that the proposed method meets the requirement of watermark uniqueness.

Conclusions
In zero watermarking for vector geographic data, resisting attacks that cause significant distortion of the data is a challenging problem. The key is to find characteristics that are not affected by data deformation. In this paper, two local characteristics are introduced: NFNF and NSNF, and they are quantified to the watermark bit and the watermark index, respectively. Among them, NSNF, the number of second-order neighboring features, is introduced for the first time into watermarking for vector geographic data. Further, NFNF and NSNF make full use of the topological and statistical characteristics of the data, which are the foundation of the proposed method. Experimental results show that this method has good robustness and can completely resist attacks with significant distortion, such as non-uniform scaling, simplification, and projection transformation attacks, compared with other algorithms.
The proposal of this method is a new exploration in improving the robustness of zero watermarking for vector geographic data. Moreover, the combination of topological characteristics and statistical characteristics can provide some ideas for future watermarking research. However, this method is not suitable for point data. Exploring the conversion of point data to polyline data and polygon data will be the focus of our future research.
6,374.2
2021-01-28T00:00:00.000
[ "Computer Science" ]
Linearized phase-modulated analog photonic link with the dispersion-induced power fading effect suppressed based on optical carrier band processing A linear phase-modulated photonic link with the dispersion-induced power fading effect suppressed based on optical carrier band (OCB) processing is proposed. By introducing a proper phase shift to the OCB, the third-order intermodulation distortion (IMD3) component of the signal transmitted over a length of fiber is effectively suppressed, while the fundamental component is shifted to be away from the notch point of the transmission response. The IMD3 and the dispersion-induced power fading effect are effectively suppressed simultaneously to realize a linear phase-modulated photonic link, and the simplicity is preserved. Theoretical analyses are taken and an experiment is carried out. Simultaneous suppression of IMD3 and dispersion-induced power fading effect is achieved. An improvement of larger than 10 dB in third-order spurious-free dynamic range (SFDR3) for both the RF frequency around the notch point and the peak point of the transmission response curve for a 20-km link is realized, as compared with the traditional phase-modulated photonic link without the OCB processing. © 2017 Optical Society of America OCIS codes: (060.5625) Radio frequency photonics; (060.2360) Fiber optics links and subsystems; (070.1170) Analog optical signal processing. References and links 1. J. Capmany and D. Novak, “Microwave photonics combines two worlds,” Nat. Photonics 1(6), 319–330 (2007). 2. J. Yao, “Microwave photonics,” J. Lightwave Technol. 27(3), 314–335 (2009). 3. D. Zhu, J. Chen, and S. Pan, “Multi-octave linearized analog photonic link based on a polarization-multiplexing dual-parallel Mach-Zehnder modulator,” Opt. Express 24(10), 11009–11016 (2016). 4. T. R. Clark and M. L. Dennis, “Coherent optical phase-modulation link,” IEEE Photonics Technol. Lett. 19(16), 1206–1208 (2007). 5. Y. Shen, B. Hraimel, X. Zhang, G. E. Cowan, K. Wu, and T. Liu, “A novel analog broadband RF predistortion circuit to linearize electro-absorption modulators in multiband OFDM radio-over-fiber systems,” IEEE Trans. Microw. Theory Tech. 58(11), 3327–3335 (2010). 6. A. Ferreira, T. Silveira, D. Fonseca, R. Ribeiro, and P. Monteiro, “Highly linear integrated optical transmitter for subcarrier multiplexed systems,” IEEE Photonics Technol. Lett. 21(7), 438–440 (2009). 7. S. Li, X. Zheng, H. Zhang, and B. Zhou, “Highly linear radio-over-fiber system incorporating a single-drive dualparallel Mach–Zehnder modulator,” IEEE Photonics Technol. Lett. 22(24), 1775–1777 (2010). 8. M. Huang, J. Fu, and S. Pan, “Linearized analog photonic links based on a dual-parallel polarization modulator,” Opt. Lett. 37(11), 1823–1825 (2012). 9. W. Li and J. Yao, “Dynamic range improvement of a microwave photonic link based on bi-directional use of a polarization modulator in a Sagnac loop,” Opt. Express 21(13), 15692–15697 (2013). 10. G. Zhang, S. Li, X. Zheng, H. Zhang, B. Zhou, and P. Xiang, “Dynamic range improvement strategy for MachZehnder modulators in microwave/millimeter-wave ROF links,” Opt. Express 20(15), 17214–17219 (2012). 11. Y. Cui, Y. Dai, F. Yin, J. Dai, K. Xu, J. Li, and J. Lin, “Intermodulation distortion suppression for intensitymodulated analog fiber-optic link incorporating optical carrier band processing,” Opt. Express 21(20), 23433– 23440 (2013). 12. P. Li, L. Yan, T. Zhou, W. Li, Z. Chen, W. Pan, and B. 
Luo, “Improvement of linearity in phase-modulated analog photonic link,” Opt. Lett. 38(14), 2391–2393 (2013). 13. J. Li, Y. C. Zhang, S. Yu, and W. Gu, “Optical Sideband Processing Approach for Highly Linear PhaseModulation/Direct-Detection Microwave Photonics Link,” IEEE Photonics J. 6(5), 1–10 (2014). Vol. 25, No. 9 | 1 May 2017 | OPTICS EXPRESS 10397 #287018 https://doi.org/10.1364/OE.25.010397 Journal © 2017 Received 20 Feb 2017; revised 13 Apr 2017; accepted 19 Apr 2017; published 26 Apr 2017 14. H. Chi, X. Zou, and J. Yao, “Analytical models for phase-modulation-based microwave photonic systems with phase modulation to intensity modulation conversion using a dispersive device,” J. Lightwave Technol. 27(5), 511–521 (2009). 15. J. Chen, D. Zhu, and S. L. Pan, “Linearized phase-modulated analog photonic link based on optical carrier band processing”, in WOCC 2016 (2016). 16. C. H. Cox, Analog optical links: theory and practice (Cambridge University Press, 2006), Chap. 6. 17. M. Chen, H. Yu, and J. Wang, “Silicon Photonics-Based Signal Processing for Microwave Photonic Frontend,” Silicon Photonics III (Springer, 2016), Chap. 11. Introduction Analog photonic links (APLs) have attracted wide attentions due to its possibility in both commercial and military applications with the advantages of low loss, wide working bandwidth, light weight and immunity to electromagnetic interference [1,2].The spur-free dynamic range (SFDR) is a significant performance indicator for an analog photonic link.The SFDR is usually restricted by nonlinear distortions, among which the third-order intermodulation distortion (IMD3) is the primary limit of the sub-octave analog photonic link, since it is close to the RF carrier and cannot be simply removed by filters [3].Many methods have been proposed to realize IMD3 suppression.Typical electrical methods include electrical predistortion or post processing [4,5], which is still limited by the electrical bottleneck.Photonic methods have been proposed to produce one complementary IMD3 to cancel the existing one by using a dual Mach-Zehnder modulator [6], a dual-parallel Mach-Zehnder modulator [7], or a polarization modulator [8,9], which introduce additional complexity.Recently photonic approaches using direct optical processing have been proposed to realize the IMD3 suppression [10,11].Furthermore, direct optical processing methods have also been proposed to realize linearized phase-modulated photonic link [12,13], since the phasemodulated analog photonic link is free of bias drifting problem and has the advantages of the linear phase modulation process compared with the intensity-modulated photonic link [14].However, extra optical bandpass filters are used in [12], which makes the system complicated and restricts the working range.In [13], appropriate phase shifts need to be imposed to both the optical carrier band (OCB) and the second-order sidebands to suppress IMD3.On the other hand, for the long-distance transmission applications, dispersion-induced power fading effect need to be solved.Few works have been done considering both the linearity and the dispersion-induced power fading effect in a photonic analog link.Recently, we have proposed a linear phase-modulated photonic link based on OCB processing, which solves the dispersion-induced power fading problem simultaneously [15].Considering the dispersion of the transmitting fibers, the IMD3 of the signal transmitted over a length of the photonic link is power faded while maintaining the fundamental components by 
processing the OCB.However, only some preliminary numerical simulation results were reported in [15], which is insufficient to understand the approach. In this paper, we perform a comprehensive theoretical and experimental study on the linear phase-modulated photonic link with the dispersion-induced power fading effect suppressed based on OCB processing.Theoretical analyses are taken and a proof of concept experiment is carried out.Simultaneous suppression of IMD3 and dispersion-induced power fading effect is achieved.The values of third-order spurious-free dynamic range (SFDR3) are larger than 102 dB•Hz 2/3 for the RF frequencies around both the notch point and the peak point of the transmission response curve for a 20-km link, indicating a more than 10-dB improvement in SFDR3 as compared with the traditional phase-modulated photonic link without the OCB processing. Principle Figure 1 shows the proposed linearized phased-modulated analog photonic link with the dispersion-induced power fading effect suppressed based on OCB processing, which consists of a laser diode (LD), a phased modulator (PM), an optical carrier band (OCB) processor, a length of single mode fiber (SMF) and a photodetector (PD).In order to characterize distortions of the analog photonic link, a common RF practice of two-tone signal analysis is taken [16].The optical carrier with frequency of ω 0 is introduced to the PM and modulated by a two-tone RF signal with the frequencies of ω 1 and ω 2 .The two-tone signal is given by V m (t) = V [cos(ω 1 t) + cos(ω 2 t)], where V is the amplitude of the two-tone signal.The modulated optical signal at the output of the PM is as follows ( ) where E 0 is the amplitude of the optical signal, m = πV / V π , V π is the half-wave voltage of the PM.It can be further expanded in terms of Bessel functions of the first kind as As shown in Fig. 1 (b), with the OCBP introducing a phase shift of φ to the optical carrier band, the optical field of the optical carrier band can be expressed as where J n = J n (m) (n = 0, ± 1).After transmitting over a SMF with a length of L, the fiber dispersion introduces phase shift of θ ω = βL, where β represents the propagation constant related to optical carrier frequency.Thus the optical field can be expressed as where J n = J n (m) (n = 0, ± 1, ± 2), and higher order components are ignored.By injecting the optical signal into the PD, the output electrical signal is where β' and β" represents the 1st and 2nd derivation of β, respectively.Thus the coefficients of the fundamental and the IMD3 components are as follows ( ) 6) can be derived as It can be seen that by setting the value of φ to be a proper value, I IMD3 = 0 while optimizing the value of I c can be realized.In this way, the IMD3 is efficiently suppressed while the dispersion-induced power fading effect is solved simultaneously.Thus a linear phasemodulated analog photonic link with the dispersion-induced power fading effect suppressed is realized based on OCB processing.In order to demonstrate the capability of simultaneous suppression of IMD3 and dispersion-induced power fading effect, two typical different working frequencies at 14 GHz and 18 GHz are chosen in our experiment.The transmission response of the PM based analog photonic link without introducing any phase shift to the OCB is shown in Fig. 2. 
As can be seen in Fig. 2, the frequencies of 14 GHz and 18 GHz are around the notch point and the peak point, respectively. In order to obtain proper values of the phase shift φ for linearizing the PM-based photonic link, a simulation is performed to analyze the relationship between the fundamental-to-IMD3 ratio and the phase shift φ based on the derived expressions. The second-order derivative of the propagation constant is set to 20 ps^2/km. As shown in Fig. 3, for different modulation indices m a proper phase-shift value exists that optimizes the fundamental-to-IMD3 ratio, and the proper phase-shift values are almost the same over a wide range of modulation indices. For the working frequency of 18 GHz, the optimized phase-shift values are around 130° or 345°, while for 14 GHz the proper values are around 110° or 250°. A two-tone test is performed to measure the link performance. By applying a two-tone RF signal with frequencies of 14.095 GHz and 14.105 GHz to the PM, IMD3 components appear at 14.085 GHz and 14.115 GHz. Figure 4 shows the measured electrical spectra of the output fundamental signals and their IMD3 for the 14-GHz working condition when the RF power is set to 6 dBm. Without any phase shift applied to the OCB, the fundamental-to-IMD3 ratio is 36 dB (Fig. 4(a)). By introducing a phase shift of 155° to the optical carrier band, the ratio increases dramatically to 61.5 dB (Fig. 4(b)), a 25.5-dB improvement. The SFDR performances for the two conditions are also measured by varying the modulated RF signal power, as shown in Fig. 5. The noise floor is set to -160 dBm/Hz. Without any phase shift applied to the OCB, the SFDR3 is 92.08 dB·Hz^(2/3). With the proposed method and a phase shift of 155°, the SFDR3 is 103.04 dB·Hz^(2/3), an improvement of 10.96 dB. The transmission response is also measured, as shown in Fig. 6. Compared with the results in Fig. 2, the working band around 14 GHz has been shifted away from the notch point, which means that the dispersion-induced power fading effect is suppressed. Thus, simultaneous suppression of the IMD3 and the dispersion-induced power fading effect is successfully realized. Tuning the working frequency band to around 18 GHz, the measured electrical spectra of the output fundamental signals and their IMD3 are shown in Fig. 7. With a two-tone RF signal at 17.995 GHz and 18.005 GHz, IMD3 components appear at 17.985 GHz and 18.015 GHz. The fundamental-to-IMD3 ratio is improved by 29.41 dB by introducing a phase shift of 135° to the OCB. The measured SFDR performances are shown in Fig. 8. Without and with the 135° phase shift applied to the OCB, the SFDR3 is 91.1 dB·Hz^(2/3) and 102.25 dB·Hz^(2/3), respectively, an 11.15-dB improvement. Thus the IMD3 is effectively suppressed and a linear phase-modulated photonic link is realized. Figure 9 shows the transmission response.
Compared with the results in Fig. 2, it can be seen that the working band around 18 GHz remains near the peak point. In our experiments, the proper phase-shift value for the 18-GHz working condition agrees well with the simulated result, while the experimental phase-shift value for the 14-GHz working condition deviates from the corresponding simulated value. This is attributed to the limited processing precision of the optical processor in the actual experiment.

Experimental results and discussions
With the proposed method, a linear phase-modulated analog photonic link with simultaneous suppression of the dispersion-induced power fading effect is realized based on OCB processing. The proper value of φ applied to the OCB depends on the transmission length L of the link, so the optical carrier band processing should be adapted to the transmission length. On the other hand, the system performance can be further improved if the optical processor achieves a higher processing precision. By using an integrated optical processor [17], both the simplicity and the performance of the proposed analog photonic link can be further improved.

Conclusion
A linear phase-modulated photonic link with the dispersion-induced power fading effect suppressed based on optical carrier band processing is proposed and demonstrated. By simply introducing a phase shift to the OCB, the IMD3 and the dispersion-induced power fading effect are effectively suppressed simultaneously. An analytical model is established, and simultaneous suppression of the IMD3 and the dispersion-induced power fading effect is experimentally demonstrated. The SFDR3 is improved by more than 10 dB compared with the traditional phase-modulated photonic link without OCB processing. This approach can be applied to linear long-distance transmission in both commercial and military systems.

Fig. 1. (a) Schematic diagram and (b) operation principle of the proposed phase-modulated analog photonic link based on optical carrier band processing. LD: laser diode, PM: phase modulator, OCBP: optical carrier band processor, SMF: single mode fiber, PD: photodetector, OCB: optical carrier band.

Fig. 2. The experimental transmission response of the PM-based analog photonic link without introducing any phase shift to the optical carrier band.

An experiment based on the setup shown in Fig. 1 is carried out. The wavelength of the laser source (Teraxion, PS-NLL-1550.12-8004) is set to 1550.12 nm. The phase modulator (PM, Eospace PM-DV5-40-PFU-PFU-LV) has a half-wave voltage of 4.0 V and a 3-dB bandwidth of 30 GHz. The two-tone RF signals are generated by a microwave signal generator (Agilent E8267D). The OCBP is realized by using a commercial waveshaper (Finisar 4000s). The SMF for transmission has a length of 20 km. The PD (u2t, XPDV2120R) has a bandwidth of 50 GHz and a responsivity of 0.65 A/W. A vector network analyzer (VNA, R&S ZVA67) is used to measure the transmission response.

Fig. 3. The simulated fundamental-to-IMD3 ratio versus the phase shift φ introduced to the optical carrier band for different modulation indices, with the RF working frequency around (a) 18 GHz and (b) 14 GHz.

Fig. 4. Experimental electrical spectra of the output fundamental signals and their IMD3 for the 14-GHz working condition (a) without and (b) with a phase shift of 155° introduced to the optical carrier band.

Fig. 5.
Experimental SFDR performance of the PM-based analog photonic link (a) without and (b) with a phase shift of 155° introduced to the optical carrier band, for the 14-GHz working condition.

Fig. 6. The experimental transmission response of the PM-based analog photonic link with a phase shift of 155° introduced to the optical carrier band.

Fig. 7. Experimental electrical spectra of the output fundamental signals and their IMD3 for the 18-GHz working condition (a) without and (b) with a phase shift of 135° introduced to the optical carrier band.

Fig. 8. Experimental SFDR performance of the PM-based analog photonic link (a) without and (b) with a phase shift of 135° introduced to the optical carrier band, for the 18-GHz working condition.

Fig. 9. The experimental transmission response of the PM-based analog photonic link with a phase shift of 135° introduced to the optical carrier band.
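As a note on the dynamic-range figures reported above, the SFDR3 can be estimated from a single two-tone measurement using the standard relations OIP3 = P_fund + Δ/2 (with Δ the fundamental-to-IMD3 ratio in dB) and SFDR3 = (2/3)(OIP3 − noise floor). The minimal sketch below assumes an ideal third-order (3:1 slope) nonlinearity and a hypothetical output fundamental power; it is not the measurement procedure used in the experiment, only a quick consistency check.

```python
def sfdr3(p_fund_dbm, fund_to_imd3_db, noise_floor_dbm_hz=-160.0):
    """Third-order SFDR (dB·Hz^(2/3)) from one two-tone measurement,
    assuming an ideal cubic nonlinearity (3:1 slope of the IMD3)."""
    oip3 = p_fund_dbm + fund_to_imd3_db / 2.0          # output IP3 per tone
    return 2.0 / 3.0 * (oip3 - noise_floor_dbm_hz)

# Hypothetical example: -20 dBm output fundamentals, 61.5 dB fundamental-to-IMD3
# ratio, and a -160 dBm/Hz noise floor.
print(f"SFDR3 = {sfdr3(-20.0, 61.5):.1f} dB·Hz^(2/3)")
```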
3,869.6
2017-05-01T00:00:00.000
[ "Physics" ]
Study on the Technology and Properties of Green Laser Sintering Nano-Copper Paste Ink

With the rapid development of integrated circuits, glass substrates are frequently utilized for prototyping various functional electronic circuits due to their superior stability, transparency, and signal integrity. In this experiment, copper wire was printed on a glass substrate using inkjet printing, and the electronic circuit was sintered through laser irradiation with a 532 nm continuous green laser. The relationship between resistivity and microstructure was analyzed after laser sintering at different intensities, scanning speeds, and numbers of passes. The experimental results indicate that the conductivity of the sintered lines initially increases and then decreases with an increase in laser power and scanning speed. At the same power level, multiple sintering runs at a lower scanning speed pose a risk of increased porosity, leading to reduced conductivity. Conversely, when the scanning speed exceeds the optimal sintering speed, multiple sintering runs have minimal impact on porosity and conductivity without altering the power.

Introduction
In recent years, the rapid development of the electronics industry has led to a continuous increase in the integration of electronic devices. Electronic components are evolving towards precision, intelligence, and low cost [1]. These components are widely utilized in flexible wearables, wireless communications, displays, microsensors, and other devices. Glass, as an electrode substrate, offers improved stability, transmission, and signal integrity. It is currently extensively used in electronic components and can also be applied to optical devices, 5G wireless communication, microfluidic chips, display devices, and other fields [2-4]. Therefore, there is growing interest in researching electrode preparation on its surface. Common methods include vapor deposition, lithography, and sputtering [5-7]. However, these processes involve complex preliminary procedures and high equipment costs. In contrast, inkjet-printed circuits have gained attention over the past decade due to their numerous advantages in terms of process and manufacturing costs compared to other deposition techniques.
When fabricating circuits, the use of suitable conductive metal inks is crucial [8]. Noble metals such as silver and gold are commonly employed as functional particles in conductive inks due to their high conductivity and stability in air [9-11]. However, these metals are prohibitively expensive for large-scale applications. Copper presents an appealing alternative due to its relatively high bulk conductivity and lower cost compared to precious metals [12]. The heat-treatment process plays a critical role in the electrical conductivity of the printed ink and in preventing damage to the substrate material. Traditional heat-treatment methods include oven sintering, electric sintering, microwave sintering, and so on [13,14]. Hot sintering is slow, can damage the substrate, and is not suitable for low-melting-point substrate materials. Although electro-sintering is selective and effective, it is a continuous, slow process with low efficiency. Microwave sintering offers high selectivity but requires complex, specially designed heating equipment at a high cost, making it more suitable for preparing oxides. In recent years, laser sintering has become an important research focus for curing nano-conductive inks due to its small sintering spot, the high strength of the sintered material, fast heating and cooling, and narrow heat-affected zone [15-17].

This study investigates the feasibility of using laser-sintered nano-copper paste to form electronic circuits by utilizing a 532 nm continuous green laser to sinter 40 µm-wide nano-copper ink printed on a glass substrate.

Materials
In this paper, the nano-copper paste is prepared by mixing nano-copper powder with an organic solvent and an additive, in which the mass fraction of copper powder is 40 wt.%. Polyvinylpyrrolidone (PVP) powder is added to a beaker with an appropriate amount of anhydrous ethanol and stirred on a magnetic mixer for 3 min until fully dissolved. The dissolved solution and the nano-copper powder are combined in a centrifuge tube at a mass ratio of 6:4, and the mixed slurry is placed in an ultrasonic cleaning tank for 30 min of ultrasonic dispersion so that the nano-copper powder is evenly dispersed in the solution, yielding a uniformly mixed nano-copper paste. In this experiment, the diameter of the nano-copper particles is 60-100 nm (average particle size 80 nm), and the average melting-point temperature given by the manufacturer is 367 °C.
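For readers who want to reproduce the formulation, the 40 wt.% copper loading and the 6:4 vehicle-to-copper mixing ratio quoted above translate directly into batch masses. The helper below is only a bookkeeping sketch; the batch size in the example is hypothetical and solvent losses are neglected.

```python
def paste_batch(total_mass_g, cu_mass_fraction=0.40):
    """Split a paste batch into copper powder and PVP/ethanol vehicle,
    consistent with the 6:4 (vehicle:copper) mass ratio stated above."""
    cu = total_mass_g * cu_mass_fraction
    vehicle = total_mass_g - cu
    return {"copper_powder_g": cu, "pvp_ethanol_solution_g": vehicle}

print(paste_batch(10.0))  # hypothetical 10 g batch -> 4 g Cu powder, 6 g vehicle
```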
The viscosity and surface tension of the nano-copper paste are two important physical properties that strongly affect its application performance. To assess the adhesion of the nano-copper paste, 3M tape was pressed onto the surface of the copper film and quickly pulled off, as shown in Figure 1a. The adhesion of the paste to the substrate was evaluated by observing the spalling of the coating within the grid. After the 3M tape test, the spalled area of the sample was 0.358%; the adhesion between the copper paste and the substrate reached the 4B standard and met the application requirements. The contact angle was used to characterize the surface tension of the nano-copper paste. As shown in Figure 1b,c, the contact angle between the nano-copper paste and the substrate was 59° without PVP; after PVP was added, the contact angle decreased to 44°, the contact area between the paste and the surface increased, and the paste wetted the substrate better.
Preparation of Samples
Inkjet printing was used to continuously deposit copper nano-ink onto a glass substrate, forming copper paste lines measuring 2 cm × 250 µm × 40 µm. Subsequently, a three-dimensional digital microscope was employed to perform contour scanning of the printed ink lines; the scanned image is shown in Figure 2. As a printing ink, the copper nanoparticles should be well dispersed in the solvent, and their adhesion should meet the 4B standard. A JETLAB 4 nanomaterial deposition inkjet printing system (MICROFAB Technologies, Plano, TX, USA) was used.
Experimental Method
In this study, the sintering of the nano-copper paste is carried out in air. In order to prevent oxidation of the nano-copper, PVP is added to the paste. PVP not only acts as a dispersant but also coats the nano-copper particles and protects them from oxidation. Its decomposition products have a certain reducing ability during sintering and can reduce copper oxides such as cuprous oxide and copper oxide back to copper. During the experiment, the prepared sample was mounted on an X-Y translation stage, a 532 nm continuous green laser with a maximum output power of 15 W was used as the light source, a CCD camera connected to a computer was used to observe the sintering process, and the protective gas was turned on. Sintering experiments were carried out with different laser parameters (laser power density, scanning speed, number of scans), and the effects of laser power, scanning speed, and number of sintering passes on the porosity and conductivity of the sintered lines were compared. The schematic diagram of the experimental setup is shown in Figure 3. After focusing, the spot diameter is 50 µm, and the laser scanning speed is controlled by the X-Y stage.

Surface Characterization
Surface morphology analysis of the copper wire after laser sintering was conducted using a scanning electron microscope (ZEISS EVO25, Jena, Germany). The main SEM parameters were EHT = 20 kV, WD = 8.00 mm, Mag = 1.92 kX, and Signal A = SE1. Three-dimensional topography measurements of the inkjet-printed nano-copper wires were performed using a 3D digital microscope (VHX-7000) to inspect the width and thickness of the printed lines. An XRD test was carried out with an X-ray diffractometer (D8 DISCOVER PLUS, Bruker, Karlsruhe, Germany) to analyze the phases on the sintered line surface.
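Since power, scanning speed, and number of passes are varied independently in what follows, it can help to compare the runs through a single crude dose metric. The sketch below computes the areal energy density P·n/(d·v) for the focused 50 µm spot; this ignores absorptivity and heat conduction, so it is only a rough way to rank the parameter combinations, not a model used in the study.

```python
import itertools

SPOT_D = 50e-6  # focused spot diameter from the setup above (m)

def areal_dose_j_per_cm2(power_w, speed_mm_s, passes=1, spot_d=SPOT_D):
    """Approximate deposited energy per unit area for a scanned CW laser:
    dose = P * passes / (d * v), returned in J/cm^2."""
    v = speed_mm_s * 1e-3                    # mm/s -> m/s
    return power_w * passes / (spot_d * v) / 1e4

powers_w = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
speeds_mm_s = [10, 20, 50, 100, 150, 200]
for p, v in itertools.product(powers_w, speeds_mm_s):
    print(f"P = {p:3.1f} W, v = {v:3d} mm/s -> {areal_dose_j_per_cm2(p, v):7.1f} J/cm^2")
```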
The resistance of the copper wire was measured using a digital multimeter and a four-point probe measurement device. The electrical conductivity σ was calculated from the length and cross-section of the continuous line using the following equation:

σ = L / (R · S)        (1)

where R represents the measured resistance, L denotes the length of the measured line, and S indicates the cross-sectional area of the copper wire.

The porosity of the SEM images of the laser-sintered lines was measured using ImageJ software. The specific procedure is as follows: the Set Scale function in ImageJ is used to measure and calibrate the scale bar on the scanning electron microscope image. The RGB image is then converted to a 32-bit image for easier differentiation of gray-level variations. A rectangular box is used to select the region for calculation, and the threshold function is employed to adjust the threshold of the selected area until all pores in the image are marked. The calculated pore area is denoted as S1, while the area of the entire selected region is denoted as S. The porosity P can then be obtained using the following formula:

P = S1 / S × 100%        (2)

Results and Discussion
The Influence of Different Laser Parameters on the Sintering Process of the Lines
Influence of Laser Power on Sintering Lines
The coated sample was placed on the X-Y stage and exposed to the spot of the 532 nm continuous green laser. The stage speed was set to 50 mm/s, with a single pass per line. After each sintering run, the laser power was adjusted so that samples were sintered at 0.5 W, 1 W, 1.5 W, 2 W, 2.5 W, and 3 W, respectively. The microscopic morphologies of the sintered lines at the different laser powers are shown in Figure 4. Figure 4a shows the sintered surface morphology at a laser power of 0.5 W. After laser irradiation, the nanoparticles on the surface of the copper paste layer are tightly arranged, with smaller particles condensing around larger ones, because the sintering temperature at this power is low. At 0.5 W, the polymer additives are removed and the nanoparticles (NPs) are in close contact, with fine black regions at the interfaces representing pores formed by the particle arrangement during sintering. When the laser power is increased to 1 W, as shown in Figure 4b, the particles are arranged more closely and sintering necks begin to form between the smaller-diameter copper nanoparticles, because the higher laser power provides a greater driving force for sintering. Although sintering occurs at this power, the result is suboptimal, as many pores are still present. As the power increases to 1.5 W and 2 W, the pores at the interfaces gradually shrink and disappear. Plastically deformed copper nanoparticles fill the voids, and the sintering necks form continuous veins that intertwine into a network-like interconnected structure. This is attributed to the increase in the surface temperature of the sintered line with laser power, which promotes Ostwald ripening and further growth of the necks between nanoparticles, enhancing densification (see Figure 4c,d). When the laser power is further increased to 2.5 W and 3 W, the copper nanoparticles sinter into blocks that are fully fused together, and the original nanoparticles are no longer visible. The resulting surface exhibits a loose network structure with large pores, indicating decreased densification. At this point the sintering temperature is likely far above the melting point of the copper nanoparticles. Compared with the earlier micrographs, it is evident that the rapidly fused and agglomerated copper
nanoparticles form much larger blocks within a short time (as shown in Figure 4e,f). The microscopic morphology of the sintered lines is a critical factor influencing their electrical conductivity [18]. In order to further illustrate the relationship between the sintered surface morphology of the copper paste and its electrical properties, ImageJ software was used to process the SEM images of the sintered samples. Based on the differences in brightness within the images, pores were filled with red pixels. The processed images are shown in Figure 5. Using ImageJ 1.54j, the porosity of the processed images was calculated to determine the area percentage of pores in each image. The resistivity values of the sintered lines at the different laser powers were measured at room temperature using the four-point probe method, with three measurements taken for each sample and
averaged. Subsequently, the conductivity was calculated using Equation (1). The relationships between porosity, conductivity, and laser power are illustrated in Figure 6, which shows the porosity and conductivity of the nano-copper paste sintered lines at varying laser powers for a fixed laser scanning speed of 50 mm/s at ambient temperature. The data reveal that the conductivity of the sintered lines first increases and then decreases with increasing laser power. Microscopic analysis of the SEM images shows that this trend arises because, at low laser power, the sintering temperature is insufficient to drive the interconnection of nanoparticles, so the conductive-phase particles lack direct contact. Nano- or even micro-scale gaps remain between particles, making it difficult to form conductive pathways. In this regime the conductivity relies primarily on the tunneling effect, in which only a small fraction of charge carriers traverse the barriers while the rest are reflected, resulting in low conductivity. As the power increases, noticeable neck growth is observed between copper nanoparticles, with the necks filling the inter-particle pores and establishing a percolation network for electron flow [19]. When the laser power reaches 2 W, the conductivity of the sintered line peaks at 3.46 × 10^6 S/m, and the porosity of the sintered surface reaches its minimum of 11.15%, as shown in Figure 6. However, when the laser power increases further, as evident in Figure 5, the originally fine and uniform red regions gradually transform into large, randomly distributed red areas, so the porosity of the sintered surface increases and the conductivity drops. This is because, when the laser power exceeds a certain threshold, the surface temperature of the copper paste becomes excessively high, imparting more energy to the nanoparticles and intensifying the sintering process. Particles coarsen and fuse, their size increases, and visibly grown pores form. These large pores reduce the density of the film and disrupt the conductive channels within the material, increasing the resistivity of the sintered lines.
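The porosity and conductivity numbers above come from the ImageJ thresholding procedure and Equations (1) and (2). The short sketch below reproduces both calculations in Python for a grayscale SEM image array and the printed-line geometry (2 cm × 250 µm × 40 µm); the threshold value and the resistance used in the example are hypothetical.

```python
import numpy as np

def porosity_percent(gray_img, threshold):
    """Area fraction of pixels darker than `threshold` in the selected region,
    mirroring the ImageJ threshold-based pore measurement (Eq. (2))."""
    return 100.0 * np.count_nonzero(gray_img < threshold) / gray_img.size

def conductivity_s_per_m(resistance_ohm, length_m, width_m, thickness_m):
    """sigma = L / (R * S) with S the line cross-section, as in Eq. (1)."""
    return length_m / (resistance_ohm * width_m * thickness_m)

# Hypothetical resistance of 0.6 ohm for the 2 cm x 250 um x 40 um line:
print(f"{conductivity_s_per_m(0.6, 0.02, 250e-6, 40e-6):.2e} S/m")  # ~3.3e6 S/m

# Porosity of a synthetic 8-bit image with uniformly random gray levels:
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512))
print(f"{porosity_percent(img, threshold=30):.2f} %")  # ~11.7 % expected here
```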
Effect of Scanning Speed on Sintered Circuit Lines
Laser sintering is a rapid heating process, and the scanning speed determines the sintering time, which in turn governs the amount of laser energy absorbed by the material. The laser scanning speed is therefore also a critical parameter for sintering quality. With the laser power fixed at 2 W and a single sintering pass, experiments were conducted at laser scanning speeds of 10 mm/s, 20 mm/s, 50 mm/s, 100 mm/s, 150 mm/s, and 200 mm/s. The surface micro-topographies of the sintered lines at the different scanning speeds are shown in Figure 7. The SEM images show that when the nano-copper paste is sintered at 10 mm/s, the copper nanoparticles on the surface of the sintered line have fused into blocks and are completely sintered together. This is because, at a low scanning speed and the same laser power, the sintering time is longer and the sintering temperature rises as heat accumulates. The copper nanoparticles melt, combine into larger particles, and form uneven gaps after cooling and solidification. As shown in Figure 8, an oxidation peak appears in this case, which may be due to thermal decomposition of the PVP covering the copper nanoparticles during sintering: the slow scanning speed leaves the exposed copper nanoparticles in air for longer, producing more copper oxide, and the alcohols and acids released by PVP decomposition are not sufficient to completely reduce the generated copper oxide back to copper. The oxidation peak disappears as the speed increases; this may be because the higher scanning speed shortens the exposure of the copper nanoparticles to air, less copper oxide is generated, and the alcohols and acids from PVP decomposition can reduce it completely back to copper. When the scanning speed increases to 50 mm/s, the intensity of sintering caused by heat accumulation decreases and the sintering quality improves. The removal of the organic material allows plastically deformed nano-copper to fill the gaps, resulting in a more densely sintered surface, as shown in Figure 7c. As the speed further increases to 100 mm/s and 150 mm/s, the laser-induced heating promotes neck formation and a network-like structure begins to spread. However, although this spreading dominates, a small fraction of larger-diameter particles do not fully form sintering necks, as illustrated in Figure 7d,e. When the speed reaches 200 mm/s, large particles form through the migration and agglomeration of smaller particles. Figure 7f shows particle growth, with coarsened particles covering the sintered line surface. There is no direct neck
formation or densification from one particle's surface to another; instead, the surface is covered primarily by large particles. The experiment also investigated the impact of the laser scanning speed on the surface density and electrical properties of the sintered copper paste. ImageJ software was employed to analyze the porosity of the SEM images based on differences in brightness, and the electrical conductivity of the sintered lines at the different scanning speeds was measured and calculated using the four-point probe method. Figure 9 shows the variation of porosity and electrical conductivity of the sintered lines with laser scanning speed at a fixed laser power of 2 W. It was observed that, as the scanning speed increased, the porosity of the sintered lines generally decreased, whereas the electrical conductivity first increased and then decreased. Analyzing these data together with the SEM images, it is evident that lower laser scanning speeds lead to a longer sintering time and overly intense sintering due to heat accumulation, similar to the high-power results discussed in the previous section. This produces significant porosity within the copper film, disrupting the formation of conductive pathways and strongly degrading the electrical conductivity. When the scanning speed increases to 50 mm/s, Figure 9 shows that the porosity of the sintered line decreases the most and the degree of densification is high. Apart from a few isolated large-diameter copper nanoparticles, sheet-like interconnected mesh structures spread out and dominate; the formation of these necks and the accompanying grain growth create direct conductive paths, the contact between particles becomes the main factor determining the conductivity, and the more electron paths there are, the better the conductivity.
As the scanning speed continues to increase, the porosity of the sintered line surface decreases slightly and remains relatively stable, but the electrical performance of the sintered line deteriorates. Analysis of the SEM images shows that excessively high scanning speeds reduce the heat accumulated on the sintered line surface, leading to under-sintering and fewer electron paths; the suppression of porosity growth is partly due to coverage by under-sintered particles. In particular, when the scanning speed increases to 200 mm/s, the neck-like structures formed between small particles exist mainly as point contacts, and the surface is predominantly covered by larger nanoparticles with no obvious densification trend. Consequently, the electrical conductivity decreases.

Influence of Sintering Times on Sintering Lines
Because laser sintering involves rapid heating and cooling, a single pass may not achieve the desired densification, especially at low laser energy density. Therefore, further study is needed to determine whether multiple sintering passes can improve the sintering quality. This section investigates the impact of the number of laser sintering passes on the surface morphology and electrical conductivity of the sintered line. The surface morphology and electrical conductivity after multiple sintering passes are studied at a fixed laser power of 2 W with scanning speeds of 20 mm/s, 50 mm/s, 100 mm/s, and 150 mm/s.
The microstructure of the lines after multiple sintering passes at different scanning speeds is shown in Figure 10. The micrographs show that, at a scanning speed of 20 mm/s, the surface topography of the sintered line becomes coarser as the number of laser scans increases. This effect is especially pronounced after four consecutive passes, which produce significantly larger pores on the sintered surface than a single pass. Evidently, at low scanning speeds the heat accumulated over multiple scans promotes pore enlargement. At 50 mm/s, the laser-induced heating drives neck formation during the first one to two passes, and the sheet-like mesh structure begins to spread, dominating the diffusion and increasing the degree of densification; after three to four passes, the heat gained by the nanoparticles accumulates, causing the particles to melt further and re-solidify. At 100 mm/s and 150 mm/s, necks form between small copper nanoparticles and become interconnected after a single scan, but many un-sintered particles remain both on the surface of the copper paste layer and within the film. With more scans, although the network structure spreads out, the under-sintering does not improve significantly and densification remains inadequate. The relationship between the porosity and conductivity of the sintered line and the number of sintering passes at the different speeds is illustrated in Figure 11. At 20 mm/s, the porosity of the sintered surface increases from 14.61% after a single pass to 18.42% after four passes, accompanied by a slight decline in conductivity. The corresponding SEM images in Figure 10 show that, at low scanning speeds, heat accumulates as the number of scans increases, causing further melting and re-solidification of the previously formed particles and producing coarse lines. The increase in particle size leads to higher porosity, and the large pores disrupt the conductive channels of the material, increasing the resistance. At 50 mm/s, the laser-induced heating drives neck formation during the first one to two passes and the sheet-like mesh structure spreads, increasing densification; at three to four passes, the accumulated heat melts and re-solidifies the particles again, and the resulting particle coarsening increases the porosity and hinders electron transport, so the conductivity first rises and then falls after two passes. When the speed is further increased to 100 mm/s, the data in Figure 11b show that the porosity of the sintered surface stabilizes at around 10% and is not significantly altered by multiple scans. The electrical conductivity fluctuates within a small range as the number of scans increases, which may be attributed to the porosity of the sintered surface remaining stable because the higher scanning speed reduces the impact of energy accumulation.
When the speed is 150 mm/s, the porosity of the sintered line increases when the laser scan is repeated two to three times, yet the conductivity of the sintered line increases slightly. The corresponding micrographs in Figure 10 show that the surface porosity of the sintered line increases after multiple passes at this high scanning speed, but the heat accumulated over the repeated passes allows interconnecting necks to form between the small nanoparticles, increasing the number of conductive channels. Therefore, at constant laser power, when the laser scanning speed is low the energy accumulated over multiple scans produces a minor increase in the surface porosity of the sintered line; as the scanning speed increases, the influence of multiple scans on the sintering result diminishes because the impact of energy accumulation is reduced. As shown in Figure 11b, as the number of laser scans increases at the different scanning speeds, heat accumulates on the sintered surface, the previously formed particles melt and re-solidify, and the coarsening of the particles reduces the degree of densification. Therefore, the porosity of the sintered line increases with the number of laser sintering passes.
Evolution Process of the Microstructure during Sintering of Nano-Copper Paste
Based on the analysis of the surface morphology of the sintered lines under the various laser process parameters in Section 3.1, Figure 12 illustrates the evolution of the microstructure during laser sintering of the nano-copper paste. The sintering mechanism can be summarized in the four steps depicted in Figure 12: (1) Prior to sintering, the nano-copper particles are enveloped by an organic coating layer and exist independently of each other. (2) When a relatively low laser energy density is applied to the surface of the copper paste, the residual organic matter within the paste decomposes and the nano-copper particles become closely arranged. (3) As the temperature increases, the nanoparticles start to coalesce, with necks forming preferentially between smaller particles. This process is driven by surface diffusion, which minimizes the surface area and promotes densification; aggregation between particles leads to the formation of particle clusters. (4) At high energy density, the elevated temperature accelerates particle growth and plastic deformation becomes dominant. The deformed particles fill the pores, reducing the pore size and increasing the density.
Conclusions
This paper investigates the influence of laser process parameters, including laser power, scanning speed, and number of sintering passes, on the surface morphology and electrical properties of sintered nano-copper paste. The experimental results show that the conductivity of the sintered line first increases and then decreases as the laser power and the scanning speed increase. With a laser power of 2 W, a scanning speed of 50 mm/s, and two sintering passes, the highest conductivity is achieved, with a best average value of 3.46 × 10^6 S/m. Furthermore, the microstructural evolution of the nano-copper paste during laser sintering is summarized from the experimental observations. The main conclusions are as follows: (1) This study analyzes the effect of laser power on the sintered lines. The conductivity of the sintered lines first increases and then decreases with increasing laser power. With the laser scanning speed fixed at 50 mm/s, at low power (<1 W) the organic matter in the copper paste is removed and the nanoparticles begin to make contact and arrange closely; after laser irradiation the nanoparticles (NPs) exist mainly in point contact, indicating an initial stage of sintering. As the power increases (≥1 W), the nano-copper particles interconnect and form necks. With further power increases, the sintering temperature rises and Ostwald ripening occurs, the necks between particles grow, the densification of the sintered lines is enhanced, and the conductivity improves. However, when the power exceeds the optimal sintering level, the intense heat melts the agglomerated particles, which solidify into blocks; combined with the thermal stress generated by the high temperature, this increases the spacing and porosity, hindering electron transport and thereby increasing the resistance.
(2) The impact of the laser scanning speed on the surface morphology and electrical properties of the sintered lines is examined. At a constant laser power of 2 W, the best conductivity is obtained at a scanning speed of 50 mm/s. Increasing the scanning speed further decreases the conductivity because the shorter sintering time leads to under-sintering of the copper paste layer, while the porosity is affected relatively little owing to the coverage of under-sintered particles. Conversely, scanning speeds below the optimum expose the nano-copper paste to high temperatures for an extended period, posing a risk of oxidation during sintering, and the prolonged heat accumulation generates high temperatures that coarsen the surface of the sintered lines.
(3) The effect of the number of laser sintering passes on the surface morphology and electrical properties of the sintered lines is also investigated. With the laser power fixed at 2 W, multiple sintering passes are analyzed at scanning speeds from 20 mm/s to 200 mm/s. At lower scanning speeds, multiple sintering passes at the same power increase the risk of porosity and reduce the conductivity. However, as the scanning speed increases (≥50 mm/s), multiple sintering passes have minimal impact on porosity and conductivity at the same power because less energy accumulates. (4) Finally, the microstructural evolution of the sintered copper nanoparticles is summarized from the experimental observations.

Figure 1. (a) 3M tape test. (b) Contact angle between the nano-copper paste and the substrate without PVP. (c) Contact angle between the nano-copper paste and the substrate after adding PVP.
Figure 6. Relationship between porosity, conductivity of the sintered lines, and laser power.
Figure 8. XRD patterns of samples sintered at different scanning speeds.
Figure 9. Relationship between porosity and electrical conductivity of the sintered lines versus laser scanning speed.
Figure 10. Microscopic morphology of lines sintered multiple times at different scanning speeds.
Figure 11. At different scanning speeds, (a) the relationship between electrical conductivity and number of sintering passes; (b) the relationship between porosity and number of sintering passes.
Figure 12. Schematic diagram of the sintering mechanism of copper powder. (a) Un-sintered; (b) organic decomposition and particle contact; (c) neck formation and diffusion; (d) densification and growth.
10,500
2024-08-31T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Analysis of Water Droplet Interaction with Turbulent Premixed and Spray Flames Using Carrier Phase Direct Numerical Simulations ABSTRACT Carrier-phase direct numerical simulations (DNS) of n-heptane/air combustion with water droplet injection are reported in this paper. The influence of water droplet injection has been analyzed for statistically planar gaseous premixed flames and spray flames where the fuel is supplied in the form of monodisperse fuel droplets. The effects of water injection in both types of flames are compared, including their sensitivities on the initial diameter of water droplets. The simulations are based on an Eulerian-Lagrangian-Lagrangian approach to consider the multi-phase interaction and the simultaneous effects of n-heptane and water droplets. It is shown that the two types of flames are different in terms of several aspects. In particular, the spray flames are characterized by lower temperatures, and their propagation speed is lower than for premixed flames. The main reason for these differences lies in the gaseous equivalence ratio experienced by the reaction zone. For the cases investigated here, where the overall equivalence ratio is unity, the spray combustion occurs under predominantly fuel-lean conditions. Furthermore, the flame temperature is influenced by the initial diameter of water droplets, which affects the strength of thermal expansion within the flame. This in turn influences the evaporation characteristics of both fuel and water droplets due to different residence times within the flame. Although the effects of water droplets on the combustion mode are dependent on the relative position within the spray flame, the net effect is a partial shift from non-premixed to premixed mode. At the same time, the spray combustion occurs under relatively fuel-richer conditions with water injection and these trends appear consistently in laminar and turbulent flows. Introduction Nowadays, increasingly stringent rules about pollutant emissions from propulsion and power generation processes create a new challenge and the necessity to implement new techniques to respect these newly imposed limits. In that sense, the direct water injection represents a suitable technique that aids in reducing the NO x and other pollutants that arise from hydrocarbon/air combustion (Kotob, Lu, and Wahid 2020;Shahpouri and Houshfar 2019;Sun et al. 2022). This technique aims to decrease the maximum combustion temperature, which is the main reason for NO x formation. The injection of water inside internal combustion engines was already applied to improve power output for a limited amount of time in internal combustion engines due to increased mass flow rate. Several experimental studies were performed to analyze both the effects on pollutant emissions and performance of Diesel engines with water injection (Arabaci et al. 2015;Mingrui et al. 2017;Tesfa et al. 2012). It was shown that the decrease of peak temperature leads to reasonable control of NO x emissions. In contrast, the effect on other pollutant emissions depends on the engine's operating point regarding the power and fuel efficiency. The increase of power output and fuel efficiency due to water injection was demonstrated also for gas turbine combustors by Lellek (2017). 
This study focuses on the different physical effects (cooling and dilution) of liquid water injection on conventional premixed combustion as well as spray combustion where fuel is supplied in the form of liquid droplets, considering also the influence of the initial diameter of water droplets. It is already known from previous numerical and experimental analyses (Hayashi, Kumagai, and Sakai 1977;Nicoli, Denet, and Haldenwang 2014;Nicoli, Haldenwang, and Denet 2016) that purely gaseous and multi-phase flames show velocity differences also for identical overall equivalence ratio � ov ¼ � l þ � g , where � l is the liquid equivalence ratio and � g is the gaseous equivalence ratio. The differences are due to multiple phenomena in the two types of flame that lead to different structures. While the premixed gaseous flame is a well-known configuration, the spray flames simultaneously exhibit both premixed and non-premixed combustion modes and the presence of other phenomena like fuel-supplying through liquid phase evaporation that generates locally non-homogeneous conditions. In the previous work of Reveillon and Vervisch (2005), it was demonstrated that spray combustion can exhibit different regimes depending on three nondimensional parameters related to the overall equivalence ratio, mean inter-droplet distance and the ratio between flame and evaporation timescales. However, the flame propagation behavior differs in these regimes due to the differences in gaseous equivalence ratio. This quantity is determined from several parameters, like the initial droplet diameter, the residence time or the initial liquid equivalence ratio, as numerically demonstrated by Neophytou and Mastorakos 2009) through the simulation of laminar one-dimensional flames in fuel droplet mist. It is demonstrated that in some cases � g in fuel droplet combustion can fall in a range of values that facilitate higher burning velocity than the corresponding premixed flame for identical � ov . This behavior originates not only because � g can get closer to stoichiometric conditions than the overall equivalence ratio but also due to the generation of radicals from hydrocarbon pyrolysis as suggested by Neophytou, Mastorakos, and Cant (2012) through the simulation of igniting flames with a reduced chemical mechanism. The gaseous equivalence ratio affects not only local consumption rate of reactants but also the flame surface area and their combined effects are reflected in the overall burning rate as demonstrated by Ozel-Erol et al. (2019) who analyzed the evolution of spherically expanding flames propagating into fuel mists at different overall equivalence ratios and initial droplet diameters. It was also shown in previous studies (Hayashi, Kumagai, and Sakai 1977;Nicoli, Haldenwang, and Denet 2019) that the disturbances induced by fuel droplets on the flame surface can trigger the Darrieus-Landau instability, hence increasing the flame wrinkling and overall burning rate in this way. Such effects of droplet-flame interaction are not observed in the present study because the parameters considered here are out of the range in which these phenomena are dominant. Water evaporation has three main effects on the combustion process: dilution of reactants, latent heat extraction and density increase in the system. In this work, effects related to chemical kinetics (such as the comparably high third-body efficiency of water vapor) are not taken into account because they are considered higher order effects compared to the physical effects. 
The thermal effects of water injection make it suitable for different applications related to internal combustion engines and power generation but also related to plant safety and fire suppression. Thomas, Jones, and Edwards (1991) experimentally demonstrated the effectiveness of liquid water for explosion suppression, showing that flame quenching is not only due to the heat sink effect but also due to the increase in energy and mass transfer caused by droplet breakup. In recent years, Zhang et al. (2014) tested the ability of water micro-particles to decrease the pressure buildup in closed vessel explosions at different fuel to air ratios. However, if designed inappropriately, the water injection can also enhance flame propagation due to the triggering of fluid-dynamic instability. Nicoli, Haldenwang, and Denet (2019) numerically showed that the burning velocity is not necessarily attenuated in the presence of water mist. Arias et al. (2011) extrapolated a criterion to predict if quenching in diffusion flames occurs due to the simultaneous effects of stretching and water dilution. Several experimental studies on multi-phase combustion with water injection are present in the literature (Merola, Irimescu, and Maria Vaglieco 2020;Takasaki, Fukuyoshi, and Abe 1998). Since experiments are often limited to a global characterization of combustion systems, DNS is of paramount importance to reach a better understanding of the local processes underlying droplet-flame interaction. To the best of the authors' knowledge, numerical studies that analyze the simultaneous injection of two liquid substances with different thermo-physical properties (cf. Table 1) in the combustion context are currently not available in the literature. For this purpose, a two-way coupled Eulerian-Lagrangian-Lagrangian approach is adopted to analyze the effects of simultaneous interaction of water and fuel droplets with the flame. The comparison between premixed combustion and spray combustion with water injection is at the focus of this paper. The manuscript is organized as follows. The 2 nd section introduces the mathematical model and its implementation is described in the 3 rd section. The 4 th section discusses the results and in the 5 th section, the conclusions are finally presented. Mathematical background Although it is possible to perform three-dimensional DNS with detailed chemical mechanisms (Chen et al. 2009;Klein et al. 2020), this results in high computational costs, making it unattractive for extensive parametric studies. Furthermore, it has been shown that flame propagation statistics from simple chemistry DNS are in good agreement with detailed chemistry and transport data (Keil et al. 2021). For this reason, a modified single-step irreversible Arrhenius chemical reaction is used to model the chemistry, with the following general formulation: Fuel þ s Oxidizer ! ð1 þ sÞ Products (1) From equation 1, the fuel reaction rate is computed by an Arrhenius-type expression: where ρ is the gas density, B � is the normalized pre-exponential factor, Y F and Y O are fuel and oxygen mass fraction, β is the Zel'dovich number, α is a heat release related parameter α ¼ τ=ð1 þ τÞ, T ¼ ðT À T 0 Þ=ðT ad À T 0 Þ is the normalized temperature, where T is the dimensional temperature, T ad is the adiabatic flame temperature of the stoichiometric mixture, and T 0 is the unburned gas temperature. 
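The Arrhenius-type expression referenced above is not typeset in the extracted text. As a rough illustration only, the following sketch assumes the standard normalized single-step form commonly used with these definitions of β, α and the normalized temperature; the function name and the values in the example call are placeholders, not the paper's parameters.

```python
import numpy as np

def fuel_reaction_rate(rho, Y_F, Y_O, T_hat, B_star=1.0, beta=6.0, alpha=0.87):
    """Sketch of a normalized single-step Arrhenius rate (assumed form):
        w_F = rho * B* * Y_F * Y_O * exp(-beta (1 - T_hat) / (1 - alpha (1 - T_hat)))
    with T_hat = (T - T0) / (Tad - T0) and alpha = tau / (1 + tau)."""
    return rho * B_star * Y_F * Y_O * np.exp(
        -beta * (1.0 - T_hat) / (1.0 - alpha * (1.0 - T_hat)))

# Illustrative call with placeholder values (not the paper's actual parameters)
print(fuel_reaction_rate(rho=1.1, Y_F=0.06, Y_O=0.22, T_hat=0.8))
```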
In this work, the unburned temperature T 0 is taken as 300K, which yields a heat release factor of τ ¼ ðT ad À T 0 Þ=T 0 ¼ 6:54 for stoichiometric n-heptane/air combustion. The heat of combustion, the Zel'dovich number and the activation energy are taken as functions of the local equivalence ratio as proposed by Fernández-Tarrazo et al. (2006). This model was validated in previous work (Hasslberger et al. 2021), showing that the model captures well the laminar burning velocity trend as a function of the steam concentration. More advanced chemical models, considering a higher number of species and elementary reactions than the modified single-step model adopted here, are not always able to capture the correct burning velocity variation with equivalence ratio. The available skeletal models for n-heptane that have this ability are based on 110 species and 1170 reactions (Zeuch et al. 2008), for example. Other computationally bearable reduced models, e.g. based on a two-step chemistry (Bibrzycki and Poinsot 2010;Turquand d'Auzay et al. 2019), do not show any advantage compared to the current one since they require a similar degree of tuning to reproduce the qualitative trends of the burning velocity. Inspired by the previous work of Reveillon and Vervisch (2005), a two-way coupled Eulerian-Lagrangian-Lagrangian approach is used. The liquid phase is treated by a Lagrangian approach, in which the individual droplets of water and n-heptane are tracked along their trajectories and considered as point particles. The following equations describe their dynamics: The variables x d , ũ d , a d and T d are droplet position, velocity, diameter and temperature respectively. Here, L v is the latent heat of vaporization and C g p is the gaseous specific heat at constant pressure. In equation 3, the hypothesis of infinite thermal diffusivity is made, i.e. the droplet temperature is assumed as uniform. The relaxation times τ u d , τ p d and τ T d in the system of equations are calculated through the following formulas: where the timescales are expressed as functions of droplet density ρ d and corrected drag The timescales also require the Schmidt number Sc, the Spalding number To evaluate the Spalding number, the gaseous species mass fraction at the droplet surface Y S i is necessary (index i represents n-heptane, F, or water, W) and it is calculated from equilibrium conditions via the partial pressure P S i obtained by the Clausius-Clapeyron equation: Here, W g and W l are the molar mass of the gaseous phase and liquid phase, P ref is the reference pressure and T S ref is the boiling temperature at the reference pressure. The gaseous phase equations can be written in the following general form: ω φ is a source term related to chemical reaction, while _ S g is a source or sink term of the conservation equation, like pressure gradients in the momentum equation, and _ S φ incorporates the carrier phase-droplet interaction. This last term is computed as where the φ d values are computed at the droplet position x d via tri-linear interpolation from the eight surrounding grid points. The dynamic viscosity is assumed constant and all other diffusivities are derived from the constant Schmidt number and Prandtl number (Pr ¼ Sc ¼ 0:7). The Lewis number in this simulation is assumed to be unity for all the chemical species considered. The gaseous phase is considered a perfect gas, i.e. 
the specific heats are constant and their ratio is Numerical implementation A three-dimensional compressible DNS code, SENGA+ (Jenkins and Cant 2002;Schroll et al. 2009;Wandel and Mastorakos 2009), is used. The spatial discretization is performed by a 10 th order central finite difference scheme for internal grid points that degrades to a one-sided 2 nd order scheme at non-periodic boundaries. Explicit time advancement is realized by a 3 rd order low-storage Runge-Kutta method (Wray 1990). The computational domain is the same as in previous spray combustion DNS studies (Hasslberger et al. 2021;Malkeson et al. 2020;Wacks et al. 2016), i.e. a rectangular volume of dimensions 30δ st � 20δ st � 20δ st , where δ st is the thermal thickness of the unstretched laminar premixed stoichiometric n-heptane/air flame: The domain is discretized by 384 � 256 � 256 grid points such that more than 10 points are used to resolve δ st and about 2 points to resolve Kolmogorov's length scale η. As in the several earlier publications using SENGA+ Hasslberger et al. 2021;Malkeson et al. 2020;Wacks et al. 2016), both the inner flame structure as well as the turbulence characteristics are accurately captured in this way. It has been found that coarsening the mesh by a factor of 2.0 did not change the values of the unstretched laminar burning velocity of the stoichiometric n-heptane/air mixture S L and the thermal flame thickness of the stoichiometric mixture δ st appreciably ( < 1%). The boundary conditions are periodic in transverse directions, y and z, while boundaries are partially non-reflecting in longitudinal direction x, which are via the Navier-Stokes Characteristic Boundary Conditions (NSCBC) technique. The turbulent flow is initialized by an incompressible velocity field that exhibits a Batchelor-Townsend turbulent kinetic energy spectrum (Batchelor and Townsend 1948) with intensity u 0 =S L ¼ 4:0 and normalized integral length scale L 11 =δ st ¼ 2:5. The same field is used as an inlet velocity condition to maintain turbulence at the inlet during the simulation. The initial thermo-chemical conditions are taken from one-dimensional laminar simulations carried out through COSILAB using the model by Neophytou and Mastorakos (2009). In this work, different spray combustion simulations are compared with premixed combustion cases with the same overall equivalence ratio � ov ¼ 1, overall water loading 1 and varying water droplet initial diameter a d (cf. Table 2). The droplets considered here are modeled as point particles because their dimensions with respect to the Kolmogorov length scale are a d =η ¼ 0:06, 0:12 for a d =δ st ¼ 0:02, 0:04, respectively. Hence, the initial diameter with respect to grid spacing is a d =Δx ¼ 0:25 and 0:5. The ratio between initial droplets diameter and grid size is comparable with previous similar analysis (Fujita et al. 2013;Neophytou et al. 2010;Sreedhara and Huh 2007;Wang and Rutland 2005). Since the droplet mist is sufficiently diluted, a two-way coupling approach is adopted without considering particle-particle interactions. The particle number where D 0 is the gas diffusivity, and this simulation time corresponds to 3 eddy turnover times t eddy ¼ L 11 =u 0 . All these parameters are similar to previous spray combustion DNS studies (Ozel-Erol et al. 2019;Fujita et al. 2013;Hasslberger et al. 2021;Pera, Chevillard, and Reveillon 2013;Wang and Rutland 2005). 
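As a small sanity check of the resolution figures quoted above, the snippet below recomputes the grid spacing and the number of points available per thermal flame thickness for the stated domain and grid, together with the eddy turnover time implied by u'/S_L = 4.0 and L_11/δ_st = 2.5. The variable names are ours; the script only reproduces arithmetic already given in the text.

```python
# Grid-resolution bookkeeping for the stated DNS setup (values taken from the text)
domain = (30.0, 20.0, 20.0)      # domain size in units of delta_st
grid = (384, 256, 256)           # grid points per direction

dx = [L / N for L, N in zip(domain, grid)]      # spacing in units of delta_st
points_per_delta_st = [1.0 / d for d in dx]     # points resolving one delta_st
print(dx, points_per_delta_st)                  # ~0.078 delta_st, ~12.8 points

# Eddy turnover time from the stated turbulence parameters (units of delta_st / S_L)
u_prime_over_SL = 4.0
L11_over_delta_st = 2.5
t_eddy = L11_over_delta_st / u_prime_over_SL    # = 0.625 delta_st / S_L
print(t_eddy)
```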
Showing that the numerical results qualitatively agree with previous experiments (Hayashi, Kumagai, and Sakai 1977;Lawes and Saat 2011), these previous works also underline the suitability of the current methodology. The reaction progress variable c as used in the following evaluations is computed from the oxygen mass fraction: where z is the mixture fraction defined as O is the oxygen mass fraction in the pure air stream. The progress variable is identically 0 for unburned conditions and 1 for burned conditions, while z is 0 in the pure oxidizer stream and 1 in the pure fuel stream. The progress variable allows to take statistics conditional on the location relative to the flame in the following. Results and discussions Under the unity Lewis number assumption in single-phase combustion, normalized temperature and progress variable fields are identical to each other for a low Mach number and globally adiabatic condition. An evaporating liquid phase is present in the cases studied here, which extracts latent heat for evaporation. For this reason, the assumption c � T is no longer valid. It can be observed in the first row of Figure 1, where the case of premixed combustion with water injection is visualized, that the c iso-surface is smooth apart from wrinkles due to turbulence. In contrast, the T iso-surface features small-scale deformations in the proximity of evaporating water droplets. The water evaporation generates local temperature gradients but does not influence the local equivalence ratio and mixture fraction because the mass fractions of reactants are modified by the same factor due to the local steam release, i.e. density change. In spray combustion, the evaporation of fuel droplets affects the local equivalence ratio and thus also local value of the progress variable. In the second row of Figure 1, both iso-surfaces feature small dimples in the proximity of droplets; in c, these occur only near n-heptane droplets, while in T, the deformations are caused by both droplet types. These observations are in qualitative agreement with the experimental results of Hayashi, Kumagai, and Sakai (1977) and Lawes and Saat (2011) regarding the flame wrinkling induced by fuel droplets. The x À y midplane temperature fields are shown in Figure 2 for all the cases simulated. First, it is important to note that spray combustion without water injection results in a colder flame than for the purely premixed case. The two main reasons for this difference are the heat extraction due to liquid n-heptane vaporization and the difference in the gaseous equivalence ratio � g , which locally determines the chemical reaction rate. As mentioned before, all cases considered here feature the same overall equivalence ratio � ov ¼ 1. In the premixed cases, this is equivalent to stoichiometric combustion conditions, while for spray combustion cases, the gaseous equivalence ratio is predominantly less than unity, i.e. fuel-lean conditions. A crucial consequence of lower flame temperature is the attenuated thermal expansion and lower mean flow velocity on the burned gas side. Hence, the droplets move slower due to lower mean flow velocity, which implies a higher residence time inside the flame. One important effect of liquid water injection is the extraction of latent heat due to droplet evaporation, which is enhanced with small water droplets because the evaporation Figure 1. 
Iso-surface of progress variable c (1 st column) and temperature T (2 nd column) taken at c ¼ 0:9 and T ¼ 0:9 for premixed combustion (1 st row) and spray combustion (2 nd row) at t ¼ 1:1t chem . In all cases, the water droplet size is ða d =δ st Þ W ¼ 0:04. The light blue dots are water droplets, while the red dots indicate n-heptane droplets (not to the scale). rate is strongly dependent on the droplet size. The water has a much higher latent heat of vaporization than n-heptane, as can be seen in Table 1. For this reason, the n-heptane droplets rarely pass through the flame while water droplets persist also in the burned gas region. It may also be observed that the cooling effect on temperature is more homogeneous with a higher number of smaller water droplets than with a lower number of larger water droplets. Smaller droplets evaporate faster and the mean droplet inter-distance is also smaller at identical overall water loading Y W . In Figure 3, it is evident that in the spray combustion cases the steam concentration is higher than in the premixed cases. These effects are related to the flame temperature and thermal expansion as mentioned before. Note that the water vapor concentration Y g W Figure 2. Temperature field T on the x À y mid-plane, for premixed combustion (left column) and spray combustion (right column) cases without water injection (1 st row), with ða d =δ st Þ W ¼ 0:04 (2 nd row) and ða d =δ st Þ W ¼ 0:02 (3 rd row). The white dots are the n-heptane droplets, while the pink dots indicate the water droplets (not to the scale). White iso-contours represent c ¼ 0:1, 0:5 and 0:9, respectively. The results are taken at t=t chem ¼ 4:1. considered here is only due to liquid water evaporation, it excludes the water produced by the chemical reaction. When the flame temperature decreases and the reactants are diluted, flame thickening occurs, as can be observed when comparing the two types of flames in Figures 2 and 3. The thicker the flame, the higher the droplet residence time within the flame, which implies more evaporation and more homogenization by diffusion. As a consequence, the flame burns under fuel-richer conditions and under a relatively increased premixed combustion mode, as will be elaborated later. The higher water vapor concentrations inside the flame for spray combustion cases in comparison to premixed combustion cases are quantified in Figure 4. These quantitative results agree with what can be observed qualitatively from the slices. It is also evident that the steam concentration inside the flame is more than two times higher with the smaller water droplets than with the larger water droplets which is consistent with the evaporation law depending quadratically on the droplet size. In the spray combustion case with the smaller water droplets, the peak value of the gaseous water concentration Y g W is close to the overall water loading of Y W ¼ 0:1. Figure 5 reports the turbulent burning velocity, defined by The white dots are the n-heptane droplets, while the pink dots indicate the water droplets (not to the scale). White iso-contours represent c ¼ 0:1, 0:5 and 0:9, respectively. The results are taken at t=t chem ¼ 4:1. for the different cases with and without water injection. It can be seen from Figure 5 that the behavior of S T in the two types of flames is largely different because S T in the premixed cases shows an increasing trend before reaching a quasi-steady state, while the spray combustion cases maintain more or less constant values of S T . 
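The defining integral for S_T is not reproduced in the extracted text. The sketch below therefore uses a generic volume-integral estimate that is common for statistically planar flames (integrated fuel consumption rate normalized by the unburned density, the unburned fuel mass fraction and the projected cross-sectional area); the exact normalization adopted in the paper may differ, and all numbers in the example are placeholders.

```python
import numpy as np

def turbulent_burning_velocity(omega_F, dx, dy, dz, rho_u, Y_F_u, A_proj):
    """Generic volume-integral estimate for a statistically planar flame (assumed form):
        S_T = integral(omega_F dV) / (rho_u * Y_F_u * A_proj)
    omega_F : 3-D array of fuel consumption rate."""
    return np.sum(omega_F) * dx * dy * dz / (rho_u * Y_F_u * A_proj)

# Illustrative call on a synthetic reaction-rate field (placeholder numbers)
nx, ny, nz = 64, 32, 32
omega = np.zeros((nx, ny, nz))
omega[30:34, :, :] = 5.0          # thin reacting layer
S_T = turbulent_burning_velocity(omega, dx=1e-4, dy=1e-4, dz=1e-4,
                                 rho_u=1.1, Y_F_u=0.062,
                                 A_proj=(32 * 1e-4) * (32 * 1e-4))
print(S_T)
```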
It is also clear that the water addition affects the burning velocity significantly. Already with the larger water droplets, the turbulent burning velocity with water injection is lower than in the reference cases without water. With smaller droplets, the deficit in burning velocity becomes even stronger. The water addition effects on burning velocity are mainly due to the cooling effect which directly influences the chemical reaction rate. The turbulent burning velocity is strongly related to the flame area, as can be seen from the comparison of Figures 5 and 10 which are showing similar general trends. Hence, it is interesting to decouple the flame area effects from the burning velocity by examining the burning rate per unit area Ω: For a purely premixed case at moderate turbulence intensities, the value of Ω=ðρ 0 S L Þ is close to unity as postulated by Damköhler's first hypothesis, thus the deviation from this value indicates the departure from the purely premixed case (Hasslberger et al. 2021). Consistent with earlier observations, the spray combustion cases show a different trend to premixed combustion in Figure 6. The water injection lowers the burning rate per unit area. However, in the larger water droplet cases, this effect remains marginal, while it becomes significant with smaller droplets. From Figures 5 and 6, it is possible to conclude that the nature of the fuel supply to the flame (i.e. entirely in gaseous phase for the premixed flame or in the form of droplets in the spray flame) is responsible for the main difference between the cases. Nevertheless, to isolate the effects of water injection and to evaluate the different sensitivity to initial droplet diameter, it is instructive to measure the relative changes with respect to the reference value Ω ref without water injection for the corresponding flame. In Figure 7, the trends of the relative burning velocity per unit area Ω droplets =Ω ref are reported. Apparently, the relative burning velocity follows a similar trend for both premixed and spray combustion in the case with large water droplets. This means that the differences on the left side of Figure 6 are mostly related to the nature of the fuel supply to the flame (i.e. entirely in gaseous phase or in the form of droplets) and not due to the different responses to water injection. However, a different behavior can be observed in the case with smaller water droplets, where the spray combustion case shows a higher sensitivity to water injection. This holds true especially in the first part of the simulation, in which Ω droplets =Ω ref decreases to considerably different values of around 0.8 and 0.4, respectively. The spray combustion cases are characterized by a wider flame thickness than the premixed combustion cases which is associated to the changes in flame temperature. The flame thickness δ SDF can be defined as where jÑcj is the flame surface density function (SDF) and hQi stands for an average conditional on the progress variable c. Consistent with earlier observations, the effect on flame thickness is more substantial with smaller water droplets as can be seen in Figure 8. The normalized surface density function is shown in Figure 9 exemplarily for t ¼ 3:0t chem , where both premixed and spray combustion cases show the expected trends. It can again be observed that the spray combustion case shows a higher sensitivity. 
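The expression defining δ_SDF also did not survive extraction. The following sketch assumes one common convention, namely the inverse of the peak of the mean |∇c| conditioned on c; it is meant only to illustrate how such a thickness can be evaluated on DNS data, not to reproduce the paper's exact definition, and the test field is synthetic.

```python
import numpy as np

def sdf_flame_thickness(c, dx, dy, dz, nbins=50):
    """Sketch: flame thickness from the surface density function |grad c|, assuming
    delta_SDF = 1 / max_c < |grad c| | c > (inverse of the peak conditional mean)."""
    gx, gy, gz = np.gradient(c, dx, dy, dz)
    sdf = np.sqrt(gx**2 + gy**2 + gz**2)

    bins = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.clip(np.digitize(c.ravel(), bins) - 1, 0, nbins - 1)
    cond_mean = np.array([sdf.ravel()[idx == b].mean() if np.any(idx == b) else 0.0
                          for b in range(nbins)])
    return 1.0 / cond_mean.max()

# Illustrative use on a synthetic planar tanh progress-variable field
x = np.linspace(0.0, 1.0, 128)
c = 0.5 * (1.0 + np.tanh((x[:, None, None] - 0.5) / 0.05)) * np.ones((128, 16, 16))
print(sdf_flame_thickness(c, dx=x[1] - x[0], dy=1.0, dz=1.0))
```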
In the cases with smaller water droplets, the flame thickness increases by a factor of about 1.5 with respect to the reference case without water injection, whereas it is only a factor of about 1.2 in the case with premixed combustion. The flame area and turbulent burning velocity are strongly related as already stated. The flame area is computed as (13) and shown as a function of time in Figure 10. In the configuration examined, the flame area evolves due to the interaction with turbulence and droplets. Owing to the moderate turbulence intensity investigated, the turbulence has a flame-area enhancing effect due to flame wrinkling. In contrast, the flame-droplet interaction could have an increasing or decreasing effect on the flame area generation depending on the heat extraction and concentration field deformation. As already observed in Figure 1, the water droplets barely affect the reactant concentration fields. Hence, the dominant consequence on the flame area is related to the (local) flame temperature reduction, i.e. gas density elevation. It is important to mention in this context that flame thickening makes the flame more resistant to wrinkling due to turbulent velocity fluctuations. On the contrary, the n-heptane droplets strongly affect the local mixture fraction and the progress variable field. For this reason, the spray combustion cases have a higher flame area than premixed cases during the initial interaction stage, i.e. until approximately 1.5 chemical time scales. After that, the dampening of flame surface disturbances becomes dominant, resulting in a lower flame area for the spray cases. Since normalized temperature and progress variable are not coincident in the cases with multi-phase combustion and phase change, it is instructive to calculate also the flame area based on the nondimensional temperature field: With respect to this quantity, also the water droplets have a noticeable effect on the flame area due to local deformations of the temperature field, as can be noticed in Figure 1. The temporal evolution of the temperature-based flame area A T is shown in Figure 11. The flame area in the final stage of the simulation becomes higher in the cases with water injection than in those without (at least with the larger water droplets). This is a consequence of the water droplets evaporating more slowly, i.e. their effect becomes evident later as compared to n-heptane droplets. The evaporation rate is stronger for smaller droplets, which induces stronger cooling effects and as a result the turbulence augmentation due to thermal expansion is weaker for smaller droplets (Hasslberger et al. 2021). The weaker turbulence intensity within the flame for smaller droplets also give rise to a reduction in flame surface area with decreasing initial water droplet diameter. In spray combustion, this becomes the dominant mechanism, whereas in premixed combustion with water injection, the flame area can become even higher than in the purely premixed case due to droplet-induced wrinkling of temperature iso-surfaces. In order to explain the effects of water injection on overall burning rate and flame surface area in spray flames, it is useful to look into the mixture composition and the mode of combustion within the reaction zone, which will be analyzed next in this paper. Figure 12 shows probability density functions (PDFs) of gaseous equivalence ratio PDFð� g Þ inside the flame (0:1 � c � 0:9) for the spray combustion cases for different initial water diameters. 
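Before turning to the mixture-composition analysis of Figure 12, the flame-area evaluation referenced above can be illustrated as follows (Equation (13) itself is not shown in the extract). The sketch assumes the usual generalized-surface-area form, i.e. the volume integral of |∇c| (or of the normalized-temperature gradient for A_T); the burning rate per unit area Ω then follows by dividing the volume-integrated reaction rate by this area.

```python
import numpy as np

def flame_surface_area(field, dx, dy, dz):
    """Sketch: generalized flame surface area A = integral of |grad field| over the
    volume, using numpy's central-difference gradient in the interior. 'field' can be
    the progress variable c (giving A) or the normalized temperature (giving A_T)."""
    gx, gy, gz = np.gradient(field, dx, dy, dz)
    return np.sum(np.sqrt(gx**2 + gy**2 + gz**2)) * dx * dy * dz

# Illustrative use: planar tanh "flame" in a unit-cross-section box
nx, ny, nz = 128, 16, 16
x = np.linspace(0.0, 1.0, nx)
c = 0.5 * (1.0 + np.tanh((x[:, None, None] - 0.5) / 0.05)) * np.ones((nx, ny, nz))
A = flame_surface_area(c, dx=x[1] - x[0], dy=1.0 / ny, dz=1.0 / nz)
print(A)   # ~1.0 (the cross-section) for an unwrinkled planar front
```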
The equivalence ratio distributions inside the spray flames in Figure 12 show richer conditions and a somewhat broader spectrum of local gaseous equivalence ratio values with water injection. The behavior of the � g distribution can be explained by the cooling effect of the water on the temperature field, which has a direct influence on the flow velocity field. The latter affects the mixing and residence time of the droplets inside the different regions of the flame. These effects are elaborated in the subsequent analysis. The trend observed in Figure 12 for turbulent cases is confirmed for laminar cases in Figure 13. In this way, the direct consequences of water injection are isolated from the indirect consequences related to turbulence. As expected, the discrepancy with the case without water injection is less visible for larger water droplets. Although Figures 12 and 13 seem to suggest that the mixture field becomes less homogeneous with water injection, i.e. enhanced non-premixed nature of the flame, the following combustion mode analysis shows an opposite trend. The mixture inhomogeneity within the flame has implications on the mode of combustion, which can be characterized in terms of the flame index proposed by Yamashita, Shimada, and Takeno 1996): The value of this index is bounded between À 1 and 1, where a predominantly nonpremixed combustion behavior is expected with a negative value, while the flame is predominantly premixed with a positive value. In Figure 14, the distribution of the combustion mode within 0:1 < c < 0:9 and the heat release contributions from different modes of combustion are presented. It is possible to observe the increased premixed nature of the flame with water injection. Moreover, the flame becomes increasingly premixed as the water evaporation becomes more efficient. In Figure 14, it is interesting to note that the heat release is predominantly due to premixed combustion, while the non-premixed mode combustion mainly characterizes the structure of the entire flame (including the preheating and heat release zone). The behavior of the � g distribution has been explained by the cooling effect of the water on the temperature field, which has a direct influence on the flow velocity field. The latter affects the mixing and residence time of the droplets inside the different regions of the flame as stated earlier. Accordingly, the mixing rate is less intense with water injection. This promotes regions of high fuel concentration in the proximity of the n-heptane droplets while they are evaporating. The attenuated thermal expansion is likely to be the reason for the fuel-richer conditions, on average, with water injection. The flame thickening also causes an increase in the evaporation of the droplets within the flame in addition to the effects induced by flow velocity reduction. In a thicker flame, the droplets need more time to reach the high-temperature zone. All these phenomena related to the decrease of temperature act in the sense of stretching the space of fuel droplet evaporation which results in increased mixture inhomogeneity. While this is consistent with the widening of the probability density function of � g , it cannot explain the trend of the flame index analysis, i.e. the increase of premixed combustion mode with water injection and decreasing initial diameter of the water droplets. The decrease of the water droplet diameter makes the effect of water evaporation on the flame more homogeneous. 
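The flame index formula itself is not shown in the extract; the sketch below implements the normalized Takeno index, ∇Y_F · ∇Y_O / (|∇Y_F||∇Y_O|), which matches the stated bounds of −1 and 1. The input fields in the example are synthetic placeholders.

```python
import numpy as np

def normalized_flame_index(Y_F, Y_O, dx, dy, dz, eps=1e-30):
    """Sketch of the normalized Takeno flame index,
        FI = (grad Y_F . grad Y_O) / (|grad Y_F| |grad Y_O|),
    bounded between -1 (non-premixed-like) and +1 (premixed-like)."""
    gF = np.gradient(Y_F, dx, dy, dz)
    gO = np.gradient(Y_O, dx, dy, dz)
    dot = sum(a * b for a, b in zip(gF, gO))
    mag = np.sqrt(sum(a * a for a in gF)) * np.sqrt(sum(b * b for b in gO))
    return dot / (mag + eps)

# Illustrative use on synthetic 3-D mass-fraction fields (placeholder data)
rng = np.random.default_rng(0)
Y_F = rng.random((32, 32, 32))
Y_O = 0.23 * (1.0 - Y_F) + 0.01 * rng.random((32, 32, 32))
FI = normalized_flame_index(Y_F, Y_O, 1.0, 1.0, 1.0)
print(FI.min(), FI.max())
```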
Furthermore, due to lower flame temperature, thus lower reactivity, the width of the mixing-dominated region of the flame (closer to the unburned side) relatively increases compared to the reaction-dominated region of the flame (closer to the burned side). Consequently, the time for mixing increases and the reaction-dominated region of the flame occurs under a more uniform mixture. These two effects might explain why the flame becomes more premixed with water injection and why this tendency strengthens with smaller droplets. Table 3. Flame thickness δ SDF , flame brush thickness δ FB and fuel droplet mean residence time t res ¼ δ i =hui i (where i stands for SDF or FB depending on whether the flame thickness or flame brush thickness is considered). hui i stands for the averaged velocity of the fuel droplets within the flame ðSDF; 0:1 � c � 0:9Þ or within the flame brush ðFB; 0:1 � c � 0:9Þ. The statistics are taken for all the spray combustion cases at t ¼ 3:0t chem . The L stands for laminar and in this case δ st indicates the unstretched stoichiometric premixed flame thickness. Tables 3 and 4 show the mean fuel and water droplet residence times within the flame for the different spray combustion cases. Due to turbulent velocity fluctuations, the absolute droplet velocities, on average, are higher in the turbulent cases than in the corresponding laminar cases. Moreover, the difference between the flame brush thickness δ FB and flame thickness δ SDF is obviously more significant with turbulence. In this context, the flame brush thickness is defined as: where the operator Q indicates spatial averaging of Q in both homogeneous directions ðy À zÞ. In the turbulent cases, the definition of the mean droplet residence time within the flame is less straightforward, because the droplets do not follow a straight trajectory through the flame. Using the flame brush thickness considers the active region of the turbulent flame in the physical space. As demonstrated by the Tables 3 and 4, the droplets consistently show a higher residence time with water injection. The reasons for this, i.e. flame thickening and attenuated thermal Table 4. Flame thickness δ SDF , flame brush thickness δ FB and water droplet mean residence time t res ¼ δ i =hui i (where i stands for SDF or FB depending on whether the flame thickness or flame brush thickness is considered). hui i stands for the averaged velocity of the water droplets within the flame ðSDF; 0:1 � c � 0:9Þ or within the flame brush ðFB; 0:1 � c � 0:9Þ. The statistics are taken for all the spray combustion cases at t ¼ 3:0t chem . The L stands for laminar and in this case δ st indicates the unstretched stoichiometric premixed flame thickness. expansion, have been discussed earlier and the consequences on the flame structure are discussed in the following. The flame index distribution in Figure 15 shows a predominantly non-premixed combustion mode in the first quarter and then the flame becomes predominantly premixed. In this first part, the dominant phenomenon is the droplet evaporation, generating many high fuel concentration spots. Thereafter, the mixing becomes dominant and redistributes the evaporated fuel. In the last part, where the reactivity becomes dominant, the heat release is mainly due to premixed combustion and only a small percentage is due to non-premixed mode of combustion. Note that the heat release rate peaks at c � 0:7 À 0:8 for the present thermo-chemistry. 
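The flame brush thickness expression is likewise missing from the extract. The sketch below assumes the common definition based on the maximum gradient of the y-z plane-averaged progress variable, and combines it with the residence-time estimate t_res = δ_i / ⟨u⟩_i quoted in Tables 3 and 4; all input data are placeholders.

```python
import numpy as np

def flame_brush_thickness(c, dx):
    """Sketch: flame brush thickness from the y-z plane-averaged progress variable,
    assumed here as delta_FB = 1 / max |d<c>/dx| (a common convention)."""
    c_bar = c.mean(axis=(1, 2))    # average over the homogeneous directions
    return 1.0 / np.max(np.abs(np.gradient(c_bar, dx)))

def mean_residence_time(thickness, droplet_speeds):
    """Sketch: t_res = delta / <u>, with <u> the mean droplet speed inside the flame."""
    return thickness / np.mean(droplet_speeds)

# Illustrative use (placeholder field and droplet velocities)
x = np.linspace(0.0, 3.0, 192)
c = 0.5 * (1.0 + np.tanh((x[:, None, None] - 1.5) / 0.2)) * np.ones((192, 16, 16))
delta_fb = flame_brush_thickness(c, dx=x[1] - x[0])
print(delta_fb, mean_residence_time(delta_fb, droplet_speeds=np.array([0.8, 1.0, 1.2])))
```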
In Figure 15, the cases with water injection show a stronger nonpremixed mode than the cases without water in the first half of the flame, whereas from c � 0:5, there is an inversion and the cases with water injection tend to exhibit more premixed mode of combustion than the cases without water. This tendency is getting stronger when the initial diameter of the water droplets decreases. As previously mentioned, the flow is dominated by evaporation and mixing in the first part of the flame. The turbulent transport is less intense with water injection and so the mixing is less effective. However, due to the flame thickening and lower thermal expansion, the space and the time for mixing increase, leading to enhanced homogeneity towards the burned gas side of the flame. Conclusions The carrier-phase DNS conducted in this work are based on a hybrid Eulerian-Lagrangian-Lagrangian approach with two-way coupling between the gaseous phase and different kinds of liquid phases, i.e. evaporating fuel droplets and water droplets. It is shown that premixed flames and spray flames respond differently under the influence of water injection despite having the same overall equivalence ratio. The main conclusions of the present work are: • The discrepancies between gaseous premixed and spray flames are mainly due to the difference in the gaseous equivalence ratio, which remains lower in spray combustion cases in comparison to the overall equivalence ratio. In the simulations considered here, where a stoichiometric overall equivalence ratio is imposed, the combustion in spray flames occurs under predominantly fuel-lean conditions, hence the flame temperature is lower than for gaseous premixed cases. Since the flame temperature directly affects the flame thickness and the strength of thermal expansion, it also affects the residence time of the droplets inside the flame and increases the consequences associated with them. Accordingly, the water droplets alter the burning velocity and mean flow velocity via the cooling effect on gas temperature. In summary, the differences in droplet residence time and flame structure make the spray combustion more sensitive to the variation of initial water droplet diameters. • In terms of the flame morphology, the consequences of water droplets are mainly reflected by the temperature field, whereas the n-heptane droplets deform both the temperature and progress variable fields. • The net effect of water injection is a partial shift from non-premixed to premixed mode. At the same time, the spray combustion occurs under relatively fuel-richer conditions with water injection and these trends appear consistently in laminar and turbulent flows. Both observations are a consequence of the flame cooling which also affects the mixing characteristics within the flame. Both the mixing-and reaction-dominated region of the flame are stretched as a result of the flame thickening. This work initiates potential future investigations on spray combustion with water injection. For example, one may consider a broader range of important parameters, like initial droplet diameter, overall water loading and overall equivalence ratio to evaluate the consequences of water injection as a means of manipulating the combustion process. Finally, it will be worthwhile to validate the findings of the current analysis in the presence of a detailed chemical mechanism to investigate the effectiveness of water injection on pollutant emission reduction in terms of quantitative predictions.
9,252.6
2023-02-28T00:00:00.000
[ "Physics" ]
A Study of the Effects of Different Indoor Lighting Environments on Computer Work Fatigue The indoor lighting environment is a key factor affecting human health and safety. In particular, people have been forced to study or work more for long periods of time at home due to the COVID-19 pandemic. In this study, we investigate the influence of physical indoor environmental factors, correlated color temperature (CCT), and illumination on computer work fatigue. We conducted a within-subject experiment consisting of a 10 min-long task test under two different illumination settings (300 lx and 500 lx) and two CCTs (3000 K and 4000 K). Physiological signals, such as electroencephalogram (EEG), electrocardiograph (ECG), and eye movement, were monitored during the test to objectively measure fatigue. The subjective fatigue of eight participants was evaluated based on a questionnaire conducted after completing the test. The error rate of the task test was taken as the key factor representing the working performance. Through the analysis of the subjective and objective results, computer work fatigue was found to be significantly impacted by changes in the lighting environment, where human fatigue was negatively correlated with illumination and CCT. Improving the illumination and CCT of the work environment, within the scope of this study, helped to decrease the fatigue degree—that is, the fatigue degree was the lowest under the 4000 K + 500 lx environment, while it was relatively high at 3000 K + 300 lx. Under indoor environment conditions, the CCT factor was found to have the greatest effect on computer work fatigue, followed by illumination. The presented results are expected to be a valuable reference for improving the satisfaction associated with the lighting environment and to serve as guidance for researchers and reviewers conducting similar research. Introduction Work fatigue is a suboptimal psychophysical condition, caused by continuous work [1]. The associated symptoms usually include poor eyesight; blurred vision; and, in severe cases, brain fatigue and headaches. With the increasing popularity of computers and the diversification of software functions, computer work has become the most effective form of work in this era. Computer vision syndrome (CVS) describes a group of eyeand vision-related problems, such as vertigo and dry eyes, resulting from prolonged computer use [2]. The major factors associated with CVS are either environmental (improper lighting, display position, and viewing distance) and/or dependent on the user's visual abilities [3]. Nearly 60 million people suffer from CVS globally, especially undergraduate students [4], resulting in reduced productivity at work and reduced quality of life of computer workers [5]. However, visual fatigue and brain fatigue caused by prolonged computer use can severely the impair cognitive functions of users, reducing the worker's operating ability and work accuracy [6]. Therefore, computer work fatigue detection has become a research hot spot in the fields of driving and workplace ergonomics. As an indispensable part of the indoor environment, artificial lighting largely affects the physiology and psychology of individuals, such as their circadian rhythm, human performance, emotion, and cognition [7,8]. The constancy of the luminous flux in the light environment, and the associated flickering effect, play a crucial role in visual and brain fatigue. 
Studies have shown that the adverse physiological reactions caused by stroboscopic flicker include distraction and vision loss [9]. With the popularization of photobiological effects theory [10], the role of nonvisual effects in the evaluation of lighting environments has gained more attention. Zhonggui-Yin et al. [11] conducted a clinical analysis of 503 cases of asthenopia and found that the lighting environment had a greater impact on asthenopia. Soo Young Kim et al. [12] showed that fluctuations in work illumination can affect the copywriting of office workers, as well as visual inspection by VDT-related staff. Llinares et al. [13] have investigated the effect of light on student performance and showed that, while attention improves with higher light levels, memory improves with lower light levels. Furthermore, higher CCTs generated better performance in both attention and memory tasks. Additionally, different lighting environments have different effects on visual fatigue. Mingwei-Xu et al. [14] found that the visual fatigue of operators increased as the light intensity increased under various illumination environments (400 lx, 550 lx, 700 lx). Zhang Rui et al. [15] demonstrated that people under high illumination at 1000 lx are more inclined to sustain attention, and 300 lx + 4000 K was recommended for university architectures. MG Figueiro et al. [16] demonstrated that lighting systems delivering a circadian stimulus (CS) value ≥ 0.3 can reduce sleepiness and increase alertness in office workers. Cajochen et al. [17] indicated that high-intensity light can increase alertness and reduce drowsiness compared to low illumination. Banu Manav [18] demonstrated a preference for a 4000 K color temperature, compared to 2700 K, for impressions of "comfort and spaciousness" in an office.In his study, the participants liked "the combined color temperature mood" and the majority considered using it in their offices. Fatigue quantification methods can be mainly divided into four categories: subjective scale methods, objective experiment methods, observation methods, and physiological index measurement methods [19]. A subjective scale method requires subjects to evaluate their fatigue through a questionnaire. By using a subjective evaluation method, Mingli Lu et al. [20] concluded that illumination has a very substantial effect on the perception of fatigue and relaxation. Rachmawati et al. [21] used questionnaires for a fatigue rating scale, based on work fatigue, to describe the overall work fatigue assessment. Physiological signal measurement methods can provide a realistic indication of fatigue, mainly through monitoring physiological parameters (e.g., EEG, ECG, Eye Movements). Y Wang et al. [22] studied the visual fatigue associated with VDT work, and showed that physiological signals have a high correlation with subjective visual fatigue. The rhythm waves in EEG, such as low-frequency waves (θ waves and α waves) and high-frequency waves (β waves), can effectively reflect changes in fatigue and alertness [23][24][25]. The ratio of different frequency waves in the EEG can also be used for the assessment of fatigue. Zhang et al. [26] found that the parameters β/α and (α + θ)/β can be used to measure fatigue. ECG is also a reliable technique for detecting fatigue. Even simple recording of changes in heart rate can be used as an indicator of fatigue detection. Takahashi et al. 
[27] used a multiple linear regression model to assess driver fatigue and concluded that the three indicators of the ECG signal (i.e., HR, HF, and RMSSD) showed remarkable changes as the level of driving fatigue increased. Physiological monitoring methods based on eye movements have the advantage of noninvasiveness. Studies have shown that pupil diameter, saccade speed, and fixation number are all associated with the development of work fatigue. [28]. Hu Xinyun et al. [29] overloaded subjects for an hour and found that the blink duration, average saccade speed, and saccade duration all increased, while the diameter of the pupil decreased. In this study, we explore the relationship between different lighting environments and computer work fatigue through a combination of subjective, behavioral, and physiological aspects. Subjectively, the fatigue degree is quantified through a fatigue self-test questionnaire. Behaviorally, a task test is performed, and the degree of work fatigue is represented by task performance (error rate). Physiologically, the degree of fatigue under each light environment is estimated by collecting changes in EEG, ECG, and Eye Movements. Although studies based on the relationship between human fatigue and the illuminated environment have become popular, the underlying functions remain unclear, as most research has been conducted in laboratory settings using inappropriate instruments. In particular, multifactorial analyses remain rare, due to the combination of various physiological indicators. Experimental Methodology The experiment was performed in a full-scale mock-up office space, with dimensions of 210 cm-wide, 190 cm-deep, and 300 cm-high. Figure 1 shows the detailed layout of the mockup space. There were no windows in the room to prevent natural light from interfering with the experiments. Two desks with dimensions of 1. To prevent the influence of external light on the experiment, the inner walls of space were covered with black cloth to avoid any specular reflection. Considering that there were obvious gradient changes of lighting factors in the environmental setting, glare conditions generated by the position of the light source were avoided as much as possible in order to create a good visual environment for the subject. Therefore, the vertical distance between the lamp stand and the desktop and the horizontal distance between the subject and the computer screen in this experiment was no less than 60 cm. The lighting equipment included Philips T5 bracket lamps, which meet the needs of indoor lighting. A desktop computer was installed to perform the test tasks. The visual display terminal adopted an IPS (In-Plane Switching)-type LCD panel, ensuring the stability of the picture and preventing the flicker effect. In addition, it had an ultra-low radiation value, about 0.11 µT. The specific parameters of the experimental equipment are provided in Table 1. LED tubes were placed on a shelf at an appropriate distance. Increasing or decreasing the number of tubes was conducted to meet the required illumination or correlated color temperature (CCT) levels for the experiment. During the experiment, various measures were taken, such as measuring the light environment parameters in real-time and turning off the automatic brightness adjustment function of the display terminal in order to maintain the constancy of the luminous flux, reduce the flicker effect, and avoid affecting the experimental results. 
Working Conditions The architectural lighting design standard GB 50034-2013 stipulates that the color temperature of ordinary offices is required to be in the range of 3300-5300 K, with the standard illumination value of 300 lx. Considering the photobiological safety of LEDs, the color temperature of LED light sources should be no more than 4000 K. In order to avoid the influence of human thermal sensation differences in the experiment, it was necessary to ensure that the subjects were in the thermal comfort range. In [30], human thermal comfort has been discussed for Cyber-Physical Human-Centric Systems in smart homes. In [31], the correlation of personal lighting comfort model factors have been analyzed in Cyber-Physical Human-Centric Systems. This experiment was carried out in winter. The Royal Chartered Building Equipment Association (CBBSE) [32] has provided indoor environmental design specifications to meet thermal comfort, which recommend that the suitable temperature for an office in winter is 20-22 • C, and the humidity should be less than 50%. According to the Chinese standard GB 50019-2003 [33], the standard temperature for comfortable working in winter should be in the range of 20-24 • C, and the relative humidity should be in the range of 30-70%. Generally, when the temperature is 16-25 • C and the relative humidity is 30-70%, there is little effect on the thermal sensation of the human body. According to the standard GB/T18049-2000, the Predicted Mean Vote (PMV) value of the evaluation index characterizing the human thermal response (heat and cold) should be −1 PMV +1. Based on the requirements of the above specifications for lighting environment and thermal comfort, we designed four different lighting environments. Parameters of the specific operating conditions are given in Table 2. Subjects We screened eight subjects who met the requirements of the experiment. Four males and four females with normal color vision and no eye disease participated. The experiment required the subjects to work and rest normally in the 24 h before the experiment, in order to ensure that they had received adequate sleep. No stimulating drinks, such as alcohol, coffee, or functional drinks, were consumed 5 h before the experiment. No kind of physical or mental exercise was performed 3 h before the experiment. The ISO7730, issued by the International Organization for Standardization (ISO), stipulates that the applicable conditions of the comfort standard are as follows: the person is sitting, engaged in light physical activity (Metabolic Rate M < 1.2 MET), and the clothing insulation is 0.5 clo in summer and 1.0 clo in winter. In this experiment, the clothing insulation of the subject was set to 0.9-1.04 clo, which basically met the international standard, and a metabolic level of 1.0 MET was assumed. In addition, the thermal insulation of the seat selected in the experiment can be ignored. Subject information is provided in Table 3. EEG An electroencephalograph (EEG) is the sum of the spontaneous electrical activity of pyramidal cell dendrites on the cerebral cortex, which can effectively reflect brain fatigue and the state of cerebral cortex excitement [34]. Therefore, in this study, EEG signals were selected as evaluation indicators reflecting the effects of different lighting environment parameters on fatigue. 2. HRV Heart rate variability (HRV) is a set of quantitative indicators reflecting the activity of the autonomic nervous system. 
It is an effective indicator of the body's mental and physical fatigue and workload [35]. Eye Movement The human eye is the organ that perceives light most directly. Data related to changes in eye activity, such as pupil diameter, number of fixation points, and distribution area, can intuitively reflect the fatigue state. Subjective Methodology Subjective methods used in this study included a subjective fatigue scale and task evaluation. The subjective scale method requires subjects to answer questions in the table according to their own perceived state. This method provides a visual representation of the subject's true state [36]. Metering Operation Method The metering operation method is a commonly used method for testing work efficiency. Moreover, work efficiency is an important indicator that reflects the visual effect. Common visual task scales include letter recognition, image recognition, and so on. For this experiment, we adopted the original material design. Through analysis of the test error rate of the subjects, their work efficiency can be judged, reflecting the fatigue degree of computer work under the various lighting environments [36]. Equipment In the experiment, a 16-channel EEG cap was used to collect EEG, where the EEG signal acquisition method was semidry electrode measurement. A physiological information collection multimodule was used to collect the ECG signal. The use of an eye tracker is common for eye movement recording. It is able to record eye movement information while, at the same time, overlaying eye movement information with scene images, allowing for comparison of the experimental data with the real scene. The specific parameters of the physiological signal acquisition equipment are given in Table 4. Analysis Platform For data collection and processing, we mainly used the Ergo-LAB human-machine environment synchronization test cloud platform, which has been widely used for experimental design, data collection, analysis and statistics, and human behavioral research. Experimental Materials The letter distinguishment test used random English letters and a target letter. The subject was required to judge whether the target letter was included in five random letters within the specified time, as shown in Figure 2. In such a test, the participant's mental resources may be quickly exhausted, due to the constant refreshment of their short-term memory, thus providing an indication of their fatigue state. • E-word Memory Test The E-word memory test, shown in Figure 3, requires the subject to count the number of target items out of a given combination of target and interference items. All of the target items that were wrongly selected, selected multiple times, or missed were judged as wrong. In this test, the intensity of daily computer work can be simulated by eliminating interference and maintaining short-term memory. Subjective Fatigue Questionnaire The subjective fatigue questionnaire contained eight fatigue status items, including vertigo, headache, and so on. Subjects provided scores for these items individually, where a higher score represented a deeper fatigue state. The evaluation scale is shown in Figure 4. Experimental Process The experimental process was divided into three parts: Experimental Preparation, Experimental Implementation, and Experimental Transition. The experimental process is depicted in Figure 5. 1. The experimenters and participants first carried out relevant preparatory work before the experiment. 
The preparation process included an explanation of the experiment process and an introduction to the experimental equipment. 2. Participants performed the task test for 10 min. After that, the subjects were required to fill in a questionnaire according to their subjective feelings towards the lighting environment. 3. After a 10 min intermission, the subjects entered different working conditions for the next round of the experiment. Subjective Questionnaire The average value of the eight subjective evaluation indices was calculated as a fatigue index for each subject under different working conditions, as shown in Figure 6. The fatigue index distribution under the different working conditions was relatively uniform. However, subjective fatigue was lower under C3-I5 (3000 K + 500 lx) and C4-I5 (4000 K + 500 lx). It can be speculated that the low color temperature gave the subjects a sense of comfort, while the high color temperature and high illumination led to a bright feeling. However, the subjective data showed great interpersonal differences; so, further analysis of objective data is required. EEG EEG has high time resolution and excellent characteristics in the frequency domain. According to the physiological characteristics of EEG in different frequency bands, it can be divided into α, β, γ, δ, and θ waves [37]. The EEG frequency classification is shown in Table 5. Electroencephalography topographical representation (ETR) can reflect changes in brain function and express them as changes in color intensity; using this, a percentage topographic map of the ratio between different brain wave frequency band combinations can be obtained. In this experiment, the ETR of the subject at rest was compared with that after the task test. As shown in Figure 7, the change in state after the task test had a significant effect on the energy distribution of the different rhythm waves in the EEG. We further analyzed the proportion of rhythmic waves and the proportion of fast and slow waves to measure the degree of fatigue. Studies have shown that, when the cerebral cortex is inhibited, the slow-wave content gradually increases in the fatigue state and the θ wave appears [38]. As stated in [39], an increase in θ and δ waves indicates a serious degree of brain fatigue. The characteristic parameter of the basic rhythm of EEG, the Frequency Band Energy Ratio (FBER, hereafter referred to as R), represents the proportion of the δ, θ, α, and β waves in the total wave. The magnitude and change of R can be used to effectively judge the degree of brain fatigue. The frequency band energy ratio R was obtained using Equations (1) and (2): R_j(k) = E_j(k) / E_all(k), (1) E_all(k) = E_δ(k) + E_θ(k) + E_α(k) + E_β(k), (2) where j denotes any of the δ, θ, α, and β frequency bands; E_j(k) is the power value of frequency band j (dB); and E_all(k) represents the total power value of the four bands (dB). After sorting out the data, the distribution of the proportion of each wave in the EEG to the total wave was obtained, as shown in Figure 8. From Figure 8, it can be seen that the δ wave had an average band energy ratio of 71.76%, being overwhelmingly dominant in the total wave. This indicated that subjects began to experience varying degrees of fatigue after the task test. The fatigue-related θ band energy ratio was the highest under C3-I3 (3000 K + 300 lx), approximately 10% higher than under the other working conditions. Under C3-I5 (3000 K + 500 lx), the θ wave content was 20.59% and β waves were the least abundant. 
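As a minimal illustration of the band energy ratio R just defined, the sketch below computes R_j for each rhythm band from per-band power values. The band powers shown are hypothetical and would in practice come from a spectral decomposition of the recorded EEG (e.g., Welch's method), which is not shown; the (α + θ)/β ratio discussed in the next paragraph is also computed.

```python
def band_energy_ratios(band_power):
    """Frequency Band Energy Ratio R_j = E_j / E_all (Equations (1)-(2))."""
    bands = ("delta", "theta", "alpha", "beta")
    e_all = sum(band_power[b] for b in bands)            # Eq. (2)
    return {b: band_power[b] / e_all for b in bands}     # Eq. (1)

def slow_to_fast_ratio(r):
    """(alpha + theta) / beta, the slow/fast wave ratio reported in Table 6."""
    return (r["alpha"] + r["theta"]) / r["beta"]

# Hypothetical per-band powers for one subject under one working condition
powers = {"delta": 140.0, "theta": 40.0, "alpha": 12.0, "beta": 8.0}
ratios = band_energy_ratios(powers)
print({band: round(value, 3) for band, value in ratios.items()})
print(round(slow_to_fast_ratio(ratios), 2))   # 6.5 for these made-up powers
```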
The total percentage of β and δ waves was also the smallest; however, the percentage of γ waves, which are associated with alertness, was as high as 12.92%. Table 6 shows the energy ratio of slow and fast waves in the EEG. α waves are the most basic rhythm wave in the brain. So, the key factor in determining the energy ratio is the content of the θ and β waves. All of the rhythm energy ratios were relatively large under the C3-I3 (3000 K + 300 lx) environment, especially the (α + θ)/β ratio, which reached up to 12.18. We found that the β wave content decreased, while the θ wave content increased significantly. The results indicate that computer work fatigue was the most obvious under the C3-I3 (3000 K + 300 lx) working conditions. HRV The HRV index system is usually divided into the time and frequency domains. The mean interbeat interval (MeanIBI), the standard deviation of IBIs (SDNN), and the root mean square of successive IBI differences (RMSSD) in the time domain decrease with an increase in cognitive load and fatigue; meanwhile, the low-frequency power (LF) and the ratio of low-frequency power to high-frequency power (LF/HF) in the frequency domain increase. In addition, fatigue can cause a rapid heartbeat (a high mean HR) and even arrhythmia. HRV Time-Domain Index Time-domain analysis is a linear analysis considering the signal as a function of time. The time-domain indicators were calculated according to Equations (3)-(5): MeanIBI = (1/N) Σ_{i=1}^{N} RR_i, (3) SDNN = sqrt( (1/(N−1)) Σ_{i=1}^{N} (RR_i − MeanIBI)² ), (4) RMSSD = sqrt( (1/(N−1)) Σ_{i=1}^{N−1} (RR_{i+1} − RR_i)² ), (5) where RR_i is the ith RR interval and N is the number of intervals in the analysis window. The time-domain indicators in HRV were extracted based on statistical analysis of the RR interval sequence, i.e., the time intervals between the peaks (or troughs) of adjacent R waves. The HRV time-domain indicators of the eight subjects under the four lighting environments are given in Table 7, and the trends of these indicators with the change in lighting environment are shown in Figure 9. We selected the ratio of low to high frequency (LF/HF) in order to measure the fatigue degree; the results are provided in Table 9. According to the LF/HF values in Table 9, it can be seen that the values at 3000 K are generally higher than those at 4000 K, and the LF/HF values at 3000 K fluctuate greatly, indicating that the fatigue degree is more obvious at 3000 K. Thus, it can be assumed that the color temperature has a greater impact on fatigue. Pupil Diameter Previous studies have shown that, in states of alertness or high concentration, the pupil diameter remains relatively constant. On the other hand, in a state of fatigue, the pupil diameter tends to decrease and the oscillation of pupil size increases [40]. This spontaneous pupil behavior has a strong correlation with human fatigue. In this study, a moving-window smoothing algorithm was used to denoise the pupil diameter in order to minimize the influence of noise on the signal. The pupil data obtained were processed into a time series; the result is shown in Figure 10. As shown in Figure 10, the pupil diameter under C3-I3 (3000 K + 300 lx) had a large oscillation amplitude. Meanwhile, it reached a minimum of 3.2 mm under C3-I5 (3000 K + 500 lx). At 4000 K, under both 300 lx and 500 lx, the pupil diameter varied very little and remained within a narrow range. This may be because a low-CCT environment can destabilize the physiological state, leading to a more rapid deepening of fatigue. Fixation Point The dispersion of fixation expands while the number of fixation points decreases with fatigue. The gaze point trajectories of the eight subjects under different lighting environments were superimposed. 
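A minimal sketch of the time-domain indicators in Equations (3)-(5) and of the LF/HF ratio used in Table 9 is given below. The RR values are hypothetical, and the LF and HF powers are assumed to come from a prior spectral analysis of the IBI series (conventionally 0.04-0.15 Hz and 0.15-0.4 Hz), which is not shown.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV indices (Equations (3)-(5)) from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "MeanIBI": rr.mean(),                   # Eq. (3)
        "SDNN": rr.std(ddof=1),                 # Eq. (4), sample standard deviation
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),  # Eq. (5), successive differences
        "MeanHR": 60000.0 / rr.mean(),          # beats per minute, for reference
    }

def lf_hf_ratio(lf_power, hf_power):
    """Frequency-domain fatigue indicator LF/HF."""
    return lf_power / hf_power

# Hypothetical RR intervals (ms) for one subject under one working condition
rr = [812, 795, 830, 821, 808, 790, 815, 799, 825, 810]
print({name: round(value, 2) for name, value in hrv_time_domain(rr).items()})
print(round(lf_hf_ratio(lf_power=420.0, hf_power=260.0), 2))   # 1.62
```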
As the gaze points for test 1 (letter distinguishment test) and test 2 (E-word memory test) were different, they had to be analyzed separately. The difference in gaze point trajectory between C3-I5 (3000 K + 500 lx) and C4-I3 (4000 K + 300 lx) was small; so, we selected C3-I3 (3000 K + 300 lx) and C4-I5 (4000 K + 500 lx), with a larger lighting environment span, for further analysis, as shown in Figures 11 and 12. In both test 1 and test 2, the gaze point trajectory analysis showed the same trend. Under C3-I3 (3000 K + 300 lx), the degree of gaze dispersion was obviously greater than under C4-I5 (4000 K + 500 lx), indicating that the fatigue level was lower under C4-I5 (4000 K + 500 lx). Task Test Results To better visualize the relationship between subjective fatigue and the task test error rate, the two were analyzed together. The corresponding results are shown in Figure 13. Although the decreasing trend in subjective fatigue values was not particularly pronounced (only around 0.093), the error rate of the task test under C3-I3 (3000 K + 300 lx) reached a peak value of 12%, while a minimum (5.3%) was achieved under C4-I5 (4000 K + 500 lx), a decrease of nearly 7 percentage points. The task test error rate and subjective fatigue reached a minimum under C4-I5 (4000 K + 500 lx), with values of 5% and 1.4321, respectively. As the color temperature and illumination increased, the error rate in the computer task test decreased and the fatigue of the subjects was relieved, indicating that, within a certain range of variation, the subjects preferred to perform computer work in an environment with higher color temperature and illumination. Discussion In this research, we mainly used artificial light source equipment to provide lighting. Thus, the influence of daylight fluctuation on fatigue was not fully investigated. Studying lighting environments that combine natural and artificial lighting should allow this effect to be explored more thoroughly. Moreover, this experiment lacked a finer-grained division of illumination and color temperature levels, and also lacked a more detailed analysis of the subjects. For a better understanding of the effect of the indoor lighting environment on computer work fatigue, further work under different control scenarios would be useful. Future research should consider two key aspects. First, more lighting environment factors should be considered, as it remains unknown which of the light factors are the most relevant to human fatigue. Secondly, the impact of the subjects' physical factors, such as age and degree of myopia, on the experimental results should be taken into account. In this study, all of the participants, except for one, were affected by myopia; however, no indication was provided about the degree of this impairment. Consequently, the results cannot provide further evidence regarding the possible relationship between lighting conditions and fatigue in this respect. In summary, in future research, participants from different age groups should be considered, and different combinations of natural and artificial light could provide a more realistic experimental environment. Conclusions This research was devoted to exploring the influence of different indoor lighting environments on computer work fatigue. Four working conditions (C3-I3, C3-I5, C4-I3, and C4-I5) were set up. 
The physiological signal data (EEG, ECG, and eye movement) of eight participants, combined with the results obtained from a subjective fatigue questionnaire and a task test under each working condition, were collected in real time in order to reasonably quantify computer work fatigue. Both the objective analysis of the physiological signal data and the subjective analysis of the questionnaires and task tests confirmed that fatigue was more likely to be induced in the C3-I3 environment. Improving the illumination and color temperature of the computer working environment can help to reduce fatigue and, thus, improve work efficiency. Therefore, the C4-I5 lighting condition is suggested for use in office spaces. The subjective fatigue under C3-I5 and C4-I5 was low, indicating that an appropriate warm color temperature and brightness can lead to a comfortable and relaxed feeling, which is beneficial for relieving fatigue. Under the color temperature of 4000 K, increasing the illumination can significantly improve the performance of computer operators, which was reflected in a direct reduction in the error rate from 8.4% to 5.3%. The difference between the LF/HF values under 3000 K and 4000 K was obvious, while the difference between the various illumination gradients under the same color temperature was not obvious. Furthermore, compared with the 4000 K environment, the EEG rhythm energy ratios at different levels of illumination under 3000 K showed significant variations. The above conclusions indicate that color temperature has a significant impact on fatigue.
6,409.2
2022-06-01T00:00:00.000
[ "Engineering" ]
Evaluation of the relationship between plasma glucagon-like peptide-2 and gastrointestinal dysbiosis in canine chronic enteropathies Chronic enteropathies are a common cause of morbidity in dogs and are associated with disruption of the normal gastrointestinal mucosal barrier. The objective of this prospective study was to determine the association between measures of gastrointestinal dysbiosis and plasma concentrations of glucagon-like peptide-2, a hormone responsible for normal mucosal structure, in dogs with chronic enteropathies. Fecal 16S V4 rRNA gene sequencing and quantitative PCR via the dysbiosis index was performed on 16 healthy controls and 18 dogs with chronic enteropathy prior to and 1 month after initiation of individualized therapy. Fasting and post-prandial plasma GLP-2 concentrations were measured via ELISA in healthy dogs and chronic enteropathy dogs at both time points. Alpha and beta diversity indices, as well as bacterial population abundances were compared between groups and time-points. Principal component analysis combined with least squares regression was used to identify taxa contributing to glucagon-like peptide-2 variance among groups. While the dysbiosis index did not differ between healthy dogs and dogs with chronic enteropathy, 16S V4 genomic sequencing identified 47 operational taxonomic units that differed between the groups, all but 2 of which resolved following chronic enteropathy treatment. Principal component analysis identified 6 families and 19 genera that contributed to differences in glucagon-like peptide-2 concentrations between groups. Dysbiosis associated with chronic enteropathies in dogs may contribute to the observed lower plasma glucagon-like peptide-2 concentrations. Further research into mechanisms of microbiota impact on the enteroendocrine system is needed. Association between glucagon-like peptide-2 secretion and microbiome indices may help to guide research into future treatment strategies for dogs with chronic enteropathy. 
Introduction Chronic idiopathic enteropathy (CE) is a common disease of dogs typically classified by response to treatment (e.g., diet-responsive, immunosuppressant-responsive), although its true pathogenesis is poorly characterized. The underlying etiology is likely multifactorial. The study was conducted with informed owner consent, which was obtained in writing at the time of enrollment [28]. The CE population included 9 dogs with either lymphoplasmacytic enteritis or gastroenteritis, 3 dogs with eosinophilic enteritis, 3 dogs with histiocytic or granulomatous enteritis, 2 dogs with undefined disease (i.e., no histopathology performed), and 1 dog with neutrophilic enteritis. Lymphoplasmacytic colitis was diagnosed in two dogs with concurrent small intestinal inflammatory disease. Inclusion criteria for the CE group followed standard protocols to exclude systemic disease and overt neoplastic or infectious enteropathies, including abdominal ultrasound performed by a board-certified veterinary radiologist in all dogs [28]. Testing for specific infectious etiologies (e.g., histoplasmosis), endocrine disease (e.g., hypoadrenocorticism), or pancreatic disease was performed at the discretion of the clinician managing the dog's case. Trypsin-like immunoreactivity was evaluated in 10 dogs. Healthy control dogs were adult dogs with a body condition score (BCS) of 4-6 out of 9 [29], normal baseline blood work, and no history of GI disease or any medications aside from routine heartworm, flea, and tick preventatives within the previous six months. Neither group had received raw food or raw food treats in the 6 months prior to study enrollment. When possible, a two-week trial with at least one novel protein or hydrolyzed diet was recommended prior to pursuing endoscopy; however, this was not a requirement of enrollment because some dogs were unwilling to eat a single diet. Experimental design In all dogs, pre-prandial whole blood samples were collected for plasma GLP-2 measurement following a 10-15 hour fast. Blood sample handling was as previously described to prevent in vitro GLP-2 degradation [24,28]. Fresh fecal samples were collected for microbiome analysis and dysbiosis index (DI). CE dog treatments were not standardized but rather determined by the clinician managing the dog's case (S1 Table). Approximately 30 days after starting targeted CE therapy, study procedures and sample collection were repeated in the CE population. The CE population was divided into pre-treatment (CE-PRE) and post-treatment (CE-POST) groups for analysis. The Kansas State University IACUC approved all study procedures (Protocol 4479). Plasma GLP-2 measurement A commercially available canine GLP-2 competitive ELISA kit (Canine GLP-2 ELISA Kit; Kendall Scientific) was used to measure plasma GLP-2 concentrations, as previously described [28]. In brief, after allowing plasma to thaw at room temperature for one hour, samples were analyzed in duplicate following manufacturer instructions. Fifty microliters of plasma were added per sample well (i.e., 100 μL total with duplication). A microplate reader (BioTek Epoch) was used to determine optical density at a wavelength of 450 nm immediately after the addition of stop reagent. 
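The kit readout described above yields optical densities that still have to be converted to concentrations against a standard curve; that conversion step is not detailed here, so the sketch below assumes a conventional four-parameter logistic (4PL) fit of kit standards, which is typical for competitive ELISAs but is an assumption rather than the kit's documented procedure. All standard concentrations and OD values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic: OD as a function of concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical kit standards: concentration (ng/mL) vs mean optical density at 450 nm
std_conc = np.array([0.31, 0.62, 1.25, 2.5, 5.0, 10.0])
std_od = np.array([1.92, 1.75, 1.48, 1.12, 0.74, 0.41])   # competitive ELISA: OD falls as concentration rises

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.0, 1.0, 2.0, 0.2], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL curve to recover concentration from a sample OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.95   # mean of duplicate wells for one plasma sample
print(round(float(od_to_conc(sample_od, *params)), 2), "ng/mL (illustrative)")
```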
Fecal sample collection, handling, and storage Fecal samples analyzed in this study were collected at the time of defecation, immediately refrigerated at 4.5-5 °C, stored at -80 °C within 12 hours of defecation, and shipped on dry ice for DI (Texas A&M Gastrointestinal Laboratory) and comprehensive fecal microbiome analysis (Microbiome Insights). Samples were stored for <6-24 months prior to analysis. Fecal dysbiosis index The fecal DI was calculated following quantitative PCR (qPCR) for total bacteria and specific bacterial taxa (i.e., Faecalibacterium, Turicibacter, Streptococcus, Escherichia coli, Blautia, Fusobacterium, and Clostridium hiranonis) according to standard laboratory protocol as a commercially available test (Texas A&M Gastrointestinal Laboratory). The DI was calculated using a previously described algorithm defining differences between healthy dogs and dogs with CE [9]. Consistent with previous work, a DI <0 was considered normal and >2 was considered dysbiosis [9]. Fecal DNA extraction and 16S rRNA gene sequencing Fecal DNA extraction and amplicon sequencing based on 16S V4 rRNA (Illumina MiSeq) was performed by Microbiome Insights in a College of American Pathologists accredited laboratory. Fecal samples were placed into a MoBio PowerMag Soil DNA Isolation Bead Plate, and DNA was extracted using a KingFisher robot per manufacturer instructions. Bacterial 16S rRNA genes were PCR-amplified with dual-barcoded primers targeting the V4 region (515F 5'-GTGCCAGCMGCCGCGGTAA-3' and 806R 5'-GGACTACHVGGGTWTCTAAT-3') [30]. Amplicons were sequenced with an Illumina MiSeq using the 300-bp paired-end kit (v.3). Sequences were denoised, taxonomically classified using Silva (v.138) as the reference database, and clustered into 97%-similarity operational taxonomic units (OTUs) (Mothur software package v. 1.44.1) [31]. The OTUs were then classified into taxonomic assignments. Sequencing quality was determined using FastQC 0.11.5 prior to classification and subsequent analysis. Template-free negative controls were co-sequenced with DNA amplified from samples using the same procedures to assess for possible contamination. A positive control from 'S00Z1-' samples, consisting of cloned SUP05 DNA, was also included. An OTU was considered a contaminant and removed from analysis if the mean abundance in controls reached or exceeded 25% of the mean abundance in samples. Statistical analysis Dysbiosis index and qPCR. Data were tested for normality using the Shapiro-Wilk test; non-parametric analyses were used where data were not normally distributed. The Wilcoxon matched-pairs signed-ranks test was used to compare the DI and logDNA of individual bacterial species between CE-PRE and CE-POST. The Mann-Whitney U test was used to compare the DI and logDNA of individual bacterial species between CE-PRE and HC. Bonferroni correction for multiple comparisons resulted in a P value of <0.003 for significance. Analyses were performed using commercial statistical software (GraphPad Prism v10.1.0). 
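The comparisons just described map directly onto standard SciPy routines; the sketch below uses hypothetical logDNA vectors and simply mirrors the stated workflow (Shapiro-Wilk normality screen, Wilcoxon signed-rank for paired CE-PRE/CE-POST data, Mann-Whitney U for CE-PRE vs. HC, and the Bonferroni-corrected significance threshold). It is an illustration, not the analysis actually run in GraphPad Prism.

```python
import numpy as np
from scipy import stats

# Hypothetical logDNA values for one taxon (e.g., Turicibacter)
hc      = np.array([7.1, 6.8, 7.4, 6.9, 7.2, 7.0, 6.7, 7.3])   # healthy controls
ce_pre  = np.array([5.9, 6.1, 5.4, 6.3, 5.8, 6.0, 5.5, 6.2])   # CE dogs at baseline
ce_post = np.array([6.4, 6.0, 5.9, 6.6, 6.1, 6.3, 5.8, 6.5])   # same dogs after treatment

# Normality screen; non-parametric tests are used below regardless, as in the text
print("Shapiro-Wilk p (CE-PRE):", round(stats.shapiro(ce_pre).pvalue, 3))

# Paired comparison: CE-PRE vs CE-POST
w_stat, w_p = stats.wilcoxon(ce_pre, ce_post)
# Unpaired comparison: CE-PRE vs HC
u_stat, u_p = stats.mannwhitneyu(ce_pre, hc, alternative="two-sided")

alpha_bonferroni = 0.003   # corrected threshold reported in the text
print("Wilcoxon p:", round(w_p, 4), "significant:", w_p < alpha_bonferroni)
print("Mann-Whitney p:", round(u_p, 4), "significant:", u_p < alpha_bonferroni)
```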
Fecal DNA extraction and 16S rRNA gene sequencing. Statistical analysis was performed by Microbiome Insights. An analytical flowchart is included in the supplementary material (S1 Fig). Alpha diversity was estimated with the Shannon index on raw OTU abundance tables. Shannon diversity was compared among groups (HC, CE-PRE, CE-POST) using an ANOVA accounting for repeated measures, with subsequent pairwise testing. To estimate beta diversity across samples, Bray-Curtis indices were computed after excluding OTUs with a count of less than 3 in at least 10% of the samples. Beta diversity was visualized using principal coordinate analysis (PCoA) ordination, emphasizing global differences in fecal microbial communities across samples. Variation in community structure was assessed with permutational multivariate analyses of variance (PERMANOVA) using treatment group as the main fixed factor and using 999 permutations for significance testing. Post-hoc pairwise testing was performed with FDR correction for multiple comparisons. The Linda function from MicrobiomeStat was used to identify differentially abundant taxa using a linear model on centered log ratio transformed data. All analyses were conducted in the R environment (Version 4.1.2). GLP-2 and microbiome comparison. Analyses were performed using commercially available statistical software (GraphPad Prism v10.1.0). Principal component analysis was performed at the phylum, family, and genus levels on taxa identified as differentially abundant in the above analyses and on the logDNA of taxa included in DI analyses. A least squares regression model with GLP-2 concentration as the dependent (outcome) variable and parallel analysis as the component selection method was used. Regression coefficients were converted to the scale of the original variable. A value of P < 0.05 was considered significant. Evaluation of the fecal microbiome A total of 46 fecal samples [16 samples from HC dogs and 30 samples from a total of 18 CE dogs, 16 pre-treatment (CE-PRE) and 14 at follow-up (CE-POST)] were utilized for Illumina sequencing and qPCR assays. Of the 18 CE dogs, 2 dogs only had post-treatment fecal samples analyzed due to inadequate fecal sampling at baseline, and 4 dogs had only pre-treatment fecal samples analyzed due to study drop-out. The remaining dogs had both pre- and post-treatment samples analyzed. Illumina sequencing yielded a total of 1,282,880 sequences with an average of 19,935 quality-filtered reads per sample. The resulting dataset had 1760 OTUs, including singletons, which were divided into various taxa. A total of 6 phyla and 52 genera were identified (S2 Fig). 16S V4 rRNA sequencing Differential abundance analysis of the pre- and post-intervention samples revealed 47 OTUs that differed significantly between CE-PRE dogs and HC. All but three OTUs exhibited a relative decrease in abundance in CE dogs compared to HC; the three exceptions were within the genera Anaerostipes, Clostridioides, and Escherichia-Shigella. At follow-up, only 2 OTUs significantly differed in abundance from HC in CE dogs, including a relative decrease in Pygmaiobacter and a relative increase in Escherichia-Shigella (Fig 1; S2 Table). Discussion Through this study, we present the first association between plasma GLP-2 concentration and fecal microbiota populations in dogs with CE. We compared the fecal microbiome of dogs with CE prior to and following approximately one month of individualized treatment to that of healthy dogs. Through concurrent analysis of plasma GLP-2, we identified microbiota populations that contributed to variance in GLP-2 concentrations between CE dogs and HC. Results also highlighted limitations in commercially available assays to predict dysbiosis. 
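The diversity and association analyses summarized in the Methods above were run in R and GraphPad Prism; purely as an illustration of the same computations, the sketch below derives Shannon diversity and Bray-Curtis dissimilarity from a hypothetical OTU count table and then regresses GLP-2 concentration on principal components of selected taxa abundances. NumPy/scikit-learn are stand-ins for the software actually used, and all data values are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical OTU count table: rows = samples, columns = OTUs
counts = rng.poisson(lam=20, size=(10, 50)).astype(float)

# Shannon index per sample (alpha diversity)
p = counts / counts.sum(axis=1, keepdims=True)
log_p = np.log(p, out=np.zeros_like(p), where=p > 0)
shannon = -np.sum(p * log_p, axis=1)

# Bray-Curtis dissimilarity matrix (beta diversity)
bray_curtis = squareform(pdist(counts, metric="braycurtis"))

# PCA of (hypothetical) differentially abundant taxa, then least squares
# regression with GLP-2 concentration as the outcome variable
taxa = counts[:, :19]                                   # stand-in for 19 selected genera
glp2 = rng.normal(loc=1.2, scale=0.3, size=10)          # hypothetical plasma GLP-2 (ng/mL)
scores = PCA(n_components=3).fit_transform(taxa)
model = LinearRegression().fit(scores, glp2)

print("Shannon (first 3 samples):", np.round(shannon[:3], 2))
print("Bray-Curtis matrix shape:", bray_curtis.shape)   # (10, 10)
print("R^2 of GLP-2 on 3 components:", round(model.score(scores, glp2), 2))
```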
Consistent with previous studies [5][6][7][8][9][10], both the DI (and its associated qPCR) and the 16S V4 rRNA analyses highlighted differences in the GI microbiota of CE dogs compared to HC. In contrast to previous work [32], however, the utility of the DI to accurately reflect dysbiosis in CE dogs and differences between CE dogs and HC was limited in this study. While there was no significant difference in DI value between groups, the median index value was numerically lower (i.e., more normal) in HC dogs, and the lack of statistical difference therefore likely reflects the small sample size and correction for multiple comparisons. Interestingly, three HC dogs had a DI between 0-2, reflecting a grey zone or mild dysbiosis value, and one HC had a DI >2, reflecting dysbiosis. None of those dogs had any abnormalities in individual taxa. When using the DI as a diagnostic for dysbiosis in CE dogs, only 5 out of 16 index values were elevated at baseline. When evaluating individual taxa, the decreased Turicibacter abundance at baseline in CE dogs is consistent with known disturbances in dogs with GI disease [9]. Notably, none of the individual taxa abundances significantly changed in CE dogs between baseline and recheck evaluation; although, Blautia tended to decrease at recheck, which contrasts with what one would expect for resolution of dysbiosis. Furthermore, 8 out of 14 index values were consistent with mild or overt dysbiosis at study reevaluation. The bacterial groups that contributed to the shifts in DI varied among individual dogs, which is likely why group-wide shifts in bacterial abundances were not appreciated. The minimal change or worsening in DI occurred despite the clinical improvement documented through the canine inflammatory bowel disease activity index (CIBDAI) and canine chronic enteropathy clinical activity index (CCECAI) in most dogs [28]. This underscores the importance of considering factors that could impact the microbiome, such as antimicrobials and other medications used for CE treatment, in addition to GI disease status when utilizing the DI as a monitoring tool. A normal DI in over 30% of CE dogs has, however, been reported in previous studies as well [5-7, 32, 33]. 
In contrast to the few microbiome differences detected between CE dogs and HC through DI analysis and qPCR, numerous shifts in bacterial taxa were noted in the CE-PRE group compared to HC through 16S V4 rRNA analysis, predominantly reflected in decreased OTU abundances. As only two OTUs differed between the CE-POST and HC groups, 16S V4 rRNA analysis suggested an improvement in dysbiosis not reflected by DI analysis. 16S V4 gene sequencing identified significant decreases in Faecalibacterium, Turicibacter, and Fusobacterium, whereas only decreased Turicibacter abundance was identified via qPCR analysis. This likely reflects the inherent differences in methodology between qPCR assays and 16S V4 rRNA gene sequencing; though, differences related to the region of the 16S gene that is targeted should also be considered. Importantly, and relevant to the results of this study, 16S V4 rRNA sequencing has advantages in identifying microbiota that contribute to disease manifestation, particularly in the early stages of exploring a disease process. Using 16S V4 rRNA sequencing, this study demonstrated positive associations of circulating GLP-2 concentrations with increasing relative abundance of 10 genera and 6 families. All 10 genera (Catenibacterium, Erysipelotrichaceae_UCG-003, Faecalibacterium, Holdemanella, Lachnospira, Megamonas, Peptococcus, Prevotella_9, Pygmaiobacter, Turicibacter) were significantly decreased in CE-PRE. A significant negative association with GLP-2 concentration was observed with the abundance of 8 genera, including Escherichia-Shigella, Clostridioides, and Anaerostipes, the only 3 genera that were significantly increased in CE-PRE. These findings suggest that increases in plasma GLP-2 following CE treatment are associated with a trend toward normobiosis. Worth noting is that all 5 of the bacterial groups that were both significantly decreased in dogs with CE-PRE and negatively associated with GLP-2 concentrations (Blautia, Lachnospiraceae, Butyricicoccus, Erysipelatoclostridium, and Erysipelotrichaceae) are butyrate-producing bacteria within the phylum Firmicutes [34,35]. This was an unexpected finding given that butyrate strongly stimulates GLP-2 secretion [36]. As expected, other SCFA-producing bacteria, such as Faecalibacterium, Turicibacter, Ruminococcaceae, and Lachnospira, showed strong positive associations with GLP-2 concentrations and were significantly decreased in CE-PRE dogs. These findings suggest that the latter populations may have a greater impact on enteroendocrine responses in dogs than the above-mentioned butyrate-producing species. However, it is also possible that the specific metabolic by-products are more important for GLP-2 secretion than individual microbiota. Metabolome analyses for SCFA in conjunction with GLP-2 concentrations could more clearly define this relationship. Clostridium hiranonis, currently known as Clostridioides hiranonis, is a bile acid converter commonly associated with diet-responsiveness in CE dogs [10,37]. Although C. 
hiranonis was not significantly different between CE-PRE and CE-POST samples or between CE dogs and HC based on DI analysis, this was the most common bacterial species outside the normal relative abundance reference range in either CE-PRE (n = 4) or CE-POST (n = 6) fecal samples. Antimicrobial administration has been associated with decreased fecal secondary bile acids due to the high antimicrobial sensitivity of C. hiranonis [38,39], and three dogs in our study received antimicrobials as part of their CE treatment [28]. All three dogs had normal C. hiranonis abundances at presentation but severely decreased concentrations at follow-up (range, 0.1-1.5 logDNA [reference interval 5.1-7.1]). As intraluminal bile acids stimulate GLP-2 secretion from L-cells [15], lack of C. hiranonis normalization could explain the lack of complete GLP-2 normalization in CE-POST dogs. This cannot be confirmed, however, as none of the OTU analyzed via qPCR were significantly associated with variance in GLP-2 concentrations among groups. Further, metabolome analysis for secondary bile acids would be needed to confirm this relationship in dogs. Loss of the commensal C. hiranonis has been associated with increased pathogenic species such as C. difficile, C. perfringens, and E. coli [37]. Potentially consistent with this observation, 16S V4 rRNA gene sequencing revealed an increase in Clostridioides in CE-PRE dogs compared to controls, which resolved with treatment. Quantitative analysis for distinct bacterial species would be required to further define the populations within this genus that contributed to the change. Escherichia-Shigella was the only genus that remained significantly increased in CE-POST dogs. Escherichia-Shigella has been negatively associated with GLP-2 secretion in mouse models [40], consistent with our findings in dogs. Persistent elevation of Escherichia-Shigella in CE-POST dogs may partially explain the failure of GLP-2 to normalize in these dogs despite significant improvement from pre-treatment GLP-2. 
This study was limited by its small sample size, which was not recruited specifically for the objectives of this study, but rather to determine differences in GLP-2 concentrations among groups, with the expectation that shifts in microbiota would be related to GLP-2 changes. Therefore, results may underestimate significant changes in the microbiome associated with disease, treatment, and circulating GLP-2. This may be particularly relevant where P values adjusted for multiple comparisons did not demonstrate a significant difference where a standard P value of <0.05 would have demonstrated a statistically significant difference (e.g., lack of statistical difference in Blautia spp. abundances between CE-PRE and HC). These analyses might demonstrate significant differences in a larger study population. Lack of standardized treatment may likewise differentially affect the microbiome, especially in the three patients that received antimicrobials, which had some of the highest increases in post-treatment DI, rather than normalization. As type and severity of GI disease influence fasting and post-prandial GLP-2 secretion patterns in humans with IBD [22][23][24][25][26][27], including a range of disease in our population, rather than focusing on dogs with ileal disease, may have limited the ability to observe relationships between microbiota and GLP-2 concentrations. However, dogs have a higher concentration of L-cells more proximally in their GI tract (i.e., jejunum) than other species; therefore, it may not be possible to extrapolate disease localization and impact on GLP-2 secretion among species. Inclusion of larger numbers of dogs with individual disease subsets, such as protein-losing enteropathies, and implementation of standardized treatments are considerations for future studies. However, this population of dogs more accurately represents those encountered in a clinical setting, making the findings more applicable to the canine CE population as a whole. It is also possible that the duration of the study did not allow sufficient time for complete resolution of dysbiosis. One study showed that the microbiome of CE dogs still differed significantly from controls despite clinical remission after 8 weeks of treatment [7]. At a 1-year follow-up, no difference was observed between CE dogs and HC. Extending the follow-up time may result in a stronger correlation between dysbiosis and GLP-2 secretion, particularly as GLP-2 had also not normalized at the 30-day follow-up [28]. Finally, the methods used to assess the GI microbiome may have underrepresented certain bacteria that play an important role in GLP-2 secretion. For example, Akkermansia muciniphila is a mucin-degrading bacterium that resides in the mucus layer, making it unlikely to be accurately represented via fecal microbiome analysis. In mouse models, A. 
muciniphila has been shown to increase intestinal concentrations of 2-oleoylglycerol, which stimulates GLP-2 secretion from L-cells [17,18]. While A. muciniphila has not been documented as a component of the normal canine GI microbiome, other mucin-degrading bacteria could contribute to this role in dogs. Despite its limitation in identifying these populations, fecal microbiome analysis has the benefit of being non-invasive and is a routinely accepted method of microbiome description in dogs. Lastly, microbiome studies vary in analytical approach. While we chose to evaluate microbiota clustering through OTUs and focus on abundance differences through Bray-Curtis dissimilarity analysis, assessments using amplicon sequence variants or phylogenetic relationships through UniFrac distance analyses could result in differences in diversity outcomes or identify functionally related microbiota groups of importance. Conclusions Overall, this study demonstrated both positive and negative correlations between circulating GLP-2 concentrations and specific fecal OTUs in dogs. The association between increased plasma GLP-2 and normalization of dysbiosis lends additional support for the microbiome as a target for canine CE management. However, this study highlighted the limitations of commercially available assays for dysbiosis description. At the present time, their use for disease exploration in this area as an alternative to 16S V4 rRNA sequencing methods cannot be recommended. Future studies targeting the genera identified in this exploratory study, as well as metabolome analyses, are warranted. Correlating findings to assessment of mucosal barrier function would be particularly useful in determining a potential therapeutic benefit to CE dogs. Fig 1. Fecal 16S V4 rRNA differential abundance testing in dogs with and without chronic gastrointestinal disease. Panel (A) represents the log fold changes in OTU differential abundances identified by 16S V4 rRNA genomic sequencing as significantly different between 16 dogs with untreated chronic enteropathy and 16 healthy controls. Panel (B) represents significantly different OTU abundances (log fold change) in 14 chronic enteropathy dogs after 30 days of individualized gastrointestinal disease treatment (CE-POST) and healthy controls. Each bar denotes either the degree of increase (blue) or decrease (orange) of OTU differential abundance in the chronic enteropathy group compared to controls. https://doi.org/10.1371/journal.pone.0305711.g001 Fig 2. Boxplot of fecal microbiome Shannon diversity indices does not demonstrate differences between healthy dogs and dogs with chronic enteropathies. Mean and standard deviation Shannon index value estimates of fecal alpha diversity as measured by 16S V4 rRNA genomic sequencing in 16 healthy control dogs (HC), 16 untreated chronic enteropathy (CE-PRE) dogs, and 14 CE dogs after 30 days of individualized gastrointestinal disease treatment (CE-POST). No significant differences (P > 0.05) were noted between groups. https://doi.org/10.1371/journal.pone.0305711.g002 Fig 4. Principal component analysis (PCA) of the fecal microbiome using qPCR based on disease status. Fecal bacterial abundances (logDNA) detected by qPCR through dysbiosis index analysis were used to detect differences between 16 healthy dogs (green) compared to 16 dogs with untreated chronic enteropathies (CE-PRE; blue) and 14 CE dogs after 30 days of individualized gastrointestinal disease treatment (CE-POST; red). A least squares regression model performed on PCA with GLP-2 concentration as the outcome did not identify any taxa with significant contribution to GLP-2 variance. https://doi.org/10.1371/journal.pone.0305711.g004 Fig 5. 
Principal component analysis (PCA) and biplots of fecal 16S rRNA gene sequencing in dogs with untreated chronic enteropathies (CE-PRE; blue), after 30 days of individualized chronic enteropathy treatment (CE-POST; red), and healthy dogs (green). Panel (A) represents Family level analysis. Panel (B) represents Genus level analysis. A least squares regression model performed on PCA with GLP-2 concentration as the outcome identified 6 families and 10 genera that positively contributed to GLP-2 variance. Nine genera were identified that negatively contributed to GLP-2 variance. https://doi.org/10.1371/journal.pone.0305711.g005 Table 1. Dysbiosis index and qPCR. Columns: Chronic Enteropathy (PRE, POST), Healthy Control, and P values (CE PRE vs POST, CE PRE vs HC).
5,108.6
2024-06-27T00:00:00.000
[ "Medicine", "Biology" ]
Some Properties for the American Option-Pricing Model In this paper we study global properties of the optimal exercising boundary for the American option-pricing model. It is shown that a global comparison principle with respect to time-dependent volatility holds. Moreover, we prove a global regularity result for the free boundary. Introduction It is well known that, for the American option-pricing model, there is an optimal holding region for contract holders (see [1][2][3][4][5]). Part of the boundary of this region is unknown (the free boundary), and it is often referred to as the optimal exercising boundary by option traders. This free boundary has to be calculated along with the option price of the security. The mathematical model for the problem is highly nonlinear and there is no explicit solution representation even when the volatility and interest rate are assumed to be constants (see [4]). On the other hand, for the financial world as well as for its intrinsic interest, it is extremely important to find the location of the free boundary along with the option price of the security. In particular, people would like to know how the price of a security changes near the option expiry time, since it may change dramatically [6,7]. During the past few decades, many research papers have been devoted to various option-pricing models. There are several monographs devoted to this topic (see, for example, [1,3,4,8]). For the American option model as well as its generalizations, existence and uniqueness have been studied by many researchers (here are just a few examples: [2,5,9-12]). A basic fact is that the American option-pricing model can be reformulated as a variational inequality of parabolic type. Hence, many known results about existence and uniqueness can be applied to the model. However, the disadvantage of this method is that it gives no information about the free boundary. To overcome this shortcoming, several authors employed other methods to establish the existence and uniqueness for the problem (see [7,[13][14][15][16][17]). Because of its practical importance, many researchers paid special attention to the asymptotic behavior of the free boundary near the expiration time (see [6,[18][19][20][21][22][23][24][25]). Moreover, various numerical computations for the location of the free boundary have also been carried out (see, for example, [14,[25][26][27][28] and the references therein). More recently, global properties of the free boundary have attracted some interest. The authors of [29,30] proved that the free boundary is convex if the volatility in the model is assumed to be a constant. However, this global property is not valid in the real financial market, since the volatility depends on time and other economic factors. When the volatility depends on time and the security, the problem becomes much more challenging. In this paper we would like to study some global properties of the free boundary. We want to find how the optimal exercising boundary changes when the volatility changes during the lifetime of the option contract. This question is very important for structured products in the financial world. We first recall the classical American option-pricing model with one security or one type of asset. Let V(s, t) be the option price for a security, such as a stock, with price s at time t. Then it is well known that V satisfies the Black-Scholes equation with no dividend [31,32], ∂V/∂t + (1/2)σ²s²∂²V/∂s² + rs∂V/∂s − rV = 0, in the region defined below, where r is the interest rate and σ represents the market volatility of the stock. 
  , V s t  0, called the optimal exercising boundary. On the free boundary   = s S t , we know from the continuity of the option price that satisfies: where K is the striking price.We also know the payoff value at the terminal time once the striking price is given: For later use, we introduce : where In financial markets, the volatility  plays a major role for the option pricing model.Option price often changes dramatically when the stock market is in a chaotic movement.This was the case when the flash-crash happened on May 6, 2010 as well as the case on Oct. 19, 1987.On the other hand, for a relatively stable market, the volatility mainly depends on time.This is particularly true for an index fund such as S&P500 index in the U.S. market.Hence, we assume that throughout this paper.Our question is how the free boundary changes when the volatility   t  changes during the life-span of the option contract.We show that there is a global comparison principle for the free boundary with respect to the change of volatility .Moreover, a global existence result is also established as a by-product.Our proof is based on the line method (see [15]), which is different from existing literature (see [21,13] and the references therein).Although the existence of a solution for the problem is already known, our method does have several advantages.One of them is that the free boundary is determined along with the option price at each discrete time simultaneously.Moreover, a global regularity for the free boundary is also obtained.To author's knowledge, this regularity result is new and optimal (see [19,21,12]). t   The paper is organized as follows.In Section 2, we construct a sequence of approximation solutions by using the line method.After deriving some uniform estimates, a global existence is established.Moreover, an optimal global regularity for the free boundary is also obtained.In Section 3, we first derive some comparison properties for the approximation solution and then show that the limit solution preserves the same property.Some concluding remarks are given in Section 4. Remark 1.1:After this paper is completed, the author le Existence and Uniqueness ased on the discrete owing conditions are always assumed throughou arned that E. Ekströn proved a result in [33] (2004) about the monotonicity of option price with respect to volatility.However, there is no result about the comparison result for the free boundary.Moreover, the method in [33] is totally different from ours here.In addition, we also present a regularity result for the free boundary. Since our argument in Section 3 is b problem, we give the complete details about the construction of the approximation solution sequence.We also show that the approximation sequence is convergent to the solution of the original problem (1.1)-(1.5).As a byproduct, an optimal regularity of the free boundary is obtained. The foll t this paper. Now we construct an approximate solution sequence by teger.Divide using the line method. Let N be a positive in If we use difference quotient to approximate and This leads us to define the approximate solution   n V s and n S as follows: m the terminal conditio Fro n, we know So we define . 
Suppose we have obtained and where w exte d n  It see th e above free boun ary problem (2 erpolation to define the free bo for each n .Actually, since the problem is ional one n find the solution   n V s and n S explicitly (see [4] for detailed calculation Now we use the int undary   N S t as follows:   We also is convergent to the solution of the dary problem (1.1)-(1.5).To this end, we need to derive some uniform estimates. Lemma 2.1: Proof the definition, we see if T. Suppose we have shown that then at this minimum point, we see which contradicts the right-hand side of the Equation (2.1).It follows that other hand, we We assume that   n, suppose that .The   V s attains at an interior poin where depends only on known data, but not on his estimate is similar to the energy esti te r a parabolic equation.Indeed, we introduce new variables: = ln , = . Then the original free boundary problem .1)-(1.5) is eq (1 uivalent to the following one: where Lemma 2.4: There exists a constant su that On the other hand, by the definition we know It follows that Thus, . Now we can extend , we use the continuity of where if (2.9) a classical parabolic equation (see [34], estimate (5.15) 137) a n the desired energy estimate.By the definition, we see clearly that where depends only on known da , but not on .Proof: Note that is uniformly Lipschitz con- . We may assume that   It follows that satisfies the following equations: (2.10) (2.11) The maximum principle yields that where depends only on the known data an From the boundary condition (2.5), we see From the Equation (2.4) and the boundary conditions (2.5) and (2.6), we see which is uniformly bounded.By differentiating Equation (2.4) with respect to x   1 = 0, , . where depends only on known data and the 138).The uniq s llows from the variational inequality.Moreover, regularity theory for parabolic equation implies that Moreover, since the coefficients of the Equation (2.4) depends only on  , we use the interior re ularity of para g bolic equations to conclude that To see the regularity of the free boundary, we use Lemma 2.5 to see Hence, by Ascoli-Arzela's lemma, we can extract a subsequence, still denoted by , such converges to a function, denoted by   S t .Moreover, where C depends only on kn ata   own d , and p  .Now we convert back to the origin variables to conclude that al To see more regularity for , we use the boundary condition (2.5)-(2.6).Ind the condition (2.5)-(2.6),we see From the Equation (2.4) we obtain Now we consider the free boundary problem for It is easy to see that a unique solution exists with oblem (1 To prove the theorem, we show that the c mparison property holds for the discrete solution und r certain co o e ndition. Lemma 3.1: Proof: If necessary, we may use an approximation to replace by a smooth c vex function on Without loss of generality, we may simply assume     .Th from the regularity theory, we know th From the maximum principle, we see that . We differen = tiate Equation (2.1) for with respect to = n N n to obtain: The maximum principle implies that , , t S (1.5)  a When the volatility is a constant, it has been known for a long time that the option price is b e volatility is bigger.However, when the of time,   = t   nor the optimal excise latility changes for the whole time period   0,T .In this paper we answered such a question.We show that a comparison property for option price and the optimal excising bound (Theorem 3.1) when the volatility . 
This result is important for option traders.Moreover, we proved a global regularity result for the free boundary by using a very different method from the existing literature.ledgements Some results in this paper were reported at the international conference "Problems and Challenges in Financial Engineering and Ri ary hold Acknow sk Management" held in Tongji 1.The author would like to an, Professor Xinfu Chen, all, New York, 2008. which are exactly the same as for on page and the bound depends on known data and  .Now we can m is uniformly b nded.One can also use the same argument for W to conclude the es- After a f r of steps, we obtain the desired result of Lemma 3.1.Since we are interested in the relation between Q.E.D. Now we are ready to prove the main theorem in this section. Properties of Free Bo dary .
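As a purely illustrative complement to the discrete construction discussed in this paper, the sketch below prices an American put on a standard CRR binomial tree and reads off an approximate optimal exercise boundary at each time step. This is a textbook discretization chosen for brevity, not the line method or the comparison argument of the paper, and all parameter values are arbitrary.

```python
import numpy as np

def american_put_boundary(K=100.0, r=0.05, sigma=0.3, T=1.0, N=200):
    """Price an American put on a CRR binomial tree rooted at S0 = K and
    extract an approximate optimal exercise boundary S(t_n) at each step."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = np.exp(-r * dt)

    # terminal stock prices and payoffs
    j = np.arange(N + 1)
    V = np.maximum(K - K * u ** (2 * j - N), 0.0)

    boundary = np.full(N + 1, np.nan)
    boundary[N] = K                           # the boundary tends to K at expiry
    for n in range(N - 1, -1, -1):
        j = np.arange(n + 1)
        S = K * u ** (2 * j - n)
        cont = disc * (p * V[1:n + 2] + (1 - p) * V[:n + 1])
        exercise = np.maximum(K - S, 0.0)
        V = np.maximum(cont, exercise)
        # nodes where early exercise is optimal (strictly in the money)
        ex_nodes = np.where((exercise > 0.0) & (exercise >= cont))[0]
        if ex_nodes.size:
            boundary[n] = S[ex_nodes.max()]   # highest such price approximates S(t_n)
    return V[0], boundary

price, bnd = american_put_boundary()
print(f"American put value at S0 = K: {price:.4f}")
print("Approximate exercise boundary near t = 0:", np.round(bnd[:3], 2))
```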
2,640.4
2012-08-31T00:00:00.000
[ "Mathematics" ]
Revisiting Additive Consistency of Hesitant Fuzzy Linguistic Preference Relations : Consistency has always been a hot topic in the study of decision-making based on preference relations. This paper focuses on the consistency of hesitant fuzzy linguistic preference relations (HFLPRs). Firstly, a new definition of the additive consistency of HFLPRs is given. Secondly, to examine whether an HFLPR is additively consistent, two equivalent programming models are constructed. Thirdly, for inconsistent HFLPRs, the corresponding consistency improvement model is further proposed, where only upper triangular elements in the HFLPRs are considered in view of the symmetry of HFLPRs. Using the consistency improvement model, an inconsistent HFLPR can be adjusted to the consistent one, which retains the original information as much as possible. Fourthly, a hesitant fuzzy linguistic weight vector is introduced and a programming model is constructed to derive the weight vector. Finally, the feasibility and effectiveness of the proposed method are illustrated by numerical examples and comparative analysis. This result demonstrates that the consistency model proposed considers each element of HFLPRs such that the consistent HFLPRs derived fully retain the original information. Moreover, only some preference values in the HFLPR are adjusted, and no preference value is out of range of the predefined HFLTSs. Introduction Torra [1] proposed the concept of hesitant fuzzy sets (HFSs), which allows several values between [0, 1] to indicate its membership. In reality, decision-makers (DMs) may show hesitant preference for alternatives on account of various factors in the decisionmaking process. Hence, HFS has been widely used as a tool to express DMs' hesitation. Due to the complexity of real decision-making situations, DMs may have difficulty in assigning appropriate numerical values to express their preferences. In this case, DMs tend to use linguistic terms rather than numerical values to express preferences. Due to the above considerations, Rodriguez et al. [2] proposed the hesitant fuzzy linguistic term sets (HFLTSs), which allow DMs to express their preferences by using several possible linguistic variables. Therefore, with the advantages of both HFSs and fuzzy linguistic sets, HFLTSs enable DMs to express their preferences more flexibly and conveniently in decision-making. Based on HFLTSs, Rodriguez et al. [2] developed the concept of hesitant fuzzy linguistic preference relations (HFLPRs) as a tool to deal with DMs' hesitant degree of preference for several possible linguistic terms over the paired of alternatives. Recently, more and more research [3][4][5][6][7][8][9][10][11][12][13][14] on HFLPRs has been performed. Dong et al. [3] proposed a new distance formula to measure the distance between two HFLTSs and developed the consensus level measure of HFLPRs. Song et al. [4] proposed a definition of multiplicative consistency of HFLPRs. Tang and Meng [5] introduced some definitions of multiplicative hesitant fuzzy linguistic preference relations (MHFLPRs) and corresponding consistency definitions. Wu et al. [6] proposed a new formula to measure the similarity between HFLTSs and a consensus improvement process based on the local modification mechanism. Tang et al. [7] proposed a new definition of interval linguistic hesitant fuzzy preference relations (ILHFPRs) and a concept of additive consistency, and then, Tang et al. 
[8] proposed a new definition of multiplicative interval linguistic hesitant fuzzy preference relations (MILHFPRs) and discussed the consistency issue. Chen et al. [9] considered the worst consistency index (WCI) of HFLPRs and constructed two models to improve the WCI and the consensus level. Zhang and Chen [10] proposed a group decision-making (GDM) method based on the acceptable multiplicative consistency and consensus of HFLPRs. Ren et al. [11] proposed a kernel-based algorithm and a consensus measure for HFLPRs. Zheng et al. [12] proposed hesitant degree and fuzzy degree functions of HFLTSs and established a model to normalize different lengths. Xu et al. [13] proposed a new AHP method and some models to improve the consistency and consensus levels. Li et al. [14] proposed a model to obtain the missing elements of an incomplete HFLPR (IHFLPR) and an iterative algorithm to reach consensus. Consistency is regarded as an important indicator for DMs to avoid illogical judgments when comparing alternatives. Based on the cardinal consistency of preference relations, including additive consistency [15] and multiplicative consistency [16], researchers have proposed different methods to define the consistency of HFLPRs. Both additive consistency and multiplicative consistency of HFLPRs have been widely discussed [17][18][19][20][21][22][23][24][25][26]. Ren et al. [17] proposed a GDM method based on consistency and consensus measurement of HFLPRs, and proposed a hesitant fuzzy linguistic geometric consistency index (HFLGCI) and a worst consensus index of HFLPRs. Xu et al. [18] proposed two additive consistency definitions for HFLPRs: completely additive consistency (CAC) and weakly additive consistency (WAC). Zhang and Wu [19] developed the multiplicative consistency of HFLPRs and defined a consistency indicator to measure the degree of deviation between the original and the consistent HFLPRs. Zhu and Xu [20] introduced an additive consistency concept of HFLPRs and developed some consistency and acceptable consistency measures for HFLPRs. Xu and Wang [21] proposed the additive consistency of hesitant 2-tuple fuzzy linguistic preference relations (H2TFLPRs) and proposed a revised definition of H2TFLPRs based on Zhu and Xu [20]. Feng et al. [22] proposed an additive consistency definition for HFLPRs and developed goal programming models to measure consistency. Li et al. [23] proposed an interval consistency index of HFLPRs, which consists of the worst consistency index and the best consistency index of HFLPRs. Zhang and Chen [24] proposed a method to derive the weight vector of HFLPRs, based on which the consistency index and the acceptable multiplicative consistency of HFLPRs are defined. Liu et al. [25] established a new model to make HFLPRs achieve the maximum consistency degree. In addition, Liu et al. [26] calculated the missing elements of incomplete HFLPRs according to the best and worst consistency. In the existing studies, there are still many issues deserving further discussion and improvement. The consistency definitions [22,26] only take part of the original HFLPR information into account and tend to be too loose. In the consistency optimization models [19][20][21], the original preference relations need to be normalized, where the HFLTSs of the HFLPR are further artificially processed to have the same length by adding or deleting some specified linguistic variables. 
Such a process distorts the original preference information, and some preference values in the adjusted preference relations may not belong to the original linguistic term set. In the consistency optimization models [19][20][21][24], almost all the preference values in the original preference relation are adjusted, which may not be accepted by the corresponding DMs and fails to keep the original preference information as much as possible. In addition, in the additive consistency models [20,21], the preference values obtained by additive consistency may be out of the range of the predefined HFLTSs. In order to better deal with the above-mentioned issues, this paper proposes a new consistency concept and corresponding consistency improvement models for HFLPRs. The main novelties of this paper are listed as follows: (1) A new additive consistency definition of HFLPRs is introduced. This consistency definition takes each linguistic term of the HFLPR into account. As a result, the consistency definition can fully reflect the consistent information of the original HFLPRs. (2) To judge if an HFLPR is additively consistent, some programming models are developed to measure the consistency of the HFLPR. The consistency test method does not add or remove any value of the HFLTS, which avoids distorting the original preference information of the HFLPR. (3) For an HFLPR not satisfying additive consistency, a consistency improvement model is developed to derive the corresponding consistent HFLPR, where only some preference values in the HFLPR are adjusted and no preference value is out of the range of the predefined HFLTS. Moreover, the deviation between the corresponding consistent HFLPR and the original one is minimal, which ensures that the newly constructed additively consistent HFLPR keeps the original information as much as possible. (4) To determine the ranking order of the alternatives, a hesitant fuzzy linguistic weight vector (HFLWV) is defined and the corresponding model is established to derive the HFLWV. The remainder of this paper is organized as follows. Section 2 introduces some fundamental definitions. In Section 3, a new concept of additive consistency of HFLPRs is introduced. Then, two programming models are developed to measure the consistency of an HFLPR. For inconsistent HFLPRs, a consistency improvement model is constructed to adjust them. In addition, a programming model is constructed to derive the priority weights for additively consistent HFLPRs. In Section 4, some numerical examples and comparisons with the existing methods are presented to illustrate the effectiveness of the proposed method. Conclusions are given in Section 5. Preliminaries In this section, the relevant basic definitions and some consistency definitions for HFLPRs are reviewed. For decision-making problems under a linguistic environment, a linguistic term set S = {s_0, s_1, . . . , s_g} is always utilized, where g + 1 is named the cardinality of S. Basic Definitions Definition 1 [27]. Let β ∈ [0, g] be a value derived from the result of a symbolic aggregation operation in S = {s_0, s_1, . . . , s_g}. The equivalent information for β in the 2-tuple is obtained by the function ∆(β) = (s_i, α), with i = round(β) and α = β − i, where round denotes the rounding operation. Definition 2 [27]. Suppose S = {s_0, s_1, . . . , s_g} is a linguistic term set and (s_i, α) is a 2-tuple. There always exists the following function ∆⁻¹ which can transform a 2-tuple into its equivalent numerical value γ ∈ [0, g]. 
The function is defined as follows: In addition, let (s i , α) and s j , γ be 2-tuples, then: (1) If i < j, then (s i , α) is smaller than s j , γ . An HFLTS, H s = s σ(l) s σ(l) ∈ S, l = 1, 2, . . . , #H s , is an ordered finite subset of consecutive linguistic terms of S, where s σ(l) is the lth linguistic term of H s and #H s is the number of linguistic terms of H s . Definition 4 [2]. For HFLTS H s , there are the following operations: (1) Lower bound: H − s = min(s i ), ∀s i ∈ H s . (2) Upper bound: H + s = max(s i ), ∀s i ∈ H s . To rank HFLTSs, Liu and Jiang [29] defined the following score function: When two HFLTSs have the same score function value, their degree of accuracy can be compared by the following precision function. The larger the value of H(H s ), the better performance it has. Definition 5 [3]. Suppose S = s 0 , s 1 , . . . , s g is a linguistic term set. For two HFLTSs H 1 s and H 2 s , the distance between H 1 s and H 2 s can be measured by where H Definition 6 [30]. A fuzzy linguistic preference relation (FLPR) is defined as A = a ij n×n , where a ij ∈ S denotes the preference degree of alternative x i to x j , which satisfies Definition 7 [2]. Suppose S = s 0 , s 1 , . . . , s g is a linguistic term set. An HFLPR is expressed as H = h ij n×n , where h ij ∈ S represents the preference degree of alternative x i to x j , if the following conditions are satisfied: where h ij = h r ij |r = 1, 2, . . . , # h ij (#h ij is the number of linguistic terms in h ij ), h σ(r) ij is the rth linguistic term in h ij . Some Consistency Definitions Definition 8 [30]. An FLPR A = a ij n×n is additively consistent if According to the additive consistency definition of FLPRs [30], Feng et al. [22] proposed an additively consistency definition of HFLPRs. Definition 9 [22]. An HFLPR H = h ij n×n is consistent if there exists a consistent FLPR A = a ij n×n with a ij ∈ h ij , for i, j = 1, 2, . . . , n. As the sums of elements are not necessarily the same in HFLTSs of HFLPRs, Zhu and Xu [20] made each HFLTSs have the same length by adding or reducing some specified linguistic variables, and defined such HFLPRs as normalized HFPRs (NHFLPRs). Xu and Wang [21] proposed the definition of additively consistency of HFLPR based on the additively consistency definition of Zhu and Xu [20]. Definition 10 [21]. Given an HFLPR H = h ij n×n and its NHFLPR H = h ij n×n , if then H is a consistent NHFLPR. Furthermore, Xu and Wang [21] proposed a method to derive the corresponding additively consistency HFLPR for an original inconsistent one. Definition 11 [21]. Assume an HFLPR H = h ij n×n and its additively consistency NHFLPR then R = r ij n×n is a consistent NHFLPR. Models on Additive Consistency of HFLPRs In this section, new additive consistency definitions of HFLPRs are introduced, together with the corresponding consistency improvement method. For additively consistent HFLPRs, a programming model is proposed to obtain the priority weights. New Additive Consistency Definition of HFLPRs New consistency definition takes all linguistic elements into account and requires linguistic elements at any position to satisfy additively consistency condition. Definition 12. An HFLPR H where h r ij is the rth linguistic terms of h ij . Remark 1. It is easy to know that Definition 12 does not change with the different ordering of comparison objects in the HFLPRs. 
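The displayed formulas for the operators used in these preliminaries (the 2-tuple functions Delta and its inverse, the score and precision functions, and the distance of Definition 5) were lost in extraction. The sketch below implements standard forms of them for concreteness, as an assumption rather than a quotation of the paper; the exact expressions should be taken from the cited definitions, and the precision function here is only illustrative.

```python
# Sketch of the 2-tuple linguistic operators and HFLTS measures used in Section 2.
# Assumptions (standard in the 2-tuple literature; the paper's exact formulas may differ):
#   Delta(beta)        = (s_round(beta), beta - round(beta))      (Definition 1)
#   Delta_inv(s_i, a)  = i + a                                    (Definition 2)
#   score(H)           = average index of the terms in H
#   distance(H1, H2)   = |score(H1) - score(H2)| / g              (Definition 5 style)

def delta(beta):
    """Map a value beta in [0, g] to its 2-tuple (index of s_i, symbolic translation alpha)."""
    i = round(beta)
    return i, beta - i

def delta_inv(i, alpha):
    """Map a 2-tuple back to its numerical value in [0, g]."""
    return i + alpha

def score(hflts):
    """Score of an HFLTS given as a list of term indices, e.g. [3, 4, 5]."""
    return sum(hflts) / len(hflts)

def precision(hflts):
    """Illustrative precision function: the tighter the HFLTS, the larger the value."""
    return 1.0 / (1 + max(hflts) - min(hflts))

def distance(h1, h2, g):
    """Normalised distance between two HFLTSs over S = {s_0, ..., s_g}."""
    return abs(score(h1) - score(h2)) / g

if __name__ == "__main__":
    g = 8                               # linguistic term set s_0, ..., s_8
    h1, h2 = [4, 5], [5, 6, 7]          # {s_4, s_5} and {s_5, s_6, s_7}
    print(delta(4.3))                   # (4, 0.3)
    print(score(h1), score(h2))         # 4.5, 6.0
    print(distance(h1, h2, g))          # 0.1875
```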
Based on Definition 12, the following programming model can be constructed to judge if an HFLPR H = h ij n×n is additively consistent: a kj ∈ h kj r = 1, 2, . . . , #h ij i, j, k = 1, 2, . . . , n H = h ij n×n is consistent according to Definition 12 if the target function value of Model 1 is equal to 0. Otherwise, it is not consistent. Since all the elements in HFLTSs are taken into account in Definition 12, M-1 is always fairly complicated. For simplicity, another simpler definition on additive consistency of HFLPRs is introduced, which only considers the boundary element. where h − ij and h + ij denote the upper and lower bounds of h ij , respectively. Theorem 1. Definition 12 is equivalent to Definition 13. Proof of Theorem 1: It is easy to know that if an HFLPR is additively consistent according to Definition 12 then it must also be additively consistent according to Definition 13. Thus, in what follows, only the inverse proposition needs to be proved. Thus, there only exist the following three cases: Let I ik and I kj be the corresponding integers in the intervals ∆ −1 (a ik ), ∆ −1 (b ik ) and ∆ −1 a kj , ∆ −1 b kj , respectively. Thus, the value domain of the sum of I ik and I kj contains all the integers in the intervals Thus, Equation (17) always holds. The proof is similar to (2) and is omitted. In summary, the inverse proposition abovementioned is proved, and Definition 12 is equivalent to Definition 13. According to Definition 13, the following integer programming model can be constructed to judge if an HFLPR is additively consistent: Consistency Improvement for HLFPRs In real life, due to the complexity of decision-making, DMs cannot guarantee that the original preference relations given are consistent. Therefore, in order to ensure effectiveness and reasonability of decision-making, an important process is to improve the consistency of the original preference relation not satisfying consistency. In this section, a programming model is built to adjust an inconsistent HFLPR to a consistent one, which retains the original information as much as possible. To facilitate calculation, only the upper triangle elements of HFLPRs are used to construct the model, in view of the symmetry of HFLPRs. where R = r ij n×n represents the adjusted HFLPR, and r − ij and r + ij denote the upper and lower bounds of R = r ij n×n , respectively. In what follows, an example is provided to illustrate the above procedure. Example 1 [9]. Let S be a linguistic term set defined as follows: S = s 0 = extremely poor, s 1 = very poor, s 2 = poor, s 3 = slightly poor, s 4 = f air, s 5 = slight good, s 6 = good, s 7 = very good, s 8 According to Model 2, the following model is constructed: By solving this model, minF = 132 is obtained, which means that HFLPR H = h ij n×n is not additively consistent. Thus, according to Model 3, the following model is constructed: Then, the adjusted HFLPR R = r ij n×n is obtained as follows: Models to Derive Priority Weights from Additively Consistent HFLPRs The main purpose of solving a decision problem is to find the optimal alternative according to the preference relation given by DMs, so deriving the weight vector is always performed in many existing literatures. Generally speaking, crisp and interval weight vector are always applied to solve decision-making problems based on crisp and interval preference relations, respectively. Accordingly, for decision-making problems with HFLPRs, applying weight vector denoted by HFLTSs is more natural and practical. 
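Model 2 reduces the consistency test to the bounds of each HFLTS. Since the model's algebra is not reproduced above, the following brute-force sketch illustrates a boundary-based test in the spirit of Definition 13, under the assumed additive-transitivity convention I(a_ij) = I(a_ik) + I(a_kj) - g/2; the paper's integer program may define and weight the deviations differently.

```python
# Brute-force sketch of a boundary-based additive-consistency test in the spirit of
# Definition 13 / Model 2.  Assumptions (not taken verbatim from the paper):
#   * an HFLPR is stored as {(i, j): [term indices]} for i < j over S = {s_0, ..., s_g};
#   * additive transitivity of an FLPR is read as I(a_ij) = I(a_ik) + I(a_kj) - g/2;
#   * the deviation of a triple (i, k, j) is measured on the lower and upper bounds of the
#     HFLTSs, and the relation is taken as consistent when the total deviation is 0.

def consistency_deviation(hflpr, n, g):
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            lo_ij, up_ij = min(hflpr[(i, j)]), max(hflpr[(i, j)])
            for k in range(n):
                if k in (i, j):
                    continue
                # reciprocity: the HFLTS of (a, b) with a > b is g minus each term of (b, a)
                h_ik = hflpr[(i, k)] if i < k else [g - t for t in hflpr[(k, i)]]
                h_kj = hflpr[(k, j)] if k < j else [g - t for t in hflpr[(j, k)]]
                lo_t = min(h_ik) + min(h_kj) - g / 2
                up_t = max(h_ik) + max(h_kj) - g / 2
                total += abs(lo_ij - lo_t) + abs(up_ij - up_t)
    return total

if __name__ == "__main__":
    g, n = 8, 3
    hflpr = {(0, 1): [5, 6], (0, 2): [6, 7], (1, 2): [4, 5]}     # toy upper triangle
    dev = consistency_deviation(hflpr, n, g)
    print("total boundary deviation:", dev,
          "->", "consistent" if dev == 0 else "inconsistent")
```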
Thus, in what follows, the definition of hesitant fuzzy linguistic weight vector (HFLWV) is introduced, which adopts several possible linguistic terms to express the importance of each alternative. Definition 14. Suppose S = s 0 , s 1 , . . . , s g is a linguistic term set. W = (w 1 , w 2 , . . . , w n ) T is called a hesitant fuzzy linguistic weight vector (HFLWV), if w i (i = 1, 2, . . . , n) is an ordered finite subset of consecutive linguistic terms of S, where w i = w r i |r = 1, 2, . . . , # w i (#w i is the number of linguistic terms in w i ) reflects the importance degree of the ith alternative. According to Equation (17) in Definition 12, each element h r ij is supposed to be expressed by specified a ik and a kj . If a ik and a kj can be determined as then h r ij can be expressed as follows: Based on this idea, the following model is constructed to derive the HFLWV, where its number of linguistic terms is as small as possible. In what follows, one example is provided to illustrate the above model. (5), the following score function can be obtained: By Equation (6), the following accurate functions can be obtained: Thus, the ranking results should be Alternative x 3 is the best option. Examples and Comparative Analysis In this section, some numerical examples are presented to illustrate the proposed methods. Example 3 [26]. Let S = {s 0 , s 1 , . . . , s 6 } be a linguistic term set, and consider the following HFLPR H: Liu et al. [26] improved the additive consistency of the HFLPR to an acceptable level (consistency threshold is 0.9), and the improved HFLPR is listed as follows: By Model 3, the following consistent HFLPR is derived: (7), distance between the original and adjusted HFLPRs can be obtained, where only the HFLTSs in the upper triangular matrix are considered. The results are showed as D 1 in Table 1. In addition, D 2 in Table 1 Table 1. As shown in Table 1, the distance between H and R Liu et al. is larger than that between H and R. Fewer HFLTSs are adjusted in R, compared with R Liu et al. . These results show that the adjusted HFLPR derived by this paper contains much more information of the original HFLPR. Example 4 [9]. Let S = {s 0 , s 1 , . . . , s 6 } be a linguistic term set, and consider the following HFLPR H: By the method of Chen et al. [9], the additive consistency of the HFLPR is improved to an acceptable level (consistency threshold is 0.95), and the improved HFLPR is listed as follows: Similar to Example 3, D 1 and D 2 are also applied, and the comparative results are given in Table 2: As shown in Table 2, the distance between H and R Chen et al. is larger than that between H and R. These results show that the adjusted HFLPR derived by this paper contains much more information of the original HFLPR. Example 5 [19]. Let S be a linguistic term set defined as follows, S = {s 0 , s 1 , . . . , s 8 } be a linguistic term set, and consider the following HFLPR H: [19] and Zhu and Xu [20] performed the normalization process, where some specified linguistic terms are added to HFLTSs with fewer elements until all the HFLTSs in HFLPR H have the same sum of linguistic terms. Such a process not only increases the burden of DMs but also easily distorts the original preference information. In addition, in Zhang and Wu [19] and Zhu and Xu [20], most of the original linguistic terms in the HFLPRs are adjusted and the modified elements no longer belong to the original linguistic term set, which may not be agreed to by DMs. 
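The comparisons reported as D1 and D2 in Tables 1 and 2 are, respectively, the distance between the original and adjusted HFLPRs restricted to the upper triangle, and the number of modified HFLTSs. A small helper for those two metrics, assuming the Definition 5-style distance sketched earlier and using placeholder matrices rather than the examples' data:

```python
# Sketch of the comparison metrics used in Examples 3 and 4:
#   D1 = sum of normalised score distances between corresponding upper-triangle HFLTSs
#        of the original HFLPR H and an adjusted HFLPR R,
#   D2 = number of upper-triangle HFLTSs that were modified.
# The HFLPRs below are placeholders, not the matrices of the cited examples.

def score(h):
    return sum(h) / len(h)

def dist(h1, h2, g):
    return abs(score(h1) - score(h2)) / g

def compare(original, adjusted, n, g):
    d1, d2 = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d1 += dist(original[(i, j)], adjusted[(i, j)], g)
            d2 += original[(i, j)] != adjusted[(i, j)]
    return d1, d2

if __name__ == "__main__":
    g, n = 6, 3
    H = {(0, 1): [3, 4], (0, 2): [5],    (1, 2): [2, 3]}
    R = {(0, 1): [3, 4], (0, 2): [4, 5], (1, 2): [2, 3]}
    print(compare(H, R, n, g))   # (D1, D2): fewer and smaller adjustments are better
```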
By contrast, the method proposed by this paper does not have the above related problems. In the method of Xu and Wang [21], the normalization process is indispensable to ensure each HFLTS has the same length. In addition, it is worth mentioning that some modified elements, such as s 9.00 and s −1.00 , are out of the range of the original linguistic set S. Although such elements can be transformed into certain elements in the range of S, this transformation process may distort the preference information. On the contrary, the method proposed by this paper does not need the normalization process and maintains the objective of decision-making. In the method of Xu and Wang [21], the hesitant fuzzy linguistic averaging (HFLA) operator and a hesitant fuzzy linguistic geometric (HFLG) operator [5] are used to obtain the aggregated HFLTSs. Then, the ranking order can be determined by score function, which is listed in Table 3. Table 3. According to Table 3, the ranking order varies with different linguistic terms added in the normalization procedure of Xu and Wang [21]. The method of determining the ranking order in this paper avoids the above problems and keeps the objective principle of decision-making. Conclusions This paper introduces a new additively consistent definition for HFLPRs. For inconsistent HFLPRs, a model to improve its consistency is constructed. To obtain the ranking order, a new definition of HFLWV is introduced and a programming model is proposed to determine the weight vector. Compared with other existing methods, the method in this paper has the following characteristics: (1) The proposed additively consistent definition for HFLPRs takes all the elements of HFLTSs into account. Some existing additively consistent definitions [22,26] only consider local elements in HFLPR, which easily results in the loss of information. In addition, the proposed additively consistent definition does not require the HFLTSs in the same HFLPR to have the same length. In the normalization process in [19][20][21], the original opinions of DMs are inclined to be distorted. (2) Model 3, which proposed to improve additive consistency, takes all the elements of HFLTSs into account. Of course, in actual application, an equivalent Model 4 can be easily constructed and solved. The improved HFLPRs keep the original information of HFLPRs as much as possible. In the additively consistent HFLPRs obtained, all the elements are derived from the original linguistic term set. In the method of Zhang and Wu [19], Zhu and Xu [20], Xu and Wang [21], most of the original linguistic terms in the HFLPR are adjusted and no longer belong to the original linguistic term set. Even some the modified elements are beyond the range of the original linguistic set. (3) The HFLWV is based on the additively consistent definition proposed in this paper. The HFLWV can be directly obtained from a simple programming model, and the ranking order can be finally determined by score and accurate functions. In hesitant fuzzy linguistic decision-making, HFLWV may be more suitable and accepted by DMs. This paper mainly takes additive consistency of HFLPRs into account. In light of the fact that consensus issue is also an important topic in group decision-making, group decision-making modeling involving both consistency and consensus issues of HFLPRs will be carried out in our future studies. Author Contributions: The ideas and conceptual models of the paper were proposed by H.Z. and data calculation and paper writing were carried out by Y.D. 
All authors have read and agreed to the published version of the manuscript.
5,817
2022-08-04T00:00:00.000
[ "Mathematics", "Computer Science" ]
Financial mediation and its impact on the Albanian economy This article aims to present the importance of financial intermediation for real economic growth in the case of Albania. The article analyses the main indicators of financial intermediation and, through the application of statistical and econometric methods, assesses their impact on economic growth. Findings show that there is a mutual relationship between economic growth and private sector credit growth. The effect is also a chain one, because the positive impact that one indicator exerts on another indicator is associated with a positive effect back on the indicator that gave this effect earlier. On the other hand, alongside the positive performance of financial intermediation indicators in Albania as a result of the growth of lending to the economy, deposits in the Albanian banking system have increased throughout the period of study. The recent financial and economic crisis had its negative impact mainly on the growth of problem loans, but not on the deposit market, which continued to grow. Introduction The financial system is said to be the main engine of the market economy. To get a clear idea about this phenomenon, we need to clarify the role of the financial system in a market economy. This system provides the means of payment to the economy and influences its real activity through the realization of financial intermediation and the transmission of monetary policy. Given that in emerging economies, including Albania, the financial system largely coincides with the banking system, the treatment of this topic aims at drawing some conclusions that may apply to the entire financial system. The financial system has an irreplaceable role in economic activity. This system performs two main functions: it realizes the financial intermediation process, channelling savings (usually of households) into loans and investments (usually of firms). Financial institutions in Albania, and above all banking institutions, are the most developed component of the financial system in Albania. Performance and analysis of the Albanian Banking System. Financial sector reforms marked significant progress in this period. They relate to the privatization of state-owned banks and to the entry of new private banks, which have affected the deepening of financial intermediation and the enhancement of the quality of banking services. A characteristic of these developments was the increase in the number of banks through the entry of new private banks, which currently number 16. Banking activity has been expanding along with the expansion of banks in the market, increasing both banks' assets and deposits. At the end of 2003, total assets of the system amounted to ALL 373.6 billion, or about 50% of GDP. On the other hand, the level of deposits has continuously increased, from ALL 178.2 billion in 1998 to ALL 323.2 billion in 2003. The level of financial intermediation has further deepened, reflecting the positive trend of developments in the banking sector. The ratio of total deposits to GDP, which is one of the main indicators of the level and depth of financial intermediation, has been increasing throughout the transition period and especially after 1998, reaching 43% in 2003.
The ratio of time deposits to GDP, the most significant indicator of intermediation, shows the same tendency: it increased markedly from 12.8% in 1994 to 29.3% in 1998 and 35.3% in 2003. Another characteristic of positive developments in the banking sector is the improvement of the credit market. The two most positive trends seen in this market are, firstly, the continuous growth of private sector lending and, secondly, the reduction of non-performing loans as a share of total credit. The continuous improvement of banking infrastructure, the establishment and functioning of the Deposit Insurance Agency in 2002, and the improvement of banking supervision increased confidence in the banking system (Cani, 2004). Albanians had negative experiences with the 1997 pyramid schemes and the banking panic of 2002, when their confidence in the financial system was shaken. Again, when they saw the global financial crisis of 2007-2008 hit financial institutions worldwide, Albanian depositors began deposit withdrawals by the end of 2008, which reduced the funding sources available to banks for lending. The crisis curbed the banks' tendency to lend to business. It affected the real sector and slammed the economic growth rates, bringing a slump in borrowers' solvency, which made banks more sceptical about extending new loans during that period. The growth of credit risk, which also materialized in the increase of the share of non-performing loans in the loan portfolio, may have been caused by two main phenomena. First, exchange rate fluctuation, as about 50% of foreign currency loans were unprotected from exchange rate risk. Consequently, borrowers found themselves vulnerable to immediate exchange rate changes, causing their solvency to fall. Secondly, the emergence of problems in the loan portfolio came as a result of its maturing, and this was an expected phenomenon, which probably just coincided with the global crisis but had no direct connection with it. During this period, the banking system continued to be characterized by a lack of liquidity. The Bank of Albania injected liquidity through its open market operations by means of weekly repurchase agreements. Interest rates on euro-denominated time deposits generally followed the same seasonal movement, though not with the same margin. During 2016, the banking sector was stable. Compared to the previous year, the activity of the sector expanded at higher rates, mainly driven by the growth of the stock of securities investments. In annual terms, the growth rate of total banking sector assets was 6.8%, compared to 1.9% a year earlier. The ongoing process of writing off lost loans from banks' balance sheets has slowed down the pace of rising non-performing loans. Despite this development, the banking sector continues to be exposed to credit risk. Exposure to market risks remains controllable, but requires regular monitoring and evaluation.
Financing of banking activity is mainly ensured by deposits, which account for about 82% of total assets. The loan/deposit ratio stands at 52%. The deposit base has grown at almost the same pace in both semi-annual and annual terms (about 5.2%), supported by foreign currency deposits. In terms of maturity, there is a shift of deposits towards current accounts. This development poses a potential weakness in the banking sector's financing structure, which, however, is mitigated by the fact that 83% of deposits are held by individuals and, as a whole, the residual maturity of liabilities has increased due to the growth of the value of deposits with maturity over one year. Banks have maintained the ratio of capital used to finance their activity, while respecting the relevant requirements of the regulatory framework. The banking sector has accelerated lending to individuals, but has lowered the lending rate for businesses. In annual terms, the credit balance for households has expanded by 3.7%, while for businesses it has expanded by 2.4%. During the period, new household lending increased by 14.3% over the same period of the previous year, with the main contribution to the expansion (about 60%) coming from loans for real estate purchase. Compared with the same period of the previous year, their share in new loans to individuals increased by 3 pp to 39%. By contrast, during the period, new credit to businesses narrowed by 12.2%, driven by narrowing lending for "purchase of equipment", "working capital" and "overdraft". Outstanding public sector credit accounts for a low share of 4.8% of the total, but the new credit granted to this sector during the period accounts for 12.6% of the total. The graph gives a clearer picture that the credit structure tends to shift from long-term to short-term. This can be considered positive because short-term credit risk is lower than long-term credit risk. The problem, in our view, is that banks, mindful of credit risk, are not financing enough domestic business, which could give breathing room to the economy. The aggregate index of key indicators used to track performance and the banking stability situation has deteriorated since the end of 2015. Methodology used and findings of the study This study was conducted based on partial analysis of the financial intermediation indicators presented in Table 1. Source: Authors' processing. Economic growth is the indicator that is widely used in measuring economic performance. In this study this indicator was used as the dependent variable. The data were obtained from the Bank of Albania publications. The database is quarterly (3-month). Deposits/GDP refers to the depth of the financial system, following Crowley (2008). This indicator expresses, in the first place, the level of public confidence in the banking system. Scaling deposits by GDP is also done to eliminate possible overstatements of the model indicator. Loans/Deposits expresses the level of use of bank deposits and, above all, of time deposits, which are the most likely sources of credit. Sa (2006) best describes the linkage of lending rates to the welfare of a country. When the economic situation is optimistic and expectations for the future are better, with more revenue and profits expected, this optimism also leads to an overestimation of assets and real estate prices. This increases the net worth of firms, reduces external finance premiums, and increases their ability to borrow and spend.
(Private sector credit x NPL) is an indicator derived by calculation as a factor interaction, to see if it has an impact on the dependent variable. It was not used for further analysis since it did not prove statistically significant during model testing. Based on the objective of the study we have formulated two hypotheses. Hypothesis 1: The higher the lending to the private sector, the better the economic performance of a country is expected to be. Based on the least squares method we have the following results (Source: Authors' processing). Preliminary explanations: X1 denotes the amount of credit extended to the government and X2 the amount of credit extended to the private sector. Given the above, the loan granted to the government does not prove to be an important variable. According to Student's t-test, since the probability attached to this variable is Prob = 0.0608 > 0.05, it can be said that this variable is not significant. The opposite holds for the private sector lending variable, which cannot be removed from the model as it turns out to be significant. Secondly, according to the Fisher test, the model is generally significant, as Prob (F-statistic) = 0.000000 < 0.05. However, this is not the best possible model. Lastly, referring to the above model, it can be said that private sector credit has had a positive impact on the growth of Albanian GDP for the years studied. Hypothesis 2: Economic growth and public confidence in banks are expected to have an impact on the magnitude of credit extended to the private sector. Based on the least squares method we have the following results. The impact of various factors on the development of credit extended to the private sector has been tested by means of the least squares method. Initially, four independent variables were taken into consideration (see Table 1), but only two of them proved to be significant during the test. Firstly, economic growth of 1%, provided that the factor X2 does not change, would increase private sector credit/GDP by 0.012191 units. Second, an increase of 1% in the Deposits/GDP indicator, holding X1 constant, would increase the dependent variable by 0.35736 units. So the extent to which financial intermediaries in Albania have credited the private sector over the years under analysis is largely determined by the size of deposits. Conclusions: Referring to the analyses carried out based on the data above, it can be said that private sector credit has had a positive impact on the growth of Albanian GDP from 2002 to 2017. At the same time, economic growth affects the growth of private sector credit. Therefore, the effect seems to be a chain, where the positive feedback that one factor gives to another factor is associated with a positive effect on the factor that gave this effect earlier. The indicators are taken from publications of the Bank of Albania for the period 2002-2017, as a time series based on quarterly (3-month) data. Some other indicators, such as the amount of bad (non-performing) loans, are based on the authors' calculations. The data were analysed from the first quarter of 2002 to the first quarter of 2017. The project was carried out by building the relevant econometric models and analyses for the dependent and independent variables considered in the respective specifications.
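The two hypotheses above are tested by ordinary least squares on the quarterly series described in the methodology. Since the data are not reproduced in the text, the sketch below uses synthetic placeholders; the labels X1 (credit to the government) and X2 (credit to the private sector) follow the preliminary explanations, and the estimates printed here will of course not match the reported ones until the real series are substituted.

```python
# OLS sketch for the two hypotheses tested with the least-squares method.  The quarterly
# series below are random placeholders; replace them with the Bank of Albania data
# (2002Q1-2017Q1) to reproduce the reported estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 61                                               # 2002Q1 ... 2017Q1
gdp      = rng.normal(300.0, 20.0, T)                # illustrative levels, ALL billion
deposits = rng.normal(150.0, 15.0, T)
x1       = rng.normal(10.0, 2.0, T)                  # credit extended to the government
x2       = rng.normal(60.0, 10.0, T)                 # credit extended to the private sector
growth   = 1.0 + 0.03 * x2 + rng.normal(0, 0.5, T)   # GDP growth, illustratively tied to x2

# Hypothesis 1: growth regressed on credit to the government and to the private sector
m1 = sm.OLS(growth, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(m1.pvalues, m1.f_pvalue)       # compare each p-value and Prob(F-statistic) with 0.05

# Hypothesis 2: private-sector credit / GDP regressed on growth and Deposits / GDP
credit_gdp   = x2 / gdp
deposits_gdp = deposits / gdp
m2 = sm.OLS(credit_gdp, sm.add_constant(np.column_stack([growth, deposits_gdp]))).fit()
print(m2.params)                     # reported coefficients: 0.012191 (growth), 0.35736 (Deposits/GDP)
```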
2,888.2
2017-06-10T00:00:00.000
[ "Economics" ]
Long-distance transmission of quantum key distribution coexisting with classical optical communication over weakly-coupled few-mode fiber Quantum key distribution (QKD) is one of the most practical applications in quantum information processing, which can generate information-theoretical secure keys between remote parties. With the help of the wavelength-division multiplexing technique, QKD has been integrated with the classical optical communication networks. The wavelength-division multiplexing can be further improved by the mode-wavelength dual multiplexing technique with few-mode fiber (FMF), which has additional modal isolation and large effective core area of mode, and particularly is practical in fabrication and splicing technology compared with the multi-core fiber. Here, we present for the first time a QKD implementation coexisting with classical optical communication over weakly-coupled FMF using all-fiber mode-selective couplers. The co-propagation of QKD with one 100 Gbps classical data channel at -2.60 dBm launched power is achieved over 86 km FMF with 1.3 kbps real-time secure key generation. Compared with single-mode fiber, the average Raman noise in FMF is reduced by 86% at the same fiber-input power. Our work implements an important approach to the integration between QKD and classical optical communication and previews the compatibility of quantum communications with the next-generation mode division multiplexing networks. I. INTRODUCTION Quantum key distribution (QKD) guarantees an information-theoretical secure generation of private keys between distant parties, based on the laws of quantum mechanics [1]. In the past 35 years, tremendous achievements of QKD have been accomplished [2,3]. Currently, the co-propagation of QKD and classical optical communication in the existing optical fiber-based network infrastructure is one of the most important solutions to promote the industrialization of QKD [4]. The wavelength-division multiplexing (WDM) scheme is a standard technique to combine QKD and classical signals [5]. In such scheme, the spontaneous Raman scattering (SRS) noise generated by classical optical signals is the dominant impairment for quantum signals [6].
The SRS noise can be suppressed by three major techniques including simultaneous filtering in the time and frequency domains [7], increasing the spectral interval between quantum and classical signals [8], and decreasing the launched power of classical optical communication [9]. Recently, space-division multiplexing (SDM) with additional available degrees of freedom of optical photons is attractive to enhance the WDM technique [10]. As a primary direction for the future evolution of optical networks, SDM was originally proposed to solve the "capacity crunch" of the single-mode fiber (SMF) [11]. Currently, SDM based on multi-core fiber for the integration between QKD and classical optical communication has been studied [10,12,13]. The more practical approach to implement SDM is using few-mode fiber (FMF), which is also called mode-division multiplexing [11]. Compared with multi-core fiber, the manufacturing process of FMF is relatively simple and the existing fusion equipment for SMF can be directly utilized for FMF [14,15]. Most importantly, the large effective core area of the mode and additional modal isolation of FMF is beneficial to suppress SRS noise [16]. In this paper, we present for the first time a mode-wavelength dual multiplexing QKD implementation over weakly-coupled FMF coexisting with classical optical communication. Using the all-fiber mode-selective couplers (MSCs) [17,18], the co-propagation distance of QKD with one 100 Gbps data channel at -2.60 dBm launched power reaches 86 km with a secure key rate of 1.3 kbps. At the same fiber-input power, compared with SMF, SRS noise in FMF is reduced by 86% in average. II. FEW-MODE FIBER CHARACTERIZATION FMF is based on the orthogonality of different linear-polarized (LP) modes. We use the weakly-coupled threelayer ring-core FMF, which has two circular-symmetric LP modes (LP 01 and LP 02 ) and four degenerate LP modes (LP 11 , LP 21 , LP 31 and LP 12 ) [17]. In this experiment, two modes, i.e., LP 01 and LP 02 , are used due to the effect of degenerate mode. To connect FMF and SMF, fiber-based MSCs are utilized as the mode multiplexer/demultiplexer (MUX/DEMUX), which are fabricated by heating and tapering the FMF and SMF. With the help of MSCs, the mode of optical pulses can be transformed each other between the fundamental mode of SMF and the specific LP mode of FMF [18]. The attenuation and the insertion loss (IL) of MSC as the MUX/DEMUX in each LP mode are characterized, as listed in Table 1. We first characterize the overall modal isolation provided by both the FMF and MUX/DEMUX between different LP modes. Figure 1 illustrates the schematic structure of the experimental setup and mode patterns observed by a charge-coupled device. A continuous laser at 1546.92 nm of 0 dBm is launched into a or b port, and then, the optical power emitted from c and d ports are measured. The output power difference between c and d ports are calculated and listed in the modal isolation matrix, as given in Table 2. The LP 01 in (LP 02 in) scheme refers to the case that the classical light launched from a (b) port is assigned to the LP 01 (LP 02 ) mode, and the modal isolation from the LP 02 (LP 01 ) mode to the LP 01 (LP 02 ) mode is recorded. The modal isolation of cascaded mode MUX/DEMUX corresponds to the back-to-back (BTB) case. Due to the perturbations of optical fiber such as macrobending and microbending, the overall modal isolation decreases as the transmission distance [19]. 
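The overall modal isolation of Table 2 is obtained by launching a 0 dBm continuous-wave laser into port a or b and recording the output-power difference between ports c and d. A small bookkeeping sketch of that measurement follows; the port powers are made-up numbers, not the measured values.

```python
# Modal-isolation bookkeeping for the MSC-based mode MUX/DEMUX measurement:
# launch 0 dBm at port a (LP01 path) or b (LP02 path), record the powers at ports c and d
# in dBm, and report their difference as the isolation in dB.  Values are illustrative only.
def isolation_db(p_wanted_dbm, p_leaked_dbm):
    return p_wanted_dbm - p_leaked_dbm

measurements = {
    "LP01 in (a -> c/d)": (-3.1, -28.4),   # hypothetical: wanted port c, leakage port d
    "LP02 in (b -> d/c)": (-3.6, -26.9),   # hypothetical: wanted port d, leakage port c
}
for scheme, (wanted, leaked) in measurements.items():
    print(f"{scheme}: {isolation_db(wanted, leaked):.1f} dB modal isolation")
```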
The QKD system is operated with a repetition rate of 625 MHz based on polarization-encoding decoy-state BB84 protocol [20][21][22]. In the QKD transmitter, as shown in Fig. 2(b), the intensities of signal state µ, decoy state υ, and vaccum state ω are 0.4, 0.2 and 0 photons/pulse, respectively, with the corresponding emission probability of 6:1:1. Different intensities are adjusted by the first Sagnac interferometer and the four polarization states are produced by the second Sagnac interferometer [23]. For both Sagnac interferometers, the DL is tuned to 16.3 cm, and optical pulses entering the PM from the short arm of interferometer are modulated. The NPF1 and DCF are employed to reduce the effect of chromatic dispersion on QKD pulses. The QKD receiver includes polarization beam splitters, polarization controllers and InGaAs/InP single-photon detectors (SPD) [24]. An EDFA is applied to amplify the Sync pulses to guarantee its detection. The NPF2 with the central wavelength matching the NPF1 is used to filter out SRS noise. Four SPDs are adopted, and the detection efficiency, gate frequency, and dark count rate per gate for each one is 10%, 1.25 GHz, and 3.0 × 10 −7 , respectively [25,26]. Furthermore, stable real-time secure key generation with a block size of 500 kbits is accomplished by post-processing including error correction, error verification, and privacy amplification [6]. To integrate QKD and classical signals over FMF, we use the LP 01 in (LP 02 in) scheme, where the classical data channel and QKD are allocated to the LP 01 (LP 02 ) mode and the LP 02 (LP 01 ) mode, respectively. QKD and Sync pulses are multiplexed into the same LP mode using a 1550.12 nm DWDM. As a comparison, the co-propagation of QKD with the same classical data channel is also presented over SMF. The fiber loss of quantum signal and the classical data channel is 0.190 dB/km and 0.192 dB/km, respectively. The MUX/DEMUX for SMF is one 1546.92 nm DWDM with the IL of 0.49/0.36 dB. Considering the difference in IL between the MSC and DWDM, the fiber-input power, which refers to the output power after passing through the MUX, is calibrated to -2.60 dBm. IV. RESULTS AND DISCUSSION In the experiment, since there is only one classical data channel allocated to one LP mode with optical power less than 1 mW, the SRS noise is the main factor of limiting the transmission distance of QKD over FMF [27]. As for SMF, Raman noise is also the main limitation for the co-propagation of QKD and classical signals [9]. We measure SRS noise at different distances in three multiplexing schemes, and the average SRS coefficients are calculated as 12076, 2637, and 2655 cps/(mw · km) for SMF, LP 01 in, and LP 02 in, respectively [28]. Figure 3(a) plots the measured data and the simulated SRS noise as a function of distance. Compared with SMF, the SRS noise in FMF is reduced by 86%. For instance, in the LP 02 in scheme, the SRS noise generated by the classical signal in the LP 02 mode is also in the LP 02 mode with high possibility, which can be effectively filtered after mode DEMUX. In addition, SRS noise can be further suppressed due to the large effective core area of the mode [16]. Figure 3(b) plots the measured and simulated secure key rates over FMF and SMF [29]. At the secure transmission distance of 63 km, 65 km, and 86 km, we achieve secure key rates of 2.3 kbps, 1.2 kbps, and 1.3 kbps for SMF, LP 01 in, and LP 02 in, with a quantum bit error rate (QBER) of 4.0%, 3.8%, and 3.7%, respectively. 
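The reported average SRS coefficients (12076, 2637 and 2655 cps/(mW km) for SMF, LP01-in and LP02-in) allow the qualitative behaviour of Fig. 3(a) to be reproduced. The sketch below assumes a simple first-order model for the forward SRS noise reaching the receiver, N(L) = P_in x rho x L x 10^(-alpha L / 10); the simulation actually used in the paper may include additional filtering and detector terms.

```python
# Sketch of the forward spontaneous-Raman-scattering (SRS) noise count rate versus distance,
# using the average SRS coefficients reported in the text.  The noise model
#     N(L) = P_in * rho * L * 10**(-alpha * L / 10)
# (launch power x coefficient x length, attenuated over the link) is an assumption.
P_in_mw = 10 ** (-2.60 / 10)          # -2.60 dBm fiber-input power, in mW
alpha = 0.19                          # dB/km fiber loss at the quantum wavelength
rho = {"SMF": 12076.0, "LP01 in": 2637.0, "LP02 in": 2655.0}   # cps / (mW * km)

def srs_counts(L_km, coeff):
    return P_in_mw * coeff * L_km * 10 ** (-alpha * L_km / 10)

for L in (20, 45, 65, 86):
    row = ", ".join(f"{name}: {srs_counts(L, c):7.0f} cps" for name, c in rho.items())
    print(f"L = {L:3d} km -> {row}")
# The FMF schemes stay well below SMF at every distance (the paper reports an 86% average
# reduction from the measured noise data).
```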
At distances less than 45 km, the secure key rate of SMF is slightly higher than that of FMF, due to the low losses of DWDM and SMF. However, the QBER caused by excessive SRS noise exceeds the correction efficiency of the system, which results in a sharp drop in the secure key rate over long distance. As for the LP 02 in scheme, the longest distance transmission is achieved because of the advantage of FMF in suppressing SRS noise. The secure transmission distance of LP 01 in is restricted by the high loss for QKD pulses assigned to the LP 02 mode. The maximum secure transmission distance of the LP 02 in scheme can be further extended. Figure 4 plots the simulations of secure key rate, aiming to the possible improvements in the future. The first factor limiting the co-propagation distance is the fiber-input power. The SRS noise generated by classical optical signals is directly proportional to the fiber-input power of classical optical signals [28]. If the optical power budget of the classical optical communication is sufficient, the fiber-input power can be reduced, which means that the minimum power is the sum of the overall loss of the FMF link and the receiving sensitivity of PA. Particularly, the secure key rate can be increased at short distances, as plotted in the dashed line of Fig. 4. The second factor limiting the co-propagation distance is the loss of FMF and MSCs. Owing to advances in manufacturing technology, the attenuation coefficient of FMF can in principle reach the level as low as the ultralow-loss SMF, i.e., 0.16-0.17 dB/km. Similarly, the IL of MSCs can be reduced to the level as the conventional DMDM, i.e., 0.36-0.49 dB. As a result, both the secure key rate and transmission distance are significantly increased, as plotted in the dash-dotted line of Fig. 4. The third factor limiting the co-propagation distance is the performance of SPD. QKD highly depends on the parameters of SPD. Given the parameters in Ref. [9], i.e., 20% detection efficiency and 230 cps dark count rate, as well as the above improvements, the maximum secure transmission distance can reach as long as 185 km, as plotted in the solid line of Fig. 4. V. CONCLUSION In summary, we have presented for the first time the QKD implementation coexisting with 100 Gbps data channel over weakly-coupled FMF using the fiber-based MSCs, achieving the transmission distance of 86 km with a secure key rate of 1.3 kbps. We have discussed the possible improvements in the future, and the simulation results show that the maximum secure transmission distance can reach 185 km over FMF. Due to the advantages of FMF, including additional modal isolation, large effective core area, simple fabrication, and cost-effective fiber fusion splicers, our work provides a practical approach to integrate QKD and classical optical communication over the telecommunication fiber-optical infrastructure.
3,010
2020-02-02T00:00:00.000
[ "Engineering", "Physics" ]
New SAMD9L heterozygous mutation leading to myelodysplastic syndrome and acute myeloid leukemia: A case report and review of the literature Abstract Background SAMD9L mutation is linked to the development of myeloid neoplasm. The mutation has a wide range of clinical presentations involving neurological, immunological, and hematological manifestations. Until now, limited data regarding different variants of this genetic mutation existed. Here we present a 6‐year‐old girl who presented with acute myeloid leukemia/myelodysplastic changes and who carries a new germline variant mutation in the SAMD9L gene. Case Presentation A 6‐year‐old girl who presented initially as a case of immune thrombocytopenic purpura (ITP) was later diagnosed with acute myeloid leukemia and myelodysplastic changes. In addition, she was found to have a new germline variant mutation in the SAMD9L gene (other known pathogenic variants known to cause ataxia pancytopenia syndrome). She was treated with chemotherapy followed by haplo identical transplant from her unaffected father. She is alive 30 months post‐transplant and in complete remission with full donor chimerism. Her initial brain MRI showed mild prominence of the anterior (superior) vermis folia, suggesting mild atrophy. Ongoing surveillance for accompanied neurological manifestation is ongoing, although the patient is asymptomatic. Conclusion For SAMD‐9L‐related disorder, a careful approach must be taken when a patient presents with a suspicious clinical feature even without a well‐known genetic mutation giving the diverse presentation across affected members within the same family. In addition, other associated abnormalities should be monitored long‐term. F I G U R E 1 Dot-plot histograms from flow cytometric analysis of the bone marrow aspirate from our patient show SSC/CD45 dim population as the blast (25% cell population in blast region). Blasts (red populations) show positivity for CD13, CD33, CD34, CD38, CD117, CD123, & HLADR with partial expression of CD34; dim expression of CD4 and MPO; partial/dim expression of CD7, CD15 and CD19. The flowcytometric analysis used BD FACSDiva™ Software. CD45 gating was applied (CD45-SSC). Events ranged from 30 000 to 50 000 events; percentages of each population are displayed for selected population of interest in each plot in relation to total population (Total population defined as total numbers of events in the tube of bone marrow sample). | CASE REPORT A 6-year-old girl was admitted to our hospital (King Abdul-Aziz Medical City-Jeddah) as a case of pancytopenia for investigations. She had a history of isolated thrombocytopenia for 3 years prior to her presentation.
She was treated as a chronic immune thrombocytopenic purpura with intravenous immunoglobulin (IVIG) and observation. She had a febrile illness for 3 weeks before her admission, and her complete blood count (CBC) showed pancytopenia with 7% peripheral blast. Otherwise, her past medical history was unremarkable. She has a 10-year-old sister who is healthy and has no history of consanguinity between her parents. Her family history was insignificant for cancers, inherited bone marrow failure, or MDS. Her physical exam was unremarkable for any dysmorphic feature, telangiectasias, and she had a normal neurological examination. Her lab work showed normal immunoglobulins (9.92 g/L, reference range 5.40-13.60 g/L) and alpha-fetoprotein levels (<2 ng/mL, reference range 1-4 ng/mL). | Initial bone marrow assessment Bone marrow aspirate showed active erythropoiesis with megaloblastic changes and dysplastic nucleus. Granulopoiesis was reduced, leftshifted, and dysplastic. Megakaryocytes were reduced and dysplastic. She has a 10/10 matched sibling, but due to serious concern of underlying inherited BMF/MDS, we sent a whole-exome sequence (WES) analysis. Her WES came back +ve for SAMD9L heterozygous mutation of unknown significance (based on ACMG recommendation). The variant was c.1949A > G p.(Glu650Gly) chr7:92763336. In the literature, the pathogenic variants in the SAMD9L gene lead to ataxia pancytopenia syndrome with a predisposition to AML, MDS, and bone marrow failure, which is present in this case. A segregation study for the family indicated that both her mother and sister have the same mutation but are asymptomatic with normal CBC. A magnetic resonance image (MRI) brain was performed even without neurological manifestations and showed mild prominence of the anterior (superior) vermis folia, suggesting mild atrophy with no other brain abnormalities | CONCLUSION This case illustrates the current established clinical feature of a form of SAMD9L genetic mutation, which can cause ataxia pancytopenia syndrome. This variant was not described previously but highlights that additional knowledge of this entity will allow easier detection in the future. ACKNOWLEDGMENTS Writer would like to thank all team members who had been involved in treating this young girl during difficult time of COVID-19 pandemic. CONFLICT OF INTEREST STATEMENT The authors have stated explicitly that there are no conflicts of interest in connection with this article. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions. ETHICS STATEMENT This manuscript was approved by our institutional ethics board. Informed consent was taken from the patient's legal guardian.
1,295
2023-03-07T00:00:00.000
[ "Medicine", "Biology" ]
Manifold Ways to Darboux-Halphen System Many distinct problems give birth to Darboux-Halphen system of differential equations and here we review some of them. The first is the classical problem presented by Darboux and later solved by Halphen concerning finding infinite number of double orthogonal surfaces in $\mathbb{R}^3$. The second is a problem in general relativity about gravitational instanton in Bianchi IX metric space. The third problem stems from the new take on the moduli of enhanced elliptic curves called Gauss-Manin connection in disguise developed by one of the authors and finally in the last problem Darboux-Halphen system emerges from the associative algebra on the tangent space of a Frobenius manifold. Introduction The Darboux-Halphen system of differential equationṡ where τ is a free parameter, first came to existence when Darboux [9] was studying the existence of an infinite number of double orthogonal system of coordinates. He formulated the problem as follows: Let A and B be two f ixed surfaces in the 3-dimensional Euclidean space R 3 and suppose that Σ is the family of surfaces which are the locus of the points such that the sum of their distances from the surfaces A and B are constant; and Σ is the family of surfaces which are the locus of the points so that the difference of their distances from the surfaces A and B are constant. Is there a third family of surfaces intersecting Σ and Σ orthogonally? When we restrict the third family to the surfaces given by second degree equations, we find the Darboux-Halphen system. In Section 2 we present Halphen's solution to this problem. The Darboux-Halphen system also emerge from a direct map from Ramanujan relations (Section 3). In 1979, Gibbons and Pope [14] found the Darboux-Halphen equations while studying gravitational instanton solutions in Bianchi IX spaces without having noticed it. Couple of decades later, Ablowitz et al. [1] pointed it out and recently one of the authors [8] explored its integrability aspects. A gravitational instanton is simply the (anti-)self-duality condition imposed on the curvature of a Einstein manifold with asymptotic locally Euclidian boundary conditions. Hitchin [19] and Tod [28] realized that (anti-)self-duality in Bianchi IX metric has a more general solution envolving to a Darboux-Halphen system coupled to another system of linear differential equation similar to Darboux-Halphen. A revised and simplified proof of the results of Tod and Hitchin can be found in [4]. See [21] for a physical application in cosmology. We review these works in Section 4. Another author [23,24] among us met the Darboux-Halphen system while exploring the Gauss-Manin connection of a universal family of elliptic curves. This method is called Gauss-Manin connection in disguise, which also name the vector field in this method that gives rise to the Darboux-Halphen equations and we present it in Section 5. The last interesting problem where Darboux-Halphen system appears is in the context of a 3-dimensional Frobenius manifold with a certain potential function F (t). Frobenius manifold arose as a geometrization of Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) equations [10,30], an overdetermined system of differential equations that appear in the physics of topological field theories in 2 dimensions. In this particular case of dimension 3, the WDVV equation is known as Chazy equation, which has a close tie with the solutions of Darboux system. We present it in Section 6, where we follow Dubrovin's notes [11,12]. 
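Since the display (1.1) did not survive the extraction, the following sketch integrates one common form of the Darboux-Halphen system numerically. Sign and normalisation conventions differ between references, so the right-hand sides below are an assumption to be checked against the paper's own display (1.1).

```python
# Numerical integration of one standard form of the Darboux-Halphen system,
#     dx_i/dtau = x_i (x_j + x_k) - x_j x_k,   (i, j, k) a cyclic permutation of (1, 2, 3),
# which is equivalent to  d(x_i + x_j)/dtau = 2 x_i x_j.  The convention is an assumption.
from scipy.integrate import solve_ivp

def dh(_tau, x):
    x1, x2, x3 = x
    return [x1 * (x2 + x3) - x2 * x3,
            x2 * (x3 + x1) - x3 * x1,
            x3 * (x1 + x2) - x1 * x2]

sol = solve_ivp(dh, (0.0, 0.5), [0.5, 0.6, 0.7], dense_output=True, rtol=1e-10, atol=1e-12)
tau, h = 0.3, 1e-4
x = sol.sol(tau)
print("x(0.3) =", x)
# sanity check of the pairwise form d(x_i + x_j)/dtau = 2 x_i x_j at tau = 0.3
for a, b in ((0, 1), (1, 2), (2, 0)):
    lhs = (sum(sol.sol(tau + h)[[a, b]]) - sum(sol.sol(tau - h)[[a, b]])) / (2 * h)
    print(f"pair ({a + 1},{b + 1}): finite-diff {lhs:.6f}  vs  2*x_a*x_b {2 * x[a] * x[b]:.6f}")
```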
We conclude this article, crossing information between problems displayed here, which led us to interesting remarks and an evidence that leads to a new way on how to examine spectral curves from monopoles using Gauss-Manin connection in disguise, further explored in [29] by one of the authors. Throughout this text we make extensive use of Einstein summation convention where the sum over identical upper and lower indices is implicit. The Darboux problem The above Darboux problem given in Section 1 is equivalent to the following problem: Let A and B be as before and suppose that Σ is a family of surfaces parallel to A which is parameterized by v, and Σ is a family of surfaces parallel to B that is parameterized by w. Is there a third family of surfaces parameterized by τ such that it intersects Σ and Σ orthogonally? Note that, two surfaces A 1 and A 2 are said to be parallel, if there exist a constant c ∈ R =0 and a continuous one to one map between points a 1 ∈ A 1 and points a 2 ∈ A 2 , such that the tangent planes at these points are parallel and the position vector a 2 = a 1 + cN, whereN is the unitary vector normal to the surface A 1 at a 1 . We say that a family of surfaces is parameterized by ϕ = ϕ(x, y, z), if any surface belonging to this family is given by ϕ(x, y, z) = const, in which x, y, z are the standard coordinates of R 3 . If for a function ϕ = ϕ(x, y, z), we define then in the latter problem, Darboux chose a case of parametrization of parallel surfaces that gives the Gauss map for points on the parallel surfaces and the condition of orthogonality at points in the intersection of v and τ and w and τ , respectively, is given by So the problem is equivalent to the following system of equations, If for a function ϕ of three variables (x, y, z) we define the operator then equations (2.1), (2.2), (2.3) and (2.4), respectively, are given by The situation is more interesting when the family (τ ) is of second degree. Hence let us suppose that the family (τ ) is given by where a, b, c are functions of the parameter τ . By this assumption, equations (2.3) and (2.4) yield axv x + byv y + czv z = 0, axw x + byw y + czw z = 0. (2.7) As well, from equation (2.5) we get Equations (2.1), (2.6), (2.7) and (2.8) imply in which = d dτ . Analogously for w we find The two equations in (2.9) and the two equations in (2.10) become equivalent if If in (2.6) we substitute a, b, c respectively by 1 t 1 , 1 t 2 , 1 t 3 , then from (2.11) we get that the family is orthogonal to both Σ and Σ if t 1 , t 2 , t 3 satisfy the following A particular case of the equation (2.12), which is known as Darboux-Halphen system, is given in (1.1). In 1881, G. Halphen [15] studied this system of differential equations and expressed a solution of it in terms of the logarithmic derivatives of the theta functions; namely, These theta functions can be written in terms of the more general theta functions with characteristics r and s and arguments z and σ: such that Ramanujan relations between Eisenstein series The following differential equation where E i 's are the Eisenstein series , was discovered by Ramanujan in [27] and it is mainly known as Ramanujan's relations between Eisenstein series. Ramanujan was a master of formal power series and had a very limited access to the modern mathematics of his time. In particular, he and many people in number theory didn't know that the differential equation (3.1) had already been studied by Halphen in his book [16, p. 331], thirty years before S. Ramanujan. 
The equalities of the coefficients of gives us a map from C 3 into itself which transforms Darboux-Halphen into Ramanujan differential equation, see [24, pp. 330, 335]. Self-duality in Bianchi IX metrics An instanton is a field configuration that vanishes at spacetime infinity. It is the quantum effect that leads metastable states to decay into vacuum. It is a phenomenon that takes place in usual spacetime with signature (−, +, +, +) but in order to perform physical calculation we use its equivalence with a soliton solution (static and energetically stable field configuration) in Euclidean spacetime. In Yang-Mills theory, self-duality of the field strength F µν = µνρσ F ρσ in four spacetime dimensions is a widely known instanton configuration [5]. Similarly, self-duality constraint on the curvature two-form (and connection 1-form) in Cartan's formalism of general relativity characterizes a gravitational instanton. An important feature of self-duality of the curvature is that the Ricci-tensor vanishes and it is a solution of the vacuum Einstein equations. Also, self-dual curvature leads to solving a linear differential equation, a task much easier than solving the full non-linear Einstein equations. Gravitational instantons were found in Bianchi IX metrics, by Gibbons and Pope [14]. Without realizing it, they arrived at Darboux-Halphen system from self-duality constraints. In [6], L. Bianchi studied continuous isometries of 3-dimensional spaces. He noticed that the continuous isometries (continuous motion that preserve ds 2 ) of a space form a finite-dimensional Lie group and he classified such spaces according to the corresponding group of isometries. Bianchi IX corresponds to a 3-dimensional space with SO(3) or SU(2) as Lie group of isometries. When we consider it in the context of 4-dimensional cosmology, the isometries lie in the 3 spacial directions [21], but since we are working in Euclidean signature we consider the isometry group SO(3) as a subgroup of SO (4). In this configuration, as the instanton vanishes at infinity, Lorentz symmetry is recovered and the space is called asymptotically locally Euclidean (ALE). This same manifold describes the reduced 1 moduli M 0 2 of charge 2 monopoles in a SU(2) Yang-Mills-Higgs theory. A magnetic 2-monopole is a soliton solution of charge 2 of Bogomolny equations in the Yang-Mills-Higgs theory in R 3 , where SU(2) Yang-Mills is a gauge theory of 1-form connections A on a principal SU(2)-bundle while the Higgs field Φ correspond to a section of an associated su(2)bundle [3,20]. In [3], Atiyah and Hitchin showed that the reduced moduli M 0 2 of 2-monopoles is a 4-dimensional hyperkähler manifold and an anti-self-dual (curvature-wise) Einstein manifold. Since M 0 2 admits SO(3) isometry, the metric is a Bianch IX 2 (4.2). This is a consequence of the hyperkähler structure of M 0 2 which has an S 2 -parameter family of complex structures, i.e., if I, J, K are covariantly constant complex structures in M 0 2 then aI + bJ + cK is also a covariantly constant complex structure in M 0 2 given that a 2 + b 2 + c 2 = 1. Here we present a detailed derivation of the Darboux-Halphen system starting from the Euclidean Bianchi IX metric with SO(3) symmetry with an imposition of the constraints of self-duality at the level of Riemann curvature. The constraint of anti-self-duality yields an antiinstanton, a solution with negative instanton number and we present this solution together by using ± sign. We follow the steps of [8] and [14], see also [26]. 
The connection wise self-duality is a stronger form of self-duality that leads to self-dual curvature tensor [13]. This form of self-duality does not present Darboux-Halphen system, but the Lagrange or Euler-top system [8]. It is not in our goal to describe it here. One can perform a standard analysis using vierbeins, leading to Cartan's structure equation. The vierbeins could be chosen as and the connection 1-form can be obtained from the structure equation where a, b = 0, 1, 2, 3. Obviously, e 0 produces no connections while other three does 3) The first term on the r.h.s. above gives ω i 0 while the second term needs to be rewritten in order to produce a antisymmetric connection 1-form Rewriting (4.3), Hence, Here the connection 1-form components are anti-symmetric under permutation of its indices. Curvature wise self-duality and Darboux-Halphen system Curvature-wise self-duality was first studied in search of gravitational instantons. It is a more general solution than imposing self-duality on connection 1-forms. The Cartan-structure equation for Ricci tensor is The (anti-)self-duality of curvature demands that where {i, j, k}, in this order, are a cyclic permutation of {1, 2, 3} and we used the fact that Euclidean vierbein indices are raised and lowered with Kronecker deltas δ i j . Comparing the l.h.s. and r.h.s. of (4.5), we have where the second line comes from equation (4.4) with λ 1 , λ 2 , λ 3 being functions of r. But the third line and (4.1) show that λ i 's are constants and λ 1 = ± 1 2 λ 2 λ 3 . From cyclicity of i, j, k we obtain two more copies of (4.6). Therefore, The first case leads to self-dual connection 1-forms and Euler-top system, while the second case can be resumed to λ 1 = λ 2 = λ 3 = ±2 by an appropriate change of sign in c i [14]. Therefore, from equations (4.4) and (4.6) we get One may suppose that we must parametrize the l.h.s. to match the linear form in c 2 i , c 2 j and c 2 k of the r.h.s. in the equation above. Essentially, the derivative operator aside, c 2 i must be parametrized such that ln c 2 i = ln Ω j + ln Ω k − ln Ω i + const = ln We choose new parametrization which enable us to decouple the individual parameters into their own equations turning into simpler expressions. This allows us to continue our analysis Adding up the above equation with cyclic permutations of i, j, k we will find that (anti-)self-dual cases of the Bianchi IX metric gives uṡ where throughout derivative (denoted by dot) is taken with respect to r. Self-duality proceeds to give us the classical Darboux-Halphen systeṁ General Bianchi IX self-dual Einstein metric Following [4], we rewrite the Bianchi IX by adding a conformal scaling term F in the metric where t is the cosmological time and different from before, here the isometry is SU (2) and (σ i ) are the corresponding SU(2) invariant forms along the spacial directions with structure constant We define the new variables A i (t) by the equations for distinct i, j and k taking values in the set {1, 2, 3}. The curvature-wise self-duality condition is expressed in terms of the new variables A i in the form of the Darboux-Halphen system Therefore we find Ω i 's by first solving system (4.8) and applying its solution in (4.7). A nontrivial solution is given by (2.13) For simplicity, we rename ϑ 2 ≡ θ 2 (it), ϑ 3 ≡ θ 3 (it), ϑ 4 ≡ θ 4 (it). The system (4.7) thus becomes There is a class of solutions of this system that satisfies vacuum Einstein equations Rg ab + Λg ab = 0, once we choose the appropriate conformal factor F [28]. 
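As an aside, the theta-function solution quoted around (2.13) is easy to test numerically. The sketch below (Python with mpmath) checks that w_i = 2 d/dτ log θ_{i+1}(0|τ) satisfies w_i' + w_j' = 2 w_i w_j, which is one standard normalisation of the Darboux-Halphen system; the conventions of (1.1) and (2.13) may differ from this by signs and by a rescaling of τ, and the evaluation point τ_0 is an arbitrary point in the upper half-plane.

from mpmath import mp, mpc, exp, pi, log, jtheta, diff

mp.dps = 40                     # working precision
tau0 = mpc('0.21', '1.30')      # arbitrary point in the upper half-plane

def w(i, tau):
    # 2 d/dtau log theta_i(0 | tau), with nome q = exp(i*pi*tau)
    return 2 * diff(lambda t: log(jtheta(i, 0, exp(1j * pi * t))), tau)

def wdot(i, tau):
    return diff(lambda t: w(i, t), tau)

for i, j in [(2, 3), (3, 4), (4, 2)]:
    print(abs(wdot(i, tau0) + wdot(j, tau0) - 2 * w(i, tau0) * w(j, tau0)))  # each ~ 0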
This class depend on the values of the cosmological constant Λ and satisfy the constraint The general two-parametric family of solutions of the system (4.9) satisfying condition (4.10), is given by the following formulas , , where ϑ[p, q] denotes the theta function ϑ[p, q](0, ir), p, q ∈ C. The corresponding metric is real and satisfies the Einstein equations for negative cosmological constant Λ if p ∈ R and R{q} = 1 2 (real part of q) or for positive cosmological constant if q ∈ R and R{p} = 1 2 . In both the cases the corresponding conformal factor is given by There is another family of solutions with q 0 ∈ R, that defines manifolds with vanishing cosmological constant if Gauss-Manin connection in disguise In this section we explain how one can derive the Darboux-Halphen equations from the Gauss-Manin connection of a universal family of elliptic curves. This has been taken from the references [23,24]. The family of elliptic curves is the universal family for the moduli of 3-tuple (E, (P, Q), ω), where E is an elliptic curve and ω ∈ H 1 dR (E)\F 1 . There is a unique regular differential 1-form in the Hodge filtration ω 1 ∈ F 1 , such that ω, ω 1 = 1 and ω, ω 1 together form a basis of H 1 dR (E). P and Q are a pair of points of E that generate the 2-torsion subgroup with the Weil pairing e(P, Q) = −1. The points P and Q are given by (t 1 , 0) and (t 2 , 0) and ω = xdx y and ω 1 = dx y . The Gauss-Manin connection of the family of elliptic curves E t written in the basis dx y , xdx y is given as bellow The reader who is not familiar with the Gauss-Manin connection must replace ∇ with d δt , where t i 's are assumed to depend on some parameter τ , d = ∂ ∂τ and δ t is a 1-dimensional homology class in E t . In the parameter space of the family of elliptic curves E t there is a unique vector field R, such that The vector field R is given by the Darboux-Halphen system (1.1) and it is called Gauss-Manin connection in disguise. Frobenius manifolds and Chazy equation Frobenius manifolds were developed in order to give a geometrical meaning to WDVV equations: where F (t), with t = (t 1 , t 2 , . . . , t n ), is a quasi-homogeneous function on its parameters. The above equations conceal properties of an associative commutative algebra on the tangent space of a manifold M of dimension n defined by the parameter space (t 1 , t 2 , . . . , t n ). That's the essence of a Frobenius manifold that we will detail below starting with the algebraic structure in T M . Frobenius algebra An algebra A over C is Frobenius if • it is a commutative associative C-algebra with unity e, • it has a C-bilinear symmetric non-degenerate inner product which is invariant, i.e., a.b, c = a, b.c Properties: Let e α , α = 1, . . . , N , be any basis in A, such that e 1 = e is the unity. By notation, we define η αβ := e α , e β , which yields the matrix η := [η αβ ] 1≤α,β≤N and its inverse η −1 := [η αβ ] 1≤α,β≤N , and it follows η αβ η βγ = δ α γ . By writing e α · e β in the given basis, we find the structure constants c γ αβ defined by e α · e β = c γ αβ e γ . If we set c αβγ = c αβ η γ , then we get c γ αβ = c αβ η γ . Note that in all above expressions, and in what follows, Einstein summation of indices is implicit. Therefore, η αβ and the structure constants c γ αβ satisfy commutativity η αβ = η βα , (6.1) invariance&commutat. c αβγ = e α e β , e γ = c βαγ = c αγβ . (6.4) Now consider an n-parametric deformation of the Frobenius algebra A t , t = (t 1 , t 2 , . . . 
, t n ), with structure constants c γ αβ (t) preserving relations (6.1) to (6.4). Such deformed algebra A t can be seen as a fiber bundle with the space of parameters t ∈ M as base space. We identify this fiber bundle with the tangent bundle T M to arrive at the definition of a Frobenius manifold. The requirements for this to happen are presented in the definition below. Frobenius manifold A Frobenius manifold M of dimension n, is an n-dimensional Riemannian manifold, such that for all t ∈ M the tangent space T t M contains the structure of a Frobenius algebra (A t , , t ), satisfying the following axioms: A.1. The metric , t on M is flat. The unit vector e must be flat, i.e., ∇e = 0, where ∇ is the Levi-Civita connection for the metric. A.2. Let c be the 3-tensor c(x, y, z) = x.y, z , with x, y, z ∈ T t M . Then the 4-tensor (∇ w c)(x, y, z) must be symmetric in x, y, z, w ∈ T t M . A.3. A linear vector field E must be fixed on M , i.e., ∇(∇E) = 0 such that the corresponding one-parameter group of diffeomorphisms acts by conformal transformations of the metric , and by rescaling on the Frobenius algebras T t M . The flatness of the metric , implies the existence of a system of flat coordinates t 1 , . . . , t n on M . In these flat coordinates the structure constants of A t are given by Potential deformation. If there is a function F (t), called potential, such that the structure constants of A t , t ∈ M , can be locally represented as satisfying A.2 with unity vector e = ∂ ∂t 1 , and the metric given by satisfying A.1, and the associativity property (6.2) represented by the WDVV equations then M is a Frobenius manifold and the Frobenius algebra A t is called a potential deformation. Note that the condition A.3 is satisfied by a quasihomogeneous function F (t). Chazy equation and Darboux-Halphen system In this section we explain how the Chazy equation arises from a 3-dimensional Frobenius manifold. We follow Dubrovin's notes [11,12]. Let dim M = 3, and consider the potential function where γ(τ ) is an unknown 2π-periodic function that is analytic at τ = i∞. Then the associativity condition (6.5) leads to the Chazy equation The solution, up to a shift in τ , is given by γ(τ ) = πi 3 E 2 (τ ), where E 2 is the weight-2 Eisenstein series. Notice that the Darboux-Halphen solution (2.13) leads to which can be easily checked from (3.2) or by writing the theta functions in terms of Dedekind eta function, see [7,Chapter 3,p. 29]. Applying τ derivatives on both sides and using Darboux-Halphen equations, one can also check that the solution to Darboux-Halphen system (2.13) are the roots of the cubic equation Conclusion The study of Darboux-Halphen equations in several different problems in theoretical physics and mathematics raised more and more questions that eventually lead us to further studies. The problem involving Gauss-Manin connection in disguise lies at the center of some questions. It shows that the Darboux-Halphen system corresponds to a vector field in the moduli of an enhanced elliptic curve. As mentioned in Section 4, the Bianchi IX four-manifold (4.2) also describes the reduced moduli of 2-monopoles and its self-dual curvature equations can be reparametrized to the Darboux-Halphen equations. Furthermore, in the problem of 2-monopoles it has been found that a 2-monopole solution relates to an elliptic curve as its spectral curve [17,18,20]. 
Therefore, we believe that Gauss-Manin connection in disguise is a new way to demonstrate the association of spectral curves and the curvature equations of the moduli of monopole solutions. Starting from these coincidences, in [29] one of the authors started to find more evidences to support this idea. Another interesting remark is the fact that potential functions and structure constants in Frobenius manifolds correspond to prepotentials (or genus zero topological partition function) and Yukawa couplings in topological string theory and Gauss-Manin connection in disguise has been used in the moduli of enhanced Calabi-Yau varieties to find polynomial expressions for Yukawa couplings and higher genus topological partition functions [2,25]. It would be interesting to find cases where the moduli of enhanced Calabi-Yau varieties are also Frobenius manifolds. In particular, the Frobenius manifold presented in Section 6.3 is a case of modular Frobenius manifold where the prepotential is preserved under a inverse symmetry that acts as an S generator of the modular group SL(2, Z) in t 3 direction [22]. Such modularity is a desirable property that can establish a relation to Gauss-Manin connection in disguise and may be extended to the group of transformations of Calabi-Yau modular forms [2,25].
5,614
2017-09-27T00:00:00.000
[ "Mathematics" ]
Topological Analysis of Magnetically Induced Current Densities in Strong Magnetic Fields Using Stagnation Graphs : Stagnation graphs provide a useful tool to analyze the main topological features of the often complicated vector field associated with magnetically induced currents. Previously, these graphs have been constructed using response quantities appropriate for modest applied magnetic fields. We present an implementation capable of producing these graphs in arbitrarily strong magnetic fields, using current-density-functional theory. This enables us to study how the topology of the current vector field changes with the strength and orientation of the applied magnetic field. Applications to CH 4 , C 2 H 2 and C 2 H 4 are presented. In each case, we consider molecular geometries optimized in the presence of the magnetic field. The stagnation graphs reveal subtle changes to this vector field where the symmetry of the molecule remains constant. However, when the electronic state and symmetry of the corresponding equilibrium geometry changes with increasing field strength, the changes to the stagnation graph are extensive. We expect that the approach presented here will be helpful in interpreting changes in molecular structure and bonding in the strong-field regime. The magnetically induced current is a relatively complicated vector field, and as such, tools for its analysis and interpretation of its main features in a simple manner are highly desirable. Approaches that analyze the induced currents by integration are well developed, see, for example, the GIMIC program [18,41]. Recently, we presented an implementation of these techniques in the context of the current-density-functional theory (CDFT), allowing applications to systems in arbitrarily strong magnetic fields [50]. Somewhat less attention has been given to topological approaches, which employ concepts from vector field analysis to analyze the topology of the vector field and provide insight into the magnetic behaviour of the system. A few groups have pursued a topological analysis following early work on the mathematical characterization of stagnation points in vector fields by Reyn [51]. In particular, see the work of Keith and Bader [26], as well as works by Pelloni, Faglioni, Zanasi and Lazzeretti [52][53][54][55] for applications to molecular systems. These studies demonstrate how the location and classification of stagnation points (points in space at which the current density j has a magnitude of zero, i.e., |j| = 0) to produce stagnation graphs can distill the main features of the complicated current vector field into simpler plots that can be easily visualized without suffering from issues, such as occlusion, that often make the direct visualization of dense 3D vector fields challenging. In the present work, we extend these techniques to analyze magnetically induced currents in strong magnetic fields at the CDFT level. This allows us to identify the main features and topology of induced currents as a function of the applied magnetic field strength and its direction relative to the molecular frame. In Section 2, we outline the necessary theoretical background to calculate the magnetically induced current densities in strong fields; detailing how the current density is determined in Section 2.1 and how its topological characteristics may be classified in Section 2.2. In Section 3, we describe the computational approach used to locate and classify stagnation points and construct stagnation graphs. 
The results are presented in Section 4 for applications to the CH 4 , C 2 H 2 and C 2 H 4 molecules at their ground state equilibrium geometries over a range of field strengths, obtained using a recently developed implementation of geometrical gradients for calculations using LAOs [56]. The changes in the stagnation plots with the applied field strength are carefully examined and analyzed in Section 5. Conclusions and directions for future work are discussed in Section 6. Magnetically Induced Current Densities In the presence of a uniform magnetic field B, the non-relativistic electronic Hamiltonian is given byĤ whereĤ 0 is the zero-field Hamiltonian,p the momentum operator (−ı∇),ŝ the spin operator and r O the position relative to some gauge-origin O. Since ∇ · B = 0 for a uniform magnetic field, the magnetic field may be represented by a vector potential A such that B = ∇ × A; in the Coulomb gauge, this vector potential is defined to have a divergence of zero, ∇ · A = 0. Therefore, for a uniform magnetic field, the vector potential depends on the gauge-origin as and a change of the position of the gauge-origin O → G is a gauge transformation, This gauge-transformation corresponds to a unitary transform of the Hamiltonian, the eigenfunctions of which, ψ, therefore undergo a compensating unitary transformation under which observables of the system remain gauge-origin invariant. This gauge-origin dependence of the wavefunction cannot be properly represented in a finite basis except by explicitly including the gauge-origin; this is the approach taken using LAOs [31], which comprise a standard Gaussian-type basis function ϕ a centred on R and multiplied by a field-dependent complex phase factor, which yields wavefunctions that are correct to first order in the magnetic field and properties that are gauge-origin invariant [57]. Utilizing an LAO basis, the effects of the magnetic field can be treated in a non-perturbative manner, allowing the behaviour of the systems in the magnetic fields of arbitrary strength to be examined [58,59]. The magnetically induced physical current density j is a continuous vector field in three dimensions and may be written as the sum of the diamagnetic current density j d and paramagnetic current density j p [59,60], where ρ σ is the σ-spin density, and φ iσ are the σ-spin occupied molecular orbitals. Through the non-perturbative inclusion of magnetic field effects, the magnetically induced current density can be evaluated without the need for linear response calculations; in a basis of LAOs, the one-particle density matrix D σ ab computed at the Hartree-Fock (HF) or CDFT levels [59][60][61] may be used to evaluate the diamagnetic and paramagnetic current densities as each of which are individually gauge-origin dependent; however, the physical current density j = j d + j p is gauge-origin invariant and can be computed at arbitrary field strengths. Topological Characteristics The magnetically induced current density j(r) is a three-dimensional vector field with a topological structure that may be characterized in terms of its singularities, otherwise referred to as stagnation points, at which |j| = 0. The collection of these points for a system with magnetically induced currents is its stagnation graph, which describes the topological structure of the vector field [6,36,37,51,62]. 
The Cartesian components of the current density j α (r) at position r around a stagnation point r 0 may be described by the second-order Taylor expansion in which the zeroth-order term is, by definition, zero. Taking only the first-order approximation, the current density in the region of the singular point can be described by the linear equation [54,[63][64][65][66][67]: where j is the current density at r, and J is the Jacobian matrix with elements J αβ comprising the first derivative of j α with respect to r β . It can be shown that the local behaviour of the current may be characterized by the eigenvalues η i of the Jacobian matrix [51]. The number of non-zero eigenvalues of J is denoted by the rank r, whilst the excess of eigenvalues with a positive real component over those with a negative real component is denoted the signature s -together the ordered pair (r, s) can be used to characterize the stagnation point [22,[52][53][54]64,65]. Given that, at points where the current density is zero, the identity ∇ · j = 0 must be satisfied, the only possible (r, s) combinations for a three-dimensional vector field are (3, ±1), (2, 0) and (0, 0). In addition, a topological index i may be defined at a stagnation point where J has two non-zero eigenvalues as The resulting classifications are summarized in Table 1 and will be used throughout this work [22,54,[65][66][67]. Computational Methods Previous works [22,65] have suggested using Newton-Raphson approaches to search for stagnation points, which occur at the nodes of the current density j. In particular, these optimization schemes only use first-order information to search the three-dimensional current-density vector field from a grid of arbitrarily chosen starting points [22]. Here, we present an alternative choice of the objective function in stagnation point searches in Section 3.1 before outlining an approach allowing for full second-order trust-region optimization, which benefits from quadratic convergence in the vicinity of stagnation points, in Section 3.2. Our approach to selecting an initial grid of starting points for the search and their subsequent refinement to produce clear stagnation graphs is detailed in Section 3.3. Selecting an Appropriate Objective Function The purpose of the stagnation point search is to locate the set of points {r} at which the objective function |j(r)| = j 2 x (r) + j 2 y (r) + j 2 z (r) is zero. Such points can be located by searching for stationary points in |j| and selecting those at which |j(r)| = 0. Previous works have suggested optimizing |j(r)| directly via the Newton-Raphson approach, which requires only evaluation of the objective function and its first derivative [22,65]. However, it is clear that |j(r)| will exhibit cusps at the stagnation points. To demonstrate this, we consider an example of the ethene molecule oriented in the yz-plane with the two carbon nuclei equidistant from the origin along the z-axis in Figure 1. Plotting |j| along the line −2.5 ≤ y ≤ 2.5 bohr at z = 2.3 bohr, we expect to observe three stagnation points in line with the plots of Ref. [54] when a field of 0.1 B 0 is applied parallel to the C-C bond axis (B 0 =he −1 a −2 0 = 2.3505 × 10 5 T). These points are clearly visible in Figure 1a; however, it can also be seen in Figure 1a that the expected cusps are present at the stagnation points along this line. 
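For later reference, the (r, s) classification of Section 2.2 is straightforward to apply once a stagnation point and the Jacobian J of the current density at that point are available. A minimal Python sketch follows; the tolerance and the example Jacobian are purely illustrative, whereas in practice J is assembled from the derivatives of j computed from the LAO density matrices.

import numpy as np

def classify_stagnation_point(J, tol=1.0e-10):
    # rank r = number of non-zero eigenvalues; signature s = excess of eigenvalues
    # with positive real part over those with negative real part
    eig = np.linalg.eigvals(J)
    nonzero = eig[np.abs(eig) > tol]
    r = len(nonzero)
    s = int(np.sum(nonzero.real > tol) - np.sum(nonzero.real < -tol))
    return r, s

# Illustrative Jacobian: one zero eigenvalue plus a rotational structure in the
# perpendicular plane, i.e., a point on a vortex-type stagnation line, class (2, 0).
J_vortex = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])
print(classify_stagnation_point(J_vortex))   # -> (2, 0)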
The first derivative of Equation (14) may be readily evaluated as and the singularities associated with the factors 1/|j| at the stagnation points are clearly present in Figure 1b, with the total derivative of the objective function being discontinuous at these points. In practice, we find that, using this objective function, the cusps associated with the stagnation points can be approached to a sufficient proximity that the optimization can be terminated; however, the rate of convergence is somewhat hindered. A full second-order approach, in which the issues associated with the singularities of |j(r)| are avoided, may be straightforwardly formulated by considering at alternative objective function, |j(r)| 2 . Figure 2 shows plots of |j| 2 and its first derivative in the same region plotted for |j| and its derivatives in Figure 1; it can be seen that this objective function is continuous, and both the gradient and the Hessian of this function are well defined at the stagnation points. This objective function, its gradient and its Hessian may then be evaluated as The gradient components are displayed in Figure 2b and are well behaved as expected. Furthermore, the Hessian may be readily evaluated affording full second-order optimization at a modest cost. The required partial derivatives with respect to the position may be evaluated in terms of the LAOs and density matrices as These derivatives have been implemented in the QUEST program [23]. The correctness of each contribution was verified by finite-difference calculations with respect to r α . Optimization Algorithm To locate the stagnation points, |j| 2 was minimized with respect to r using a trust region approach. Using this method, a quadratic model of the objective function is constructed around each point visited in the optimization r k . Here, d defines the step taken from the point r k , and |j| 2 k , g k and H k are the objective function, its gradient and Hessian at r k , respectively, To ensure progress in the optimization, the step d is determined by solving the trustregion subproblem where ∆ k is the trust-radius. The step cannot exceed the trust region, in which the quadratic model is expected to be reliable. At each iteration, the accuracy of the step from the quadratic model is monitored using the ratio of the actual change in |j| 2 to that predicted by the quadratic model. If the step does not produce a sufficient decrease in |j| 2 , the trustradius is reduced. Alternatively, if the model is particularly accurate (as would be the case if it was close to a stationary point for example), then the trust-radius may be increased; if the model is reasonably accurate, the trust-radius is kept the same. This approach is detailed in Algorithm 1, where the ratio used to control the trust-radius is denoted γ k . At each step, we solve the trust-region subproblem Equation (24) using the Steihaug-Toint truncated conjugate gradient algorithm [68]. Algorithm 1 Trust Region optimization In practice, we observe rapid convergence from a wide range of starting points, with quadratic convergence in the local region. The optimization is terminated when the following convergence criteria are satisfied: the maximum value of the gradient ∇|j| 2 is 2 ×10 −6 au, its root-mean-square is 10 −6 au, and its Euclidian norm is 10 −8 au, the norm of the change in r is 3 × 10 −5 au and its maximum value 6 × 10 −4 au and the norm of the change in |j| 2 is 10 −8 au. 
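As an illustration of this optimisation, the following Python sketch minimises |j|^2 for a simple model field j = (−y, x, 0), whose stagnation line is the z-axis. SciPy's 'trust-ncg' method solves the trust-region subproblem with a Steihaug-Toint conjugate-gradient solver analogous to the one used here; the model field, starting point and tolerance are illustrative stand-ins and are not taken from the actual CDFT current density.

import numpy as np
from scipy.optimize import minimize

def j(r):
    x, y, z = r
    return np.array([-y, x, 0.0])          # model vortex field; stagnation line = z-axis

def objective(r):                           # |j|^2 = x^2 + y^2
    return float(np.dot(j(r), j(r)))

def gradient(r):
    x, y, z = r
    return np.array([2.0 * x, 2.0 * y, 0.0])

def hessian(r):
    return np.diag([2.0, 2.0, 0.0])

r0 = np.array([0.7, -1.2, 0.4])             # an arbitrary starting point from the grid
res = minimize(objective, r0, jac=gradient, hess=hessian,
               method='trust-ncg', options={'gtol': 1.0e-10})
print(res.x, objective(res.x))              # converges to a point on the z-axis, |j|^2 ~ 0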
This allows us to clearly distinguish the required stagnation points in the current-density vector field. Initial Point Selection A key aspect of making a stagnation point search computationally tractable is how the initial points for the optimizations are selected. Few details in this regard are discussed in previous studies of stagnation plots [22,65]. In the present work, the stagnation point search is carried out after a converged CDFT calculation from initial points defined using an atomcentred quadrature grid, with angular coordinates given by the eleventh degree Lebedev quadrature [69] and radial coordinates given by the LMG method [70] with threshold on the relative error of the radial integral of 10 −2 . This grid is of the type used in DFT calculations but is much sparser than would be required for a full numerical integration of quantities, such as the electron density and its gradient. It does, however, retain a similar structure to the full DFT integration grid, with more points found close to the nuclei and less points as the distance from the nuclei increases. This structure provides a set of starting points that are well placed to resolve the details of often more complex stagnation lines in the vicinity of nuclei whilst also sampling those further away. In the present work, we initiate stagnation point searches from this initial grid of points. Typically, plotting the converged points reveals the structure of the stagnation graph, but points may be relatively sparsely placed along the stagnation lines. In order to increase the number of stagnation points located, a second refinement stage is carried out. The path between stagnation points from the initial search that are separated by 1.5 bohr or less is divided into segments of approximately 0.1 bohr and a new initial point created at the midpoint of each segment. Once these points are generated, they are filtered to remove any points that are within 0.05 bohr of another point in the set. Optimizations are then carried out from these points to refine the stagnation graph. This strategy was effective at locating a larger number of singular points, particularly along stagnation lines, whilst minimizing the computational cost. Finally, since |j| 2 decays towards zero as we move away from the molecule, stagnation points in regions of negligible charge density (ρ < 10 −3 au) are discarded to leave only those of interest within the molecular volume. Results To test the efficiency of this new implementation and investigate how the current vector field topology and associated stagnation graph changes in the presence of strong magnetic fields, we study three small molecules: CH 4 , C 2 H 2 and C 2 H 4 . For each case, we consider fields of magnitude |B| = 0.0 − 0.2B 0 and optimize the molecular geometries using the analytic gradient implementation of Ref. [56] at the cTPSS/u-aug-cc-pVTZ level. Here, the prefix u-indicates that the basis set is used in its uncontracted form; uncontracted basis sets are used to provide greater flexibility to describe the response of the wavefunction to the magnetic field. The spherical-harmonic form of these Gaussian basis functions are used throughout. To aid the efficiency of the calculations density-fitting is used for all calculations, with the AutoAux auxiliary basis set. This autogenerated auxiliary basis set is generated following the approach outlined in Ref. [71] and provides a conservative auxiliary basis set constructed by considering the product space of the primary orbital basis. 
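The refinement stage amounts to simple geometric post-processing of the converged points. A rough Python sketch is given below; the thresholds follow the values quoted above (1.5, 0.1 and 0.05 bohr), while the function name and the brute-force pairwise loops are illustrative choices rather than the actual implementation in QUEST.

import numpy as np

def refine_seeds(points, pair_cut=1.5, seg_len=0.1, dedup_cut=0.05):
    # points: (n, 3) array of converged stagnation-point positions, in bohr
    seeds = []
    n = len(points)
    for a in range(n):
        for b in range(a + 1, n):
            d = np.linalg.norm(points[b] - points[a])
            if 0.0 < d <= pair_cut:
                nseg = max(int(np.ceil(d / seg_len)), 1)
                for k in range(nseg):
                    frac = (k + 0.5) / nseg        # midpoint of each ~0.1 bohr segment
                    seeds.append(points[a] + frac * (points[b] - points[a]))
    kept = []                                       # drop seeds within dedup_cut of another
    for s in seeds:
        if all(np.linalg.norm(s - t) > dedup_cut for t in kept):
            kept.append(s)
    return np.array(kept)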
This choice helps to ensure that the results are consistent over a wide range of field strengths. Here, we note that the TZ quality basis set employed should be sufficiently accurate to describe systems in field strengths |B| ≤ 0.2B 0 -see, for example, Refs. [50,72] for discussion of this point. The stagnation plots are visualized along with the nuclear framework using the QMOLE tool, built using the Plotly python package [73], and are shown in the following subsections in static form; html files are provided as Supporting Information, which allow the reader to explore the plots in 3D using any modern web browser. CH 4 The energy and main geometrical parameters for the equilibrium structure of CH 4 , optimized in magnetic fields of |B| = 0.05B 0 , 0.10B 0 , 0.15B 0 and 0.20B 0 aligned in the lowest energy orientation parallel to one of the C-H bonds, are presented in Table 2. It can be seen that, as expected for a closed-shell molecule, the energy exhibits diamagnetic behaviour and increases with field strength. Table 2. Equilibrium geometries of CH 4 at the magnetic field strengths considered in this work, optimized with the cTPSS functional and in the u-aug-cc-pVTZ basis. In all cases, the magnetic field is aligned parallel to one of the C-H bonds. In the absence of a magnetic field, the equilibrium geometry of CH 4 has the familiar T d point group with all C-H bond lengths and H-C-H bond angles equal. Upon the application of a magnetic field, the point group symmetry of the molecule becomes that of the molecule and magnetic field combined; this usually lowers the symmetry of the system since, of the symmetry operations at zero field, only rotation axes parallel to the field, mirror planes perpendicular to the field and inversion centres remain [74]. In the case of CH 4 with a magnetic field applied parallel to one of the C-H bonds, the only symmetry element remaining is the three-fold rotation axis parallel to the field; hence, the point group is reduced to C 3 . |B| / B 0 Energy / E h R C−H / bohr H-C-H / Degree It can be seen in Table 2 that, with an applied magnetic field, the optimized geometry distorts away from the tetrahedral structure to a lower symmetry arrangement; the length of the C-H bond along the C 3 axis becomes distinct from that of the other bonds. Similarly, the H-C-H angles involving the axial H become distinct from those involving only nonaxial H atoms. In Table 2, these quantities are reported in pairs, with the first value corresponding to the axial case and the second the non-axial case. It can be seen that the axial C-H bond becomes compressed with increasing field strength, reducing in length by ∼0.5 pm at |B| = 0.20B 0 relative to zero field. The trend for the non-axial bonds is less clear since their length remains approximately constant over this range of field strengths. In addition, the axial H-C-H angles become slightly more acute and the non-axial H-C-H angles become slightly more obtuse with increasing field strength, with the non-axial H-C-H angles being around 1.1 • smaller than the axial H-C-H angles at |B| = 0.2B 0 . The CH 4 stagnation plots in the range |B| = 0.05 − 0.2B 0 are shown in Figure 3. The stagnation plot at |B| = 0.05B 0 closely resembles the plot presented in Ref. [54], with saddle lines coloured blue, para-and dia-tropic vortex lines coloured red and green, respectively, and branching points coloured purple. The latter are located close to the C 3 axis as expected. The similarity with Ref. 
[54] confirms the accuracy of the present implementation for locating the stagnation points and also that the classifications in Table 1 are correctly implemented. In general, the structures of the stagnation plots display only minor changes with increasing field strength in this range. However, two trends with increasing field strength may be observed; the outer saddle and diatropic vortex lines dilate to have a slightly larger radius around the carbon atom, whilst simultaneously, the inner paratropic vortex and saddle lines contract towards the carbon atom with increasing field strength. The stagnation plots reflect the fact that, as the field strength increases, the induced current becomes more intense and compact around the carbon atom. C 2 H 2 In Table 3, the energies and main geometrical parameters for the equilibrium structures of C 2 H 2 , optimized in magnetic fields of |B| = 0.05B 0 , 0.10B 0 , 0.15B 0 and 0.20B 0 aligned in the lowest energy orientation perpendicular to the C-C bond, are presented. Over the range of fields considered, the ground state of the molecule changes from that with M s = 0 to M s = −2; this occurs at a field strength of around 0.12B 0 . Consistent with Ref. [75], the equilibrium geometry of C 2 H 2 in the lowest energy triplet state at zero field has a cis structure, with both hydrogen atoms on the same side of the C-C bond. However, at a field strength of around 0.10B 0 , the trans structure in which the two hydrogen atoms are on either side of the C-C bond becomes lower in energy than the cis structure; hence, the M s = −2 state has a trans equilibrium geometry at |B| > 0.1B 0 , where it is the ground state. The energies of the optimized M s = 0, cis-M s = −2 and trans-M s = −2 states are summarized in Figure 4. Table 3. Equilibrium geometries of C 2 H 2 at the magnetic field strengths considered in this work, optimized with the cTPSS functional and in the u-aug-cc-pVTZ basis. In all cases, the magnetic field is aligned perpendicular to the C-C bond. The equilibrium geometry of the M s = 0 state of C 2 H 2 remains linear whilst it is the ground state; however, the presence of the magnetic field leads to a reduction in symmetry from D ∞h to C 2h since only the two-fold rotation axis parallel to the field, the mirror plane perpendicular to the field and the inversion centre are retained. It can be seen in Table 3 that, as the field strength increases, both the C-C and C-H bonds contract slightly. In comparison, the trans-M s = −2 state has an electronic configuration in which an α-spin electron in a bonding orbital has been excited to an anti-bonding orbital and undergone a spin flip to a β-spin electron. In the stronger fields considered here, the favorable interaction of the unpaired β-electrons with the external magnetic field is sufficient to make the M s = −2 states lower in energy than the M s = 0 state, whilst in the stronger fields, the trans conformation is favored over the cis. Consistent with these changes in occupation, the C-C bond lengthens significantly, and the C-H bonds also lengthen but to a lesser extent. The H-C-C bond angle becomes 118.5 • at 0.15 B 0 and becomes more acute with the increasing field. In the presence of the field, the trans-M s = −2 state adopts an orientation such that the field lies in the molecular plane, and the point group symmetry of the system is reduced from C 2h in the absence of a field to C i . 
|B| / B 0 Energy / E h R C−C / bohr R C−H / bohr H-C-C / Degree The stagnation plots for C 2 H 2 are presented in Figure 5. The plot at |B| = 0.05B 0 exhibits the same features as that obtained from response calculations in Ref. [76]. In addition to the classes of the stagnation point identified for CH 4 , isolated saddle nodes are visible in the plane containing the internuclear axis and perpendicular to the applied field. The stagnation graph at 0.10B 0 has a very similar structure to that at 0.05B 0 . The picture at 0.15B 0 and 0.20B 0 is entirely different since the stagnation graph of the ground state at this field strength contains no stagnation lines but only isolated paratropic vortices and saddle points. These features will be discussed further in Section 5. To illustrate how the stagnation plots neatly summarize the topology of the magnetically induced current vector field, contour plots of |j| in the xz and yz planes of Figure 5b are presented in Figure 6. On the left, the darkest blue features in the xz-plane, representing the smallest |j|, show where the stagnation lines are located. The stagnation line perpendicular to the C-C bond and passing through its midpoint and those that form loops encircling the nuclei can both be seen in Figure 6a. On the right, the darkest blue features in the yz-plane capture the central diatropic vortex line displayed as a ring around the C-C bond midpoint in Figure 5. In addition, the points intersecting the bond axis are clearly represented, along with the four isolated saddle node stagnation points. This demonstrates that the stagnation plots accurately and succinctly capture the main features of the topology of the complicated vector field associated with the magnetically induced current. The energy and main geometrical parameters for the equilibrium structure of C 2 H 4 , optimized in magnetic fields of |B| = 0.05B 0 , 0.10B 0 , 0.15B 0 and 0.20B 0 aligned in the lowest energy orientation parallel to the C-C bond, are presented in Table 4. In this orientation, the zero-field point group of D 2h is reduced to C 2h in a magnetic field since only the two-fold rotation axis along the C-C bond parallel to the field, the mirror plane perpendicular to the field and the centre of inversion remain. Consistent with this symmetry, all of the C-H bonds remain equivalent even with increasing field strength. Between |B| = 0.05B 0 and 0.20B 0 , the C-C and C-H bonds are compressed by ∼0.3 and ∼0.7 pm, respectively, whilst at the same time, the H-C-H bond angles become more acute, reducing by 5 • over this range of field strengths. Table 4. Equilibrium geometries of C 2 H 4 at the magnetic field strengths considered in this work, optimized with the cTPSS functional and in the u-aug-cc-pVTZ basis. In all cases, the magnetic field is aligned parallel to the C-C bond. The stagnation plots for C 2 H 4 are presented in Figure 7. As for CH 4 , the plot at |B| = 0.05B 0 exhibits the same features as that obtained from response calculations in Ref. [54]. The structure of the stagnation graph remains similar for all the field strengths considered here; as such, only that at |B| = 0.05B 0 is presented in Figure 7. Figure 7. The stagnation graph of the C 2 H 4 molecule in its equilibrium geometry with a magnetic field of 0.05B 0 along the z-axis. The interactive version of this figure may be found in C2H4_B005.html of the Supporting Information. 
Discussion Stagnation graphs of the kind presented in Figures 3, 5 and 7 provide convenient spatial descriptions of the magnetically induced current densities in molecules, with which various magnetic properties can be predicted. Analysis of the stagnation graphs obtained by response calculations for CH 4 , C 2 H 2 and C 2 H 4 has been presented in Refs. [54,76]. We now examine how this analysis changes over the range of fields considered in the present work. In the diamagnetic CH 4 molecule, the current flow around the edges of the molecule in the low-density regions is diatropic and perpendicular to B. At the centre of these rings of diatropic current flow is the C 3 rotation axis of the molecule and along which lies a diatropic vortex line. Approaching regions of the molecule in which the charge density is greater, the structure of the magnetically induced current density becomes more complex with multiple individual circulations or toroidal vortices. At the centre of each lies a paratropic or diatropic vortex line for paratropic or diatropic vortices, respectively; these branch from the axial vortex line above the carbon atom and converge to the axial vortex line below the carbon atom such that they form closed loops. In the same region, saddle lines form a closed loop between the branching points on the axial vortex line and lie at the points of zero current flow between adjacent vortices. As described in Section 4, it can be seen in Figure 3 that the structure of the stagnation graph in CH 4 remains generally similar with increasing magnetic field strength; however, the outer ring of saddle and diatropic vortex lines around the carbon atom dilate, whilst the inner ring of saddle and paratropic vortex lines contract with increasing field strength. This may be confirmed by considering the current densities at different field strengths; streamlines of the current in the xy plane perpendicular to the magnetic field and at a height of z = −0.2 bohr relative to the carbon atom at the origin in CH 4 at |B| = 0.1B 0 and 0.2B 0 are shown in Figure 8a,b, respectively. At the higher field strength, the magnitude of the current flow around the toroidal vortices becomes larger, resulting in the diatropic stagnation lines moving further from the nucleus. At the same time, the paratropic vortices are compressed towards the nucleus; hence the paratropic vortex lines moving towards the nucleus. As presented in Section 4, the ground state of the C 2 H 2 molecule changes between |B| = 0.10B 0 and 0.15B 0 from M s = 0 to M s = −2 and the equilibrium geometry changes from a linear structure to adopt a trans conformation. The stagnation graph changes completely, with only a few isolated stagnation points remaining. The pattern of stagnation points at this geometry and field strength may be understood by considering the magnitude of the current density in the plane of the molecule; this is presented in Figure 9a, where the stagnation points can be clearly identified. Since the M s = −2 state of C 2 H 2 is not closed shell and has a different number of α and β electrons, the current density of each is not necessarily the same. This can be seen clearly in Figure 9b,c, depicting the norm of the α and β current densities, respectively. 
In the α case, a line of zero current density loops between the two carbon nuclei and around each, whilst in the β case, separate lines of zero current density appear to encircle each of the carbon nuclei and a line of zero current density at the midpoint of the C-C bond parallel to the magnetic field is visible. The overall stagnation graph describes points where the magnitude of the total current density is zero; since this is a non-negative quantity by definition, at each stagnation point, the magnitudes of both the α and β current densities must vanish. The total stagnation graph, therefore, represents the intersection of the sets of stagnation points that would be obtained for the α and β spin currents independently. Since the topology of the α and β spin currents are significantly different, the intersection of their stagnation points results in a small number of points, which are visible in Figure 9a and located in Figure 5c,d. For C 2 H 4 , the stagnation graph exhibits very little change between |B| = 0.05B 0 and 0.20B 0 . This may be rationalized by noting that the symmetry of the molecule does not change with increasing field strength in the range studied here. The relationship between a molecule's symmetry elements, particularly mirror planes, and its stagnation graph has been discussed in detail in Refs. [52][53][54], where it is shown that the presence and position of stagnation lines can be determined from mirror planes. Whilst the equilibrium C-C and C-H bond lengths and the H-C-H bond angles change with increasing field strength, the symmetry of the molecule remains constant, and as such, the features present in the stagnation graph remain the same, notwithstanding distortions with increasing field strength similar to those in CH 4 . Previous studies of stagnation graphs have mainly focused on small molecules with high symmetry and have used first-order methods to determine the location of stagnation points [22,[52][53][54]65]. We expect that the second-order optimization approach for locating the stagnation points presented here should allow stagnation graphs to be mapped-out efficiently for more complex systems. As demonstrated with C 2 H 2 , this approach allows the changes in the stagnation graph to be examined as the symmetry of the molecule and its state changes, which is expected to become essential to study systems in stronger magnetic fields [56]. Conclusions A second-order optimization method for mapping out the stagnation graphs of molecular systems has been presented. In this approach, stagnation points are located by minimizing |j| 2 , which, in contrast to |j|, has well-defined first and second derivatives, enabling the stagnation graph to be elucidated efficiently and robustly for general systems. In contrast to previous work in this area [52][53][54]76], the magnetic field effects are here treated in a non-perturbative manner, allowing stagnation graphs to be computed at arbitrary field strengths and the effect of varying field strength on the characteristics of the stagnation graphs to be examined. Furthermore, the changes in the stagnation graph arising due to the effect of the magnetic field on the equilibrium geometry of the molecules has been accounted for by optimizing the molecular geometries at each field strength using the implementation of Ref. [56] before computing the stagnation graphs. 
This approach has been applied to study the stagnation graphs of three small molecules: CH 4 , C 2 H 2 and C 2 H 4 , which have previously been considered using response calculations [54,76], across a range of magnetic field strengths. In weak fields, the results obtained with the present approach are consistent with these earlier results, indicating the reliability of the implementation. Upon increasing the field strength, we observe that the extent to which the stagnation graphs change depends strongly on the how the symmetry of the molecular structure and electronic state are affected by the magnetic field. In cases where the electronic state and molecular symmetry remain the same as in the absence of a field, only subtle changes to the stagnation graphs are observed, such as contractions and dilations of the radii of closed stagnation line loops. These changes can be explained by considering the magnitude of the current densities, which generally increase with increasing field strength, increasing the radii of toroidal vortices and increasing the distance between their corresponding vortical stagnation lines. In cases where increasing the field strength results in a change in the electronic state and an accompanying change in the symmetry of the molecular geometry, much more extensive changes to the stagnation graph are observed. For example, the change in the ground state of C 2 H 2 from M s = 0 to M s = −2 at increasing field strength completely alters the molecular structure, symmetry and hence the stagnation graph. The structure of the stagnation graph was rationalized by considering the individual α and β spin current densities, with the stagnation points for the total current being the intersection of the stagnation points for the α and β spin currents individually. We expect that the second-order approach for determining stagnation plots presented in this work will become a useful tool for understanding the electronic structure of more complex molecules in strong magnetic fields. The present implementation allows flexibility to study stagnation graphs for a wide range of uniform magnetic fields in a nonperturbative manner and to resolve their α and β-spin contributions. In the present work, only small molecules with up to two carbon atoms have been considered; in later work, we will apply this method to study current densities in more diverse molecular systems, such as homo-and heterocyclic aromatic molecules and aromatic systems, for which the stagnation graphs are known to show a wider range of features [66]. In future work, we will also consider the generalization of this approach to non-uniform magnetic fields, as described in Ref. [77]. As noted by Pelloni and Lazzeretti [76], the interaction between toroidal vortices and the gradient of non-uniform magnetic fields can be examined with the aid of stagnation graphs, and these may provide useful insight into these effects on nuclear magnetic shielding as well as more exotic properties, such as molecular anapole moments.
8,734.6
2021-08-26T00:00:00.000
[ "Physics" ]
Modular-invariant large-$N$ completion of an integrated correlator in $\mathcal{N}=4$ supersymmetric Yang-Mills theory The use of supersymmetric localisation has recently led to modular covariant expressions for certain integrated correlators of half-BPS operators in $\mathcal{N} = 4$ supersymmetric Yang-Mills theory with a general classical gauge group $G_N$. Here we determine generating functions that encode such integrated correlators for any classical gauge group and provide a proof of previous conjectured formulae. This gives a systematic understanding of the relation between properties of these correlators at finite $N$ and their expansions at large $N$. In particular, it determines a duality-invariant non-perturbative completion of the large-$N$ expansion in terms of a sum of novel non-holomorphic modular functions. These functions are exponentially suppressed at large $N$ and have the form of a sum of contributions from coincident $(p, q)$-string world-sheet instantons. Introduction and outline The exact expressions for certain integrated correlators of four superconformal primary operators of the stress tensor multiples in N = 4 supersymmetric Yang-Mills (SYM) theory with gauge group G N can be determined by localisation in terms of the partition function, Z GN (τ,τ , m), of N = 2 * SYM that was derived by Pestun [1] in terms of the Nekrasov partition function [2]. The N = 2 * theory reduces to the N = 4 theory in the limit m → 0, where the parameter m is the hypermuitiplet mass and Z GN (τ,τ , m)| m=0 = 1. In [3] the expression was argued to be proportional to an integrated correlator of four superconformal stress-tensor primaries with a specific integration measure. The quantity ∆ τ := 4τ 2 2 ∂ τ ∂τ is the laplacian on the hyperbolic plane parametrised by the coupling constant τ = τ 1 + iτ 2 := θ 2π + i 4π g 2 YM , with θ the theta angle and g YM the Yang-Mills coupling constant. The first few terms in the large-N expansion of C GN (τ,τ ) in the 't Hooft limit (in which λ = g 2 YM N is fixed) for SU (N ) gauge group were studied in an expansion in powers of 1/λ in [3] and similarly for general classical groups in [4]. As shown in [5], the coefficients in the perturbative 1/N expansion at fixed τ are modular functions that make SL(2, Z) Montonen-Olive duality [6] (also known as S-duality) manifest. These coefficients are sums of non-holomorphic Eisenstein series with half-integer index. The perturbative pieces of the correlator can be extracted relatively easily from the localised expression for Z GN (τ,τ , m) for any value 1 of N [15,16,4], but extracting the explicit instanton contributions that are contained in the Nekrasov partition function is more involved. However, in [17,18,19] a novel expression for the integrated correlator was proposed that is valid for any classical gauge group G N and finite τ . where we have defined the quantity Y mn (τ,τ ) := π |m + nτ | 2 τ 2 . (1. 3) The coefficient functions B 1 GN (t) and B 2 GN (t) are rational functions of the following form, where i = 1, 2, n i GN is an integer and Q i GN (t) is a degree n i GN − 2 polynomial with the "palindromic" property Q i GN (t) = t n i G N −1 Q i GN (t −1 ). For simply-laced groups G N = SU (N ), SO(2N ) the correlators are expected to be invariant under the SL(2, Z) action τ → γ · τ = aτ +b cτ +d with γ = a b c d ∈ SL(2, Z), which is a consequence of Montonen-Olive duality. In these cases, B 2 GN (t) = 0 and only B 1 GN (t) is non-trivial, 3 and (1.2) is manifestly invariant under SL (2, Z). 
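The SL(2, Z) invariance of (1.2) can be seen very explicitly at the level of the lattice sum: under τ → (aτ + b)/(cτ + d) the quantity Y_mn of (1.3) is mapped to Y_{m'n'} with (m', n') = (md + nb, mc + na), so the sum over (m, n) ∈ Z^2 is merely reshuffled. A short numerical sketch of this covariance (the sample SL(2, Z) element, the value of τ and the lattice point are arbitrary):

import numpy as np

def Y(m, n, tau):
    # Y_mn of (1.3): pi |m + n tau|^2 / tau_2
    return np.pi * abs(m + n * tau) ** 2 / tau.imag

a, b, c, d = 2, 1, 1, 1            # a sample SL(2, Z) element (ad - bc = 1)
tau = 0.3 + 1.7j                   # arbitrary coupling
gamma_tau = (a * tau + b) / (c * tau + d)

m, n = 5, -3                       # arbitrary lattice point
m_new, n_new = m * d + n * b, m * c + n * a
print(Y(m, n, gamma_tau) - Y(m_new, n_new, tau))   # ~ 0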
For the non simply-laced classical groups G N = U Sp(2N ), SO(2N + 1) (1.2) is only invariant under the congruence subgroup Γ 0 (2) ⊂ SL(2, Z). 4 Furthermore B i GN (t) obeys the following relations which make Goddard-Nuyts-Olive (GNO) duality [24] of (1.2) manifest. For example, for the SU (N ) theory, it was conjectured that [17,18] and expressed in terms of Jacobi polynomials P where ∆ τ = τ 2 2 (∂ 2 τ1 + ∂ 2 τ2 ) is the SL(2, Z)-invariant hyperbolic laplacian and c SU(N ) = (N 2 − 1)/4 is the central charge. Upon iteration, this equation relates the integrated correlator for the theory with gauge group SU (N ) to the integrated correlator for the SU (2) theory (with the boundary condition C SU(1) (τ,τ ) = 0). Similar Laplace difference equations were also obtained for the integrated correlators with general classical gauge group G N [19], with the result that all C GN (τ,τ ) are determined in terms of C SU(2) (τ,τ ). While it is much easier to analyse the dependence of the integrated correlator on the parameters τ and N starting from (1.2) than from the original expression (1.1), the dependence on N is not transparent. This will be remedied in the present paper, in which we will take the further step of introducing a generating function for the N -dependence. This generating function is defined as (1.9) 3 To simplify the notation, for these cases we will simply drop the superscript "1" and write where the subscript G indicates that this generates the integrated correlator for the G N gauge group for all values of N . The expression (1.9) may be inverted to give , (1.10) where C denotes a contour encircling the pole at z = 0 in an anti-clockwise direction and not encircling other singularities. From (1.2) we can equivalently define the generating functions for the rational functions and hence introduce (1.12) One of the advantages of introducing a generating function such as C i G (z; τ,τ ) is that it has a much simpler form than C i GN (τ,τ ). This makes C i G (z; τ,τ ) extremely convenient for analysing the large-N properties of the integrated correlators. In section 2 we will determine the generating function B SU (z; t) (which generates B SU(N ) (t) for all N ) by relating the integrated correlator to hermitian matrix model integrals and, in particular, this will lead to a proof of the previously conjectured expression (1.6). The proof relies on deriving the Laplace difference equation (1.8) from the hermitian matrix model and utilising the explicit result of the SU (2) correlator in [18], which is the initial condition for the recursion relation (1.8). Furthermore, we will show that the generating function B SU (z; t) satisfies a second order partial differential equation which leads to the Laplace difference equation (1.8). The generating function will streamline the analysis of properties of the integrated correlator in different regions of parameter space by distorting the integration contour C in different ways. Some relevant properties of the hermitian matrix model and its connection to the integrated correlator with SU (N ) gauge group are summarised in appendix A. As will be demonstrated in section 3, the generating functions C SU (z; τ,τ ) and B SU (z; t) lead to an efficient procedure for determining the large-N behaviour of C SU(N ) (τ,τ ). This is not only a more efficient procedure for determining results that were previously derived in [18] but also leads to new results. We will see that the large-N expansion consists of three pieces. 
The first is a term proportional to N 2 with a constant coefficient. The second piece is an infinite power series in half-integer powers of 1/N with coefficients that are sums of half-integer non-holomorphic Eisenstein series that depend on τ,τ . These two pieces simply reproduce the previously determined behaviour of the integrated correlator. The third novel piece is non-perturbative in N in the large-N limit and has a leading term proportional to the modular invariant function Whereas the power behaved terms in the 1/N expansion holographically correspond to the α ′ -expansion of type IIB string amplitudes in AdS 5 × S 5 , the exponentially suppressed terms displayed in the above equation are related to a sum of (p, q)-string instantons (i.e. euclidean (p, q)-string world-sheets wrapping a two dimensional manifold). The complete non-perturbative contribution with leading behaviour (1.13) is given by a sum of new non-holomorphic modular functions, D N (s; τ,τ ), which are generalisations of nonholomorphic Eisenstein series that are exponentially suppressed at large N and fixed τ . Some properties of these functions are discussed in appendix B. The formal sum of the asymptotic power series expansion in half-integer powers of 1/N and these novel non-perturbative terms provides the complete large-N transseries expansion of the integrated correlator C SU(N ) (τ,τ ). The Borel-Ecalle resummation of this transseries produces a well-defined and unambiguous analytic continuation for all values of N . In particular it coincides with the finite N ∈ N results. The presence of a third, non-perturbative, piece was previously arrived at in the 't Hooft limit, in which N → ∞ with λ = g 2 YM N fixed, by use of a resurgence argument based on the non-summability of the large-λ expansion [18,25,26]. In section 4 we demonstrate how the SL(2, Z)-invariant expression for the non-perturbative contribution reduces to such an expression in a suitable limit. We also consider the same non-perturbative terms in the regimes λ = O(N 2 ), or λ = O(1), where a different picture emerges and we retrieve the large-N non-perturbative expansions obtained in [26] by exploiting resurgence arguments for the large genus behaviour of the perturbative genus expansion. When λ = O(N 2 ) the non-perturbative terms take a form that resembles the effects of electric D3-branes that arise in [27] in the holographic description of Wilson loops. When λ = O(1) the non-perturbative contributions resemble the magnetic D3-branes also discussed in that reference. Details of these expressions will be given in appendix C. The generating functions for the integrated correlators with gauge group SO(n) are derived in section 5, and analogous large-N properties are found for these more general gauge groups. In particular, the integrated correlator for the theory with simply-laced group SO(2N ), again receives (p, q)-string instanton corrections. For the non simply-laced gauge group SO(2N + 1) there are not only SL(2, Z)-invariant contributions from (p, q)-string world-sheet instantons, but also Γ 0 (2)-invariant contributions from (p, 2q)-string instantons. This restriction is due to the fact that N = 4 SYM with gauge group SO(2N + 1) is only S-duality invariant under a congruence subgroup of SL(2, Z), namely Γ 0 (2). The same statements apply to the integrated correlator with gauge group U Sp(2N ), which is related to the SO(2N + 1) case by GNO duality. 
A detailed description of the derivation of the generating function for the integrated correlator with gauge group SO(n) is given in appendix D. A generating function for all SU (N ) The form of the function B SU(N ) (t) given in (1.6) and (1.7) was conjectured in [18,19] based on the analysis of the perturbative part of C SU(N ) (τ,τ ) (1.1) and the explicit evaluation of a variety of non-perturbative instanton contributions for a wide range of values of N . In this section we will first determine the generating function for SU (N ) starting from the conjectural functional form (1.6)-(1.7) of B SU(N ) (t). We will then prove that the same generating function can be derived from properties of correlation functions in N × N hermitian matrix models. To begin with we will determine various properties of the relevant generating functions. The function C SU (z; τ,τ ) (1.9) can be obtained by substituting the expression for B SU(N ) (t) (1.7) into B SU (z; t) := ∞ N =1 B SU(N ) (t)z N and making use of the generating function for Jacobi polynomials, which leads to For future reference we notice that the generating function (2.2) has a branch-cut located on the interval z ∈ 1, (t+1) 2 (t−1) 2 . This generating function satisfies several properties of note, The first of these equations is an inversion relation that follows automatically from the lattice sum definition of the integrated correlator (1.2), as was pointed out in [25] where the lattice sum was re-expressed in terms of a modular invariant spectral representation. The second equation in (2.4) is an inversion relation in the variable z which relates the SU (N ) correlator with coupling g 2 YM to the SU (−N ) correlator with coupling −g 2 YM , as previously discussed in [19]. The Laplace difference equation (1.8) satisfied by the integrated correlator C SU(N ) (τ,τ ) translates into a partial differential equation in (z, t) for B SU (z; t), Derivation from the hermitian matrix model We will now prove that the generating function is indeed given by the conjectured form (2.2), and that the integrated correlator, C GN (τ,τ ), satisfies the Laplace difference equation (1.8), using properties of correlation functions in the N ×N hermitian matrix model. Our procedure will be based on the analysis of the perturbative contribution to the integrated correlator C GN (τ,τ ). It was argued in [18,19] (see also [25]) that once the correlator is known to have the form in (1.2), the functions B i GN (t) are completely determined by the perturbative contributions to C GN (τ,τ ). Given complete knowledge of C SU(2) (τ,τ ) [18], this will uniquely determine the SL(2, Z) invariant form for C GN (τ,τ ) . Although we will continue to present the SU (N ) case in this section, the results extend straightforwardly to a general classical gauge group, G N . As shown in [19], the perturbative terms lead to the relation where the function I SU(N ) (x) is defined in (A.12) and determines the perturbative part of C pert SU(N ) (τ,τ ) via (A.10). The generating function B SU (z; t) can then be expressed as As explained in appendix A, I SU(N ) (x) can be obtained from a specific combination of one-point and two-point correlation functions in the N ×N hermitian matrix model [15], as reviewed in (A.11). 
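As an aside, the resummation of B SU(N ) (t) into B SU (z; t) earlier in this section rests on the classical generating function for Jacobi polynomials. Since the explicit expressions (1.7) and (2.2) are not reproduced in this excerpt, the snippet below only verifies that classical identity numerically, for illustrative index values that are not necessarily those appearing in the paper.

```python
# Numerical check of the classical Jacobi generating function
#   sum_{n>=0} P_n^{(a,b)}(x) z^n = 2^(a+b) R^(-1) (1 - z + R)^(-a) (1 + z + R)^(-b),
#   R = sqrt(1 - 2*x*z + z^2),
# the identity used to resum B_{SU(N)}(t) into B_SU(z; t).  The indices a, b and
# the sample point x0 below are illustrative only.
import sympy as sp

z = sp.symbols('z')
a, b = sp.Rational(3, 2), -sp.Rational(1, 2)
x0 = sp.Rational(1, 3)

R = sp.sqrt(1 - 2*x0*z + z**2)
closed = 2**(a + b) / R * (1 - z + R)**(-a) * (1 + z + R)**(-b)

order = 8
taylor = sp.series(closed, z, 0, order).removeO()
for n in range(order):
    lhs = sp.jacobi(n, a, b, x0)          # P_n^{(a,b)}(x0)
    rhs = taylor.coeff(z, n)
    assert abs(float(lhs - rhs)) < 1e-12
print("Jacobi generating-function identity verified to order", order)
```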
Using this relation and the expression for the generating functions of the hermitian matrix model correlators given in (A.8) and (A.9), we have where the first line is the contribution arising from the matrix model two-point function and the second is from the square of the one-point function. Upon performing the u 2 contour integral around u 2 = 0 and u 2 = z/u 1 the result is . (2.10) Substituting the above expression for I SU (z; x) in (2.8) and performing the x integral leads to an expression that has a pole at t(u 1 − 1)(u 1 − z) + u 1 (1 − z) = 0. The u 1 contour integral picks up the residue at this pole, and gives the previously conjectured expression (2.2). The above argument provides a proof that B SU (z; t) obeys the differential equation (2.5). This proof was based on an analysis of the perturbative terms in the localised integrated correlator (these are the terms that arise from the one-loop determinant in Pestun's analysis [1]). This automatically ensures that the perturbative part of C SU(N ) (τ,τ ) satisfies the Laplace difference equation (1.8) with the Laplace operator replaced by τ 2 2 ∂ 2 τ2 , since perturbation theory is independent of τ 1 . Invariance under SL(2, Z) is restored in a unique manner by simply extending the differential operator τ 2 2 ∂ 2 τ2 to the Casimir operator ∆ τ = τ 2 2 (∂ 2 τ1 + ∂ 2 τ2 ) = 4τ 2 2 ∂ τ ∂τ , and the perturbative Laplace difference equation leads to (1.8). Importantly, in [18] it was explicitly verified that the initial term C SU(2) (τ,τ ) is indeed given by (1.2). Therefore, with this initial condition and the recursion relation (1.8), the SL(2, Z) invariant expression (1.2) for C SU(N ) (τ,τ ) follows. Fourier mode decomposition of the generating function We will now comment on some properties of the generating function, in particular its Fourier expansion with respect to τ 1 . This is obtained by performing a Poisson resummation that transforms the sum over m in (2.3) into a sum overm, 5 giving In order to analyse the large-N expansion it will prove useful to transform (2.11) from an integral over t in the range (0, ∞) to the range (1, ∞). For this purpose we split the t integral into the domains t ∈ (0, 1) and t ∈ (1, ∞). The change of variables t → t −1 maps the (0, 1) interval into (1, ∞). We may then rewrite this integral using the inversion property , together with the interchange (m, n) → (n,m). This results in the following Fourier series, where the second line follows from a Poisson resummation back to the original variables (m, n) → (m, n). For future reference, we note that the zero Fourier mode of C SU (τ,τ ) is given by the sum of two kinds of terms. The first kind consists of a sum of terms in the second line of (2.12) with m = ℓ ∈ Z, n = 0. The second kind consists of a sum of terms withm = 0, n = ℓ ∈ Z in the first line of (2.12), which is equivalent to a sum over all m ∈ Z and n = ℓ ∈ Z with ℓ = 0. The resulting zero mode can be expressed as where we have used the property Figure 1: (a) The contour C encircling the pole at z = 0. (b) The distorted contour C ′ encircles the cut, together with the contour at infinity, C ∞ , which gives a vanishing contribution. We also note We will return to these expressions when we consider the large-N expansion in the next two sections. Large-N expansion at fixed τ The large-N expansion of the integrated four-point correlator has a close relation to the α ′ -expansion of the integrated four-graviton amplitude in AdS 5 × S 5 . 
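As a side check of the step just used: the passage between the lattice sum and its Fourier series, (2.3) and (2.11), is a Poisson resummation. The toy Gaussian example below (a stand-in for the paper's actual summand) shows the mechanism, namely ∑_{m∈Z} e^{−πam²} = a^{−1/2} ∑_{m̂∈Z} e^{−πm̂²/a}.

```python
# Toy check of Poisson resummation, the step used to pass between (2.3) and (2.11):
#   sum_{m in Z} exp(-pi*a*m^2)  =  a^(-1/2) * sum_{mhat in Z} exp(-pi*mhat^2/a).
# The Gaussian summand is illustrative, not the paper's summand.
import math

def direct(a, cutoff=200):
    return sum(math.exp(-math.pi * a * m * m) for m in range(-cutoff, cutoff + 1))

def resummed(a, cutoff=200):
    return sum(math.exp(-math.pi * mh * mh / a)
               for mh in range(-cutoff, cutoff + 1)) / math.sqrt(a)

for a in (0.1, 0.7, 3.0):
    lhs, rhs = direct(a), resummed(a)
    print(f"a = {a}: {lhs:.12f}  vs  {rhs:.12f}")
    assert abs(lhs - rhs) < 1e-10
```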
These properties were elucidated in [3,5,16] where the 1/N expansion was considered in both the 't Hooft limit (in which λ = g 2 YM N is fixed) and in the limit in which g YM is fixed. In this section we will see that the large-N expansion of C SU(N ) (τ,τ ) is streamlined when expressed in terms of the integral representation given by the contour integral as follows from (1.10) and the second line of (2.12). Furthermore, we will see that this provides a natural procedure for determining the non-perturbative terms that complete the 1/N expansion. We proceed by splitting the sum in (3.1) into the (m, n) = (0, 0) component and the rest where the constant term c 1 (N ) is given in (2.15). The integration contour C, shown in figure 1(a), can be distorted into the sum of the contour at infinity, C ∞ , together with the contour C ′ surrounding the branch cut, as shown in figure 1(b). The contour at infinity does not contribute since B SU (z; t) = O(1/z 2 ) as |z| → ∞. The resulting integral is given by where the new contour of integration C ′ is a clockwise contour surrounding the branch-cut located on the interval z ∈ [1, (t+1) 2 (t−1) 2 ]. The discontinuity across the branch-cut can easily be computed Although we would like to write the discontinuity is not quite an integrable function due to the end-point singularities (z−1) However if we regularise the integral by replacing the singular end-point factors by (z − 1) −α and (z 1 − z) −β and then take the limit α → 3/2 and β → 7/2 after integration, the result is This expression is perfectly regular and it is straightforward to check that it reproduces the correct answer since it is identical to (1.6) for any value of N ∈ N. Saddle-point analysis In order to analyse the behaviour of C SU(N ) (τ,τ ) at large N we will find it convenient to consider the distinct contributions of different regions of the z integration. We will see that the region near the endpoint z ∼ 1 produces the series of terms that are perturbative in 1/N , while the region near the endpoint z ∼ z 1 produces terms that are exponentially suppressed in N . To see this it is convenient to perform the change of variables z = e µ and consider We will separate this integral into the sum of two pieces by writing where the superscripts P and NP indicate the pieces that contain terms perturbative and non-perturbative in 1/N in the large-N regime. Note that the discontinuity DiscB SU (z; t) in (3.4) itself has a discontinuity along the interval z ∈ [z 1 , ∞) with z 1 given in (3.6). For this reason, when we extend the upper limit of the domain of integration from , we have to specify on which side of the branch cut we are integrating. Equation (3.9) can be thought of as a lateral Borel resummation with respect to the complexified parameter N , where the integrand DiscB SU (e µ ; t)/(2πi) plays the rôle of Borel transform for the large-N perturbative expansion (see e.g. [28] for a recent introduction to resurgence). The ±iǫ deformation in the upper limit of (3.9) will become irrelevant when we only consider the formal, asymptotic large-N expansion of (3.9). As expected from resurgence theory, both lateral resummations defined in (3.9) will give rise to the same formal asymptotic power series in N . However, precisely due to the branch-cut singularity of the integrand DiscB SU (e µ ; t)/(2πi), there is an "ambiguity" in resummation of the perturbative expansion, reflected by the ±iǫ in (3.9). 
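The ±iǫ prescription in (3.9) is a lateral Borel resummation, and the toy example below (unrelated to the actual Borel transform DiscB SU) shows the mechanism in its simplest setting: for the factorially divergent series ∑_n n! x^n the Borel transform 1/(1 − t) has a pole on the integration ray, the two lateral integrals share a common real part, and their difference is the purely non-perturbative 2πi e^{−1/x}/x.

```python
# Lateral Borel resummation of the toy series sum_n n! x^n.  The Borel transform
# 1/(1 - t) has a pole on the positive real axis, the analogue of the branch cut
# of DiscB_SU in (3.9).  The two lateral integrals differ by 2*pi*i*exp(-1/x)/x.
import mpmath as mp

x = mp.mpf('0.1')

def lateral(theta):
    # integrate exp(-t/x)/(1 - t) along the ray t = s*exp(i*theta), s in (0, inf)
    phase = mp.exp(1j * theta)
    f = lambda s: mp.exp(-s * phase / x) / (1 - s * phase) * phase
    return mp.quad(f, [0, 1, 5, mp.inf]) / x

S_plus, S_minus = lateral(+0.3), lateral(-0.3)
print("real parts:        ", mp.re(S_plus), mp.re(S_minus))
print("difference:        ", S_plus - S_minus)
print("2*pi*i*e^(-1/x)/x: ", 2j * mp.pi * mp.exp(-1/x) / x)
```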
This ambiguity in resummation has to be compensated by the non-perturbative corrections which are fully encoded in (3.10), which in resurgence language can be related to the Stokes automorphism. From a resurgence point of view, when we correlate the resummation of the perturbative expansion, (3.9), with the corresponding non-perturbative corrections, (3.10), we obtain the unambiguous and exact result (3.8), usually called median resummation. As will become clear shortly, this is a manifestation of the similar ambiguity and median resummation in the 't Hooft coupling resurgent expansion described in [18]. Let us first focus on re-deriving the large-N perturbative expansion, previously derived in [18,17]. Rescaling µ → µ/N and then expanding DiscB SU (e µ N ; t) around the point µ/N = 0 (i.e. near z = 1) leads to (3.11) We now substitute this expansion into the integral for B P SU(N ) (t) (3.9) taking care to make the modification µ − 3 2 → µ −α , as explained earlier. Setting α = 3 2 after performing the µ integral we obtain which agrees precisely with equation (5.56) in [18] (allowing for a factor of 2 change in our normalisation conventions). Upon substituting this asymptotic expansion into (3.2) we see that the constant term c 1 (N ), defined in (2.15), combines with a Dirichlet regularisation for the lattice-sum in the zero-mode sector, thereby reproducing the correct constant N 2 /4. What is left leads to an infinite series of terms that contribute to the large-N perturbative terms of C SU(N ) (τ,τ ). These remaining non-constant terms are power behaved in 1/N and with coefficients given by finite rational sums of non-holomorphic Eisenstein series of half-integral index, reproducing the results of [18,17]: 6 where δ r = 0 for even r and δ r = 1 for odd r and the coefficients b r,m can be found in those references, or can be obtained from B P SU(N ) (t) as expanded in (3.12). As previously mentioned, the large-N expansion of B P SU(N ) (t) produces a purely perturbative yet formal, asymptotic power series at large N and it is insensitive to the ±iǫ deformation of the contour of integration (3.9). The ambiguity in resumming (3.12) to (3.9), or equivalently in resumming (3.13), is compensated by the change in non-perturbative corrections exponentially suppressed in N at large N and fully captured by (3.10). This may be exhibited as follows, Substituting this expression into (3.3) leads to a t integral that is dominated by a saddle point when , and h(N, t) contains only power-behaved terms at large N . The integrand has two saddle points located at A simple thimble analysis shows that only the saddle t ⋆ 1 is connected with the contour of integration of interest. 7 The "on-shell" expression for such a saddle is given by where the function N A(x) is the saddle-point action and A(x) is given by This expression is identical to that recently found in a resurgence analysis of the Fourier zero mode of the integrated correlator in [26]. As pointed out in that paper, the function A(x) coincides with the D3-brane action discussed in [27] in the evaluation of Wilson loops in large representations of the SU (N ) gauge group. We will return to a discussion of this connection shortly. For now, we note the behaviour of A(x) in the small x is given by, Since the lattice sum over (m, n) in (3.3) is convergent, in the large-N limit with τ fixed we may expand the summand first. 
In particular, to leading order we have In order to understand the behaviour more generally we need to determine fluctuations of the exponent (or the "action") in (3.15) where t ⋆ 1 is given in (3.16), the on-shell action S(t ⋆ 1 ) appears in (3.17), and Upon expanding the exponential of the action in powers of δ and performing gaussian integrals over δ, we obtain the exponentially suppressed terms in the large-N limit. In the 1/N expansion, these terms are given by a sum over new non-holomorphic modular invariant functions D N (s; τ,τ ), where D N (s; τ,τ ) takes the following form , which has to be specified [18]. In other words the coefficients of the terms in the second line of (3.29) are all determined once we input the coefficients with value of m = 0. Furthermore, as was noted in [18], the perturbative coefficients b r,⌊r/2⌋ are determined by the leading N 2 term in the 't Hooft limit. The same is true for the coefficients d r,r that are also determined by the N 2 term in the 't Hooft limit. Such a contribution was determined in [18] using resurgence methods 9 , and it was shown (equation (5.39) of [18]) that d r,r is given by where a r is determined by the following recursion relation, r(r − 4)(r + 2)(2r 2 + 2r − 9)a r + 3(2r 4 − 17r 2 + 9r + 39)a r+1 + 2(r + 2)(2r 2 − 2r − 9)a r+2 = 0 , In conclusion, by summing both the perturbative (3.13) and non-perturbative (3.24) contributions in the 1/N expansion we find that the large-N expansion of the SU (N ) integrated correlator (1.2) has the following structure, This expression has to be understood as the formal yet complete large-N transseries expansion of the integrated correlator C SU(N ) (τ,τ ). The first line of (3.29) is an asymptotic series in the large-N expansion, which was obtained in [5,18]. The second line gives the exponential corrections that are discussed in this paper and encoded in B NP SU(N ) (t). It should be stressed that the apparent ±i ambiguity in (3.29), i.e. the jump in the Stokes constant, has to be understood from a resummation point of view. The first line of (3.29) is a formal asymptotic power series which can be resummed using (3.9), while the seemingly ambiguous non-perturbative terms given by the second line of (3.29) can be resummed using (3.10). The sum of these two resummations produces our unambiguous starting equation (3.8), i.e. Finally, it would be interesting to re-derive the non-perturbative corrections in (3.29) from the large-N expansion of the spectral decomposition of (1.2) discussed in [25]. Holographic interpretation We will now briefly discuss the holographic interpretation of the terms that are exponentially suppressed in the large-N limit. This is the large-N limit in which contributions of Yang-Mills instantons, which are of order e −2πkτ2 , are not suppressed, whereas they are exponentially suppressed in N in the 't Hooft limit. Such contributions arise in the non-zero Fourier modes of the Eisenstein series in the first line of the expression for C SU(N ) (τ,τ ) in (3.29), and are dual to the contributions of D-instantons to terms in the low energy expansion of the holographically dual string theory. Contributions to the integrated correlator in the large-N limit with fixed g 2 YM of the form (3.17) are exponentially suppressed in the large-N limit. 
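The recursion relation for a r just quoted can be iterated mechanically; a minimal sketch follows. The two seed values are placeholders, since the genuine initial data (fixed by the explicit SU(2) result of [18]) are not reproduced in this excerpt.

```python
# Iterate the recursion quoted above,
#   r(r-4)(r+2)(2r^2+2r-9) a_r + 3(2r^4-17r^2+9r+39) a_{r+1}
#                               + 2(r+2)(2r^2-2r-9) a_{r+2} = 0 ,
# solving for a_{r+2}.  The seeds a_0, a_1 are placeholders only.
from fractions import Fraction

def step(r, a_r, a_r1):
    c0 = r * (r - 4) * (r + 2) * (2 * r**2 + 2 * r - 9)
    c1 = 3 * (2 * r**4 - 17 * r**2 + 9 * r + 39)
    c2 = 2 * (r + 2) * (2 * r**2 - 2 * r - 9)
    return -(c0 * a_r + c1 * a_r1) / Fraction(c2)

a = [Fraction(1), Fraction(1)]          # placeholder seeds a_0, a_1
for r in range(8):
    a.append(step(r, a[-2], a[-1]))
print(a)
```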
The holographic string theory interpretation of such contributions uses the identifications g 2 YM = 4πg s = 4π/τ 2 and g 2 YM N = L 4 /α ′ 2 , where g s is the string coupling constant, α ′ is the square of the string length scale, and L is the scale of the AdS 5 × S 5 space [29,30,31]. Therefore, the large-N expansion of the correlators with fixed g YM translates into the small-α ′ expansion of string amplitudes with fixed g s . The existence of the terms that are exponentially suppressed in N reflects the fact that the α ′ expansion of string amplitudes with fixed string coupling is an asymptotic series. After these replacements the expression (1.13) has the form of a sum over instanton contributions that correspond to ℓ coincident euclidean world-sheets of (p, q)-strings (with gcd(p, q) = 1) wrapped on a two dimensional manifold of volume L 2 . Here we are identifying the tension of a (p, q)-string [32,33] with where gcd(p, q) = 1 and T 1,0 = T F := 1/(2πα ′ ) is the fundamental string tension. Translating to string theory parameters, the exponential terms in (1.13), or equivalently (3.24), become The sums in this expression include a sum over multiple copies (indexed by ℓ) of euclidean (p, q)-string world-sheets. When g s ≪ 1 (i.e. near the cusp τ 2 ≫ 1) the fundamental string world-sheets dominate while other (p, q)-string instantons dominate for other values of g s obtained by the appropriate action of SL(2, Z) on τ . The complete exponentially suppressed contributions are given in (3.29) in a modular invariant form. We have not evaluated the contribution of these instantons explicitly from string theory, but the factor of 4πL 2 in the exponent in (3.32) suggests the contribution of ℓ coincident (p, q)-string euclidean world-sheets wrapping a great two-sphere, S 2 , on the equator of the five-sphere, S 5 . Although it is not obvious how such configurations would be stabilised, it is notable that their contribution to the integrated correlator (3.29) has an overall factor of i, which is characteristic of a negative fluctuation mode (more generally, an odd number of negative modes). Indeed, a two-sphere on the equator of the five-sphere would provide a saddle point that is reminiscent, from a resurgence point of view, of uniton solutions in the principal chiral model [34,35]. The semi-classical origin of such contributions certainly deserves further study. Similarly, it would be interesting to develop a more detailed understanding of the holographic interpretation of the saddle-point action N A( Y mn /4N ) that arose in (3.17) with A(x) defined in (3.18). As pointed out in [26] the same function appeared in the analysis [27] of multiply-wrapped Wilson loops in N = 4 SU (N ) SYM in the 't Hooft limit, which is holographically described in terms of a minimal surface bordering the loop and embedded in AdS 5 . In that case the argument of A(x) was given by x = k √ λ/(4N ) with k being the winding number of a wound Wilson loop. According to [27], such a multiply wound Wilson loop can be effectively described by a four-dimensional embedded euclidean D3-brane carrying electric flux, with an action given by N A(x). In the present context the holographic connection with the contribution of a D3-brane is hinted at by a naive application of the AdS/CFT dictionary, which includes the identification N = L 4 /(4πα ′ 2 g s ) = 2π 2 L 4 T D3 , with T D3 the D3-brane tension. 
This suggests that when x is a fixed constant, the quantity N A(x) should be identified with the action of a euclidean D3 world-volume wrapped on a four-manifold. We now turn to consider special large-N limits in which the 't Hooft coupling λ = g 2 YM N is chosen as the independent coupling so g 2 YM may depend on N . In this way we will see how the non-perturbative results that were previously obtained by resurgence techniques in [18], [25] and [26] can be viewed as special limits of the SL(2, Z)-invariant expression (3.24). Correspondence with resurgence results We will now consider the large-N expansion in which λ = N g 2 YM = 4πN/τ 2 is an independent parameter in the range 1 ≪ λ ≪ N , which is the familiar strongly coupled 't Hooft limit. In this case, the contributions of Yang-Mills instantons with instanton number k are O(e −2πkN/λ ). In other words, in this regime Yang-Mills instantons, which are present at every order in the 1/N expansion, are exponentially suppressed in N . However, order by order in 1/N , the large λ perturbative expansion is not Borel summable, but can be completed via a resurgence argument. In [18] this argument was shown to give rise to further exponentially suppressed contributions of order O(e −2 √ λ ). Here we will see that this completion follows very simply from the SL(2, Z)-invariant expression obtained in the previous section. For λ in the range 1 ≪ λ ≪ N the dominant contributions to the exponentially suppressed terms are those associated with the m = ℓ, n = 0 terms (the (ℓ, 0) terms) in (3.24). In the holographic interpretation (3.32) these correspond to the contribution of ℓ coincident (1, 0)-string (i.e. fundamental string) world-sheet instantons. Since q = 0 for these contributions they are independent of τ 1 and only receive contributions from the zero mode of the integrated correlator. In this range of λ, we can rearrange these terms to take the form where the superscript F indicates that here we are considering only F -string (i.e. (1, 0)-string) world-sheet instantons. The functions ∆C (g) (λ) (denoted by ±i∆G (g) (λ)/2 in [18]) contain all the exponentially suppressed large-λ terms of the form where f g (ℓ √ λ) is a perturbative series in 1/ √ λ. The second term in parentheses in (2.13) is the remaining contribution to the zero Fourier mode of the modular function C NP SU(N ) (τ,τ ) appearing in (3.24). This involves the zero-mode term D (0),ii N (s; τ 2 ) defined in (B.13). As emphasised before (2.13) this contribution is obtained by the zero mode of the sum over all values of (m, n) with the exception of terms with n = 0 or, in other words, by the zero mode of the infinite sum over all the multiple copies of (p, q)-strings with q ≠ 0. We will now see that this contribution is proportional to e −8πℓN/ √ λ with ℓ ∈ N and ℓ ≠ 0. These remaining terms can be rewritten as where we have defined the "dual" 't Hooft coupling λ̃ = (4πN ) 2 /λ. The superscript R denotes the sum of the contribution of all terms that remain in the zero mode of the non-perturbative completion (3.24) apart from the (ℓ, 0) sector. The functions ∆C (g) (λ̃) (denoted ±i∆G (g) (λ̃)/2 in [26]) contain all the terms that are exponentially suppressed in the "dual" 't Hooft coupling, which have the form e −2ℓ√λ̃ = e −8πℓN/√λ with ℓ ∈ N and ℓ ≠ 0. 
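A quick consistency check of the two exponential scales just described can be made with identifications already stated in the text (g 2 YM = 4πg s , λ = g 2 YM N = L 4 /α ′ 2 , T F = 1/(2πα ′ )), together with the standard statement (an input not spelled out in this excerpt) that a (0, 1)-string at τ 1 = 0 has tension T F /g s : wrapping an area 4πL 2 then costs 2√λ for the F-string and 8πN/√λ = 2√λ̃ for the D-string. The sketch below only verifies this small piece of algebra symbolically.

```python
# Symbolic check that the two suppression scales quoted in the text follow from
# the holographic dictionary g_YM^2 = 4*pi*g_s, lambda = g_YM^2*N = L^4/alpha'^2,
# T_F = 1/(2*pi*alpha'), plus the assumption T_{0,1} = T_F/g_s at tau_1 = 0:
#   T_F * 4*pi*L^2        = 2*sqrt(lambda)          (F-string on area 4*pi*L^2)
#   (T_F/g_s) * 4*pi*L^2  = 8*pi*N/sqrt(lambda)     (= 2*sqrt(lambda_dual))
import sympy as sp

N, lam, alphap = sp.symbols('N lambda alphap', positive=True)

L2 = sp.sqrt(lam) * alphap          # L^2 = sqrt(lambda) * alpha'
g_s = lam / (4 * sp.pi * N)         # from g_YM^2 = 4*pi*g_s and lambda = g_YM^2*N
T_F = 1 / (2 * sp.pi * alphap)

f_string = sp.simplify(T_F * 4 * sp.pi * L2)
d_string = sp.simplify((T_F / g_s) * 4 * sp.pi * L2)
lam_dual = (4 * sp.pi * N) ** 2 / lam

print(f_string)                                              # -> 2*sqrt(lambda)
print(sp.simplify(d_string - 8 * sp.pi * N / sp.sqrt(lam)))  # -> 0
print(sp.simplify(d_string - 2 * sp.sqrt(lam_dual)))         # -> 0
```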
10 By contrast with the (1, 0)-string case, these remaining non-perturbative terms do not have a simple holographic interpretation since they arise from the zero-mode contribution of the infinite sum over all the multiple copies (labelled by ℓ) of (p, q)-strings with q ≠ 0. The contributions (4.1) to the zero mode were also found in [18,26] from resurgence arguments at large-λ, while the terms (4.3) were found in [26] from similar reasoning at large-λ̃, i.e. 1 ≪ λ̃ ≪ N or equivalently N ≪ λ ≪ N 2 . 11 However, it should be stressed that the non-zero Fourier modes, which depend on τ 1 , cannot be determined easily by resurgence. These contributions are suppressed relative to the (1, 0)-string instanton contribution (4.1). In the regime λ = O(N 2 ) (i.e. λ̃ = O(1)) the on-shell saddle-point action A(x) should not be expanded for small x as in (3.20). Schematically we obtain where the precise details of the expansion coefficients F k (x) are presented in (C.14). A similar analysis applies to the remaining contribution to the zero Fourier mode when λ = O(1) (i.e. λ̃ = O(N 2 )). In that case the non-perturbative R contribution can be rearranged to give the expansion where details of the coefficients F̃ k (x) are presented in (C.23). In these regimes, the two zero-mode contributions to (3.24) expanded as in (4.4) and (4.5) become identical to the expressions in [26], which were obtained from the asymptotic behaviour at large order of the large-N genus expansion in the large-λ̃ or large-λ regimes respectively, i.e. from the large-g behaviour of (4.3) and (4.1) respectively. In [26] the non-perturbative factors e −N A(ℓπ/√λ̃) and e −N A(ℓπ/√λ) in (4.4) and (4.5) were called "electric" and "magnetic" D3-brane instanton contributions in analogy with the results for multiply wound electric Wilson loop and magnetic 't Hooft loop derived in [27]. However, although the first zero-mode contribution (4.4) does indeed come from the complete sum over ℓ coincident (1, 0)-string instantons, the second term arises as the zero-mode contribution from the infinite sum over ℓ coincident dyonic (p, q)-string instantons with q ≠ 0. 10 The existence of such non-perturbative terms was first predicted in [25]. 11 The exponential factor e −2√λ̃ coincides with the leading factor contributing to a (0, 1)-string world-sheet instanton when τ 1 = 0. This coincidence may be the reason that these contributions are referred to as "D-string instantons" in [25,26]. Finally, more generally there are contributions from the non-zero modes that are also exponentially suppressed. In particular, the non-zero Fourier modes of D N (s; τ,τ ) are given in (B.13) and (B.19). These are the contributions from (p, q)-string instantons for which the exponential suppression is consistent with S-duality. We will not discuss these explicitly since their analysis involves a straightforward extension of the zero-mode analysis. Integrated correlators with other classical gauge groups The expression for the SU (N ) integrated correlator, C SU(N ) , was extended to correlators for theories with general classical gauge groups G N = U Sp(2N ) and G N = SO(n) (with n = 2N, 2N + 1) in [19]. Recall that S-duality (Montonen-Olive duality) in the SU (N ) case is generalised to Goddard-Nuyts-Olive (GNO) duality [24] for general Lie groups. The correlators considered here are only sensitive to local properties of S-duality and not to global features that involve the centre of the gauge groups and their duals. 
Such global properties are an essential feature of more general considerations, but here we only need to consider transformations associated with the Lie algebras. These duality transformations correspond to the following interchanges, which relate the expressions for the integrated correlators in [19] to each other, so we need only focus on SO(n) gauge groups (in addition to the SU (N ) case described earlier). Furthermore, recall that GNO duality implies invariance under the action of SL(2, Z) on the complexified coupling τ in the correlators with SU (N ) and SO(2N ) gauge groups, while in the SO(2N + 1) and U Sp(2N ) cases the correlators are invariant under Γ 0 (2), which is a congruence subgroup of SL(2, Z). The corresponding generating functions are defined by where B 1 SO(n) (t) and B 2 SO(n) (t) are related to C SO(n) in (1.2). In this section (and appendix D), we will derive the generating functions B 1 SO (z; t) and B 2 SO (z; t), and study the large-N properties of the integrated correlators. Generating functions for other classical gauge groups We begin by considering the generating function B 2 SO (z; t). Recall, that B 2 SO(2N ) (t) = 0, so we will focus on SO(2N + 1) gauge groups. As shown in [19], B 2 SO(2N +1) (t) can be expressed as where and L j is the Laguerre polynomial. To proceed, we use the contour integral representation of the Laguerre polynomial, where the contour C circles the origin once in a counterclockwise direction. We can then perform the integration over x in (5.3), which leads to and the generating function is given by The relevant poles are located at t 1 = ±z, and the sum of their residues gives the final result As mentioned earlier, B 2 SO(n) (t) vanishes for n = 2N . This is reflected in the above expression for B 2 SO (z; t), which is an odd function of z, and therefore only receives contributions from SO(2N + 1). It will prove important that the singularity structure of B 2 SO (z; t) is rather simple, with just two poles in z located at with equal and opposite residues since B 2 SO (z; t) = −B 2 SO (−z; t). We will now consider B 1 SO (z; t) := ∞ n=1 B 1 SO(n) (t)z n , where B 1 SO(n) (t) is given by [19] and where B SU (z; t) is the generating function of the SU (N ) integrated correlator given in (2.2), and the functions F 1 (z; t) and F 2 (z; t) are given by, and x 3 (5.14) + 211z 4 + 556z 3 + 706z 2 + 556z + 211 x 5 − 4 (7z + 2) x 4 + 2 106z 2 + 125z + 12 x 3 − 2 383z 3 + 797z 2 + 451z + 17 x 2 + 1583z 4 + 4286z 3 + 4446z 2 + 1758z + 23 x − 2 1013z 5 + 2813z 4 + 3752z 3 + 2888z 2 + 1051z + 3 , where x = t(1 − z) + 2(z + 1). Given the above expressions, it is easy to see what the singularity structure of (5.12) is. There are two poles located at as well as the same branch points as for the SU (N ) case, located at It is straightforward to verify that B 2 SO (z; t) satisfies the following homogenous differential equation, while B 1 SO (z; t) obeys an inhomogeneous differential equation given by where B SU (z; t) is the generating function of the SU (N ) correlator given in (2.2). These differential equations imply a Laplace difference equation for the integrated correlator 12 C SO(n) (τ,τ ) (with 'source terms' C SU(n) (τ,τ )) as found in [19] ∆ τ C SO(n) (τ,τ ) − 2c SO(n) C SO(n+2) (τ,τ ) − 2 C SO(n) (τ,τ ) + C SO(n−2) (τ,τ ) − n C SU(n−1) (τ,τ ) + (n − 1) C SU(n) (τ,τ ) = 0 , where the central charge is c SO(n) = n(n − 1)/8, and a similar equation for U Sp(n) (with n = 2N ) with central charge c USp(n) = n(n + 1)/8. 
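The contour-integral representation of the Laguerre polynomial invoked above is equivalent to the classical generating function ∑_{n≥0} L_n(x) t^n = (1 − t)^{−1} e^{−xt/(1−t)}, so that L_j(x) is the residue at t = 0 of that function divided by t^{j+1}. Since the paper's exact formula is not shown in this excerpt, the check below is of this classical identity only.

```python
# Check of the classical Laguerre generating function behind the contour
# representation used above:
#   sum_{n>=0} L_n(x) t^n = (1-t)^(-1) * exp(-x*t/(1-t)),
# so L_j(x) is the residue at t = 0 of (1-t)^(-1) exp(-x*t/(1-t)) / t^(j+1).
import sympy as sp

t, x = sp.symbols('t x')
gen = sp.exp(-x * t / (1 - t)) / (1 - t)

order = 6
taylor = sp.series(gen, t, 0, order).removeO()
for n in range(order):
    assert sp.simplify(taylor.coeff(t, n) - sp.laguerre(n, x)) == 0
print("Laguerre generating function verified to order", order)
```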
The above Laplace difference equation uniquely determines C SO(n) (τ,τ ) and C USp(n) (τ,τ ) for any n in terms of C SU(2) (τ,τ ) [19]. Just as we argued previously in section 2.1 for the SU (N ) case, this ultimately leads to the expression (1.2) for the integrated correlator with SO(n) and U Sp(2N ) gauge groups. 12 Note that for the special case SO (3), the integrated correlator is given by C SO(3) (τ /2,τ /2) rather than C SO(3) (τ,τ ) [19]. Large-N expansion for other classical gauge groups It is well known that N = 4 SYM with SO(n) gauge group is dual to type IIB string theories in an orientifold with background AdS 5 × (S 5 /Z 2 ) [36,37]. In considering the large-n expansion of the integrated correlators with SO(n) gauge group, it was seen in [19] that it is important to consider the expansion in inverse powers of the combinationÑ which is the total RR flux in the holographic dual theory, and we have (L/ℓ s ) 4 = 2g 2 YMÑ (where ℓ s is the string length scale). It is easy to see that the contribution from B 2 SO(n) (t) decays exponentially in the large-Ñ limit [19]. Powers of 1/Ñ arise only from the large-Ñ expansion of B 1 SO(n) (t), and were obtained in [19] (see also [4]). They take exactly the same form as in the large-N expansion of the SU (N ) correlator (3.13) but with N replaced by 2Ñ , where, again, δ r = 0 for even r and δ r = 1 for odd r and the superscript P indicates that these are the perturbative contributions in 1/Ñ . This expression can be derived from the generating function B 1 SO (z; t) following the procedure given in section 3 for the SU (N ) case. The coefficientsb r,m are also determined by the Laplace difference equation (5.19), and some examples of them can be found in [19]; in factb r,⌊r/2⌋ is identical to b r,⌊r/2⌋ of the SU (N ) correlator in (3.29), as emphasised in [4,19]. We will now focus only on terms that decay exponentially and, since the analysis is very similar to that of the SU (N ) correlator, we will only list the results. Repeating the argument leading to (3.3) from (3.1), we proceed by closing the z contour of integration, C, at infinity and collect the various contributions from the singular points of B i SO (z; t) in the complex z-plane. We begin with the contribution from B 2 SO (z; t) given in (5.8), which has two poles in z located at z = ± (t+1) (t−1) as shown in (5.9). In the large-Ñ expansion, this leads to exponentially decaying terms of the form where n = 2N or 2N + 1 and the superscript NP indicates that these are non-perturbative terms inÑ at largeÑ . We see that C 2,NP SO(2N ) (τ,τ ) vanishes as expected. Importantly, the non-holomorphic functions that appear in C 2,NP SO(2N +1) (τ,τ ) have arguments 2τ and 2τ , which means that they are only invariant under the congruence subgroup Γ 0 (2) of SL(2, Z), as expected from GNO duality. Correspondingly, the exponential factor in DÑ (2m − 3r 2 − 3 4 ; 2τ, 2τ) has the form exp(−2 √ 2ℓ Ñ π τ2 |p + 2qτ |) and so only the (p, 2q)-string world-sheet instantons contribute. It is straightforward to show that C 2,NP SO(n) (τ,τ ) obeys the homogenous Laplace difference equation, which can be used to determine the coefficientsd r,m once the leading coefficientsd r,r are given. The non-perturbative terms are captured by the expansions near the other singular points. Let us begin with the pole contributions. From the first pole at z = (t+2) (t−2) we find ; τ,τ , (5.26) where interestingly the coefficientsd r,m are precisely the one we found from C 2,NP SO(n) (2τ, 2τ ) as given in (5.24). 
From the second pole at z = (1+2t) (1−2t) we obtain ; τ,τ , (5.27) where again the coefficientsd r,m are given by (5.24). Furthermore, just as in the case of C 2,NP SO(n) (τ,τ ), the pole contributions given in (5.26) and (5.27) obey the homogenous Laplace difference equation (5.25). Finally, just as in (3.14), in the case of SU (N ) gauge groups, the discontinuity of the Borel transform (5.12) along z ∈ [1, z 1 ] has a branch-cut of its own along z ∈ (z 1 , ∞) and this contribution takes exactly the same form as for the SU (N ) correlator (3.24) Furthermore the particular coefficients d ′ r,r appearing in the above equation are identical to the coefficients d r,r of C NP SU(N ) (τ,τ ) as given in (3.24)- (3.26). This phenomenon should be compared with the perturbative terms given in the second line of (3.13) for SU (N ) and in (5.22) for SO(n), for which we have an analogous relation between the coefficientsb r,⌊r/2⌋ and b r,⌊r/2⌋ , namelyb r,⌊r/2⌋ = b r,⌊r/2⌋ . Summing (5.26), (5.27) and (5.28), we obtain the complete non-perturbative contributions: The leading large-Ñ non-perturbative contributions to C 1,NP SO(n) come from (5.27), and are schematically of the form with T p,q the (p, q)-string tension defined in (3.31), and where we have generalised the holographic dictionary to the case of SO(n), so that τ 2 = 4π/g 2 YM and 2g 2 YMÑ = L 2 /α ′ . We see that the exponent in (5.31) is half that of the SU (N ) result (3.32). This can be understood by recalling that the SO(2N ) and SO(2N + 1) (and U Sp(2N )) theories are holographic duals of the type IIB string in an orientifold background AdS 5 × (S 5 /Z 2 ) that emerges from the near horizon geometry of N coincident parallel D3-branes coincident with a parallel orientifold 3-plane (O3-plane). Hence, just as the SU (N ) result (3.32) can be understood in terms of ℓ (p, q)-string world-sheets wrapping an equatorial S 2 inside S 5 , (5.31) should correspond to ℓ (p, q)-string world-sheets wrapping a maximal S 2 inside S 5 /Z 2 . Given the explicit expressions (5.23), (5.26) and (5.27), the semi-classical configurations responsible for such non-perturbative corrections should have different semiclassical origins. They may be local minima of the action or saddle points with an odd or even number of negative eigenvalues associated with the one-loop determinants. Starting from the preceding large-Ñ, fixed τ results we can extract the large-Ñ limit with fixed λ SO(n) = 2g 2 YMÑ = 8πÑ /τ 2 . The argument is similar to one we provided for the SU (N ) case in the preceding section and appendix C.1. The result is that the leading exponential terms contributing to the saddle-point approximation to C NP SO(n) = C 1,NP SO(n) +C 2,NP SO(n) are given by the zero-mode contribution D 13 Note that the non-perturbative term C 2,NP SO(n) (τ,τ ) in (5.23) is the only term that is not invariant under SL(2, Z), although and e −2 √λ SO(n) , respectively. As stressed earlier, since these results were obtained starting from the manifestly duality-invariant large-N limit with fixed τ , these different non-perturbative corrections are just facets of the sum over (p, q)-strings when expanded in different corners of the double-scaling limit of the parameter space (Ñ , τ ). A Hermitian matrix model and the integrated correlator We will here review a few basic properties of correlators of the N × N hermitian matrix model and their connections with the integrated correlator [15]. 
Following [38], 14 the connected m-point correlation functions of the matrix model are defined by, where the integration is over the space of N × N hermitian matrices, and the measure is normalised such that 1 = 1. One may introduce a partition function, in terms of which K i1,...,im (N ) is given by Following [39], it is convenient to introduce a generating functions for K i1,...,im (N ) of the form it is invariant under Γ 0 (2). The fact thatλ SO(n) is related to λ SO(n) by τ 2 → 1/τ 2 , which is a transformation not contained in Γ 0 (2), accounts for the fact that the large-λ SO(n) behaviour and the large-λ SO(n) behaviour of C 2,NP SO(n) (τ,τ ) are different. 14 We are grateful to Matteo Beccaria and Arkady Tseytlin for pointing out this reference. It is known that e N obeys Toda equations [38]. For example, e N (x 1 ) and e N (x 1 , x 2 ) satisfy where the initial (N =1) values are It is useful to introduce a generating function for the N -dependence of e N (x 1 , . . . , x n ) [40,38] that is given by The one-point function e N (x 1 ) was first obtained by Zagier and Harer [39], and the generating function e(x 1 ; z) is also known explicitly, [40,38] e(x 1 ; z) = z (1 − z) 2 exp The generating function for the two-point function e(x 1 , x 2 ; z) is given by an integral representation, The contour is around the poles at u 1 = 0 and u 2 = 0 after expanding e(x 1 , x 2 ; z) as a polynomial in z. We will now discuss the connection between the perturbative part of the integrated correlator and the matrix model correlators. The perturbative contribution to the integrated correlator (1.2) of the SU (N ) theory can be expressed in the form where y = πτ 2 . Importantly, it is known from [15] that I SU(N ) ω 2 y is related to the matrix model two-point and one-point functions introduced above by, 15 where L j i (x) is the generalised Laguerre polynomial. Interestingly, the above expression for I SU(N ) has a simpler form than the expression that was previously determined in [15,16] (see, for example, equation (A.39) in [16]). B Some properties of D N (s; τ,τ ) In this appendix we will study some basic properties of the non-holomorphic modular invariant functions D N (s; τ,τ ), defined in (3.25), which enter into the exponentially suppressed terms that complete the large-N expansion. Recall D N (s; τ,τ ) is defined as 16 It follows from the second line of this equation, (B.2), that the function D N (s; τ,τ ) can be expressed as the Poincaré sum where the seed function is given by It is straightforward to show that D N (s; τ,τ ) obeys the Laplace equation, When N = 0, D N (s; τ,τ ) reduces to the non-holomorphic Eisenstein series E(s; τ,τ ) and the above differential equation reduces to ∆ τ E(s; τ,τ ) − s(s − 1)E(s; τ,τ ) = 0 , (B.8) which is the well-known Laplace eigenvalue equations for the non-holomorphic Eisenstein series. For N > 0, the exponential part plays the rôle of a regulator, which ensures that the lattice sum (B.1) is convergent for all s. We will now consider the Fourier mode decomposition and focus on the zero mode, D N (s; τ 2 ). There are standard methods (see e.g. [44,45]), that allow us to derive the Fourier modes of a Poincaré sum (B.3) in terms of an integral transform of its seed function (B.4), but we will follow a different route here. To obtain the Fourier decomposition of (B.1) we first separate the sum over (m, n) = (0, 0) into two terms: (i) The sum over (m, 0) with m = 0; (ii) The sum over (m, n = 0) for m ∈ Z. 
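As an aside before completing the Fourier analysis of D N , the matrix-model correlators reviewed at the start of appendix A can be sampled directly in the simplest Gaussian case. The normalisation assumed below, ⟨M_ij M_kl⟩ = δ_il δ_jk, is an illustrative choice (the paper's precise potential and normalisation are not reproduced in this excerpt); with it the textbook moments are ⟨tr M²⟩ = N² and ⟨tr M⁴⟩ = 2N³ + N.

```python
# Monte-Carlo sanity check of low moments of an N x N hermitian one-matrix model.
# Assumed normalisation (illustrative only): Gaussian weight with
# <M_ij M_kl> = delta_il * delta_jk.  Expected: <tr M^2> = N^2, <tr M^4> = 2N^3 + N.
import numpy as np

rng = np.random.default_rng(0)
N, samples = 6, 20000

tr2, tr4 = 0.0, 0.0
for _ in range(samples):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    M = (A + A.conj().T) / 2            # hermitian, <M_ij M_kl> = delta_il delta_jk
    M2 = M @ M
    tr2 += np.trace(M2).real
    tr4 += np.trace(M2 @ M2).real

print("tr M^2:", tr2 / samples, " expected", N**2)
print("tr M^4:", tr4 / samples, " expected", 2 * N**3 + N)
```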
In order to consider case (ii), it is useful to eliminate the square root in (B.1) by introducing an integral representation for D N (s; τ,τ ), We can now use standard Poisson resummation to obtain the Fourier series for case (ii), which takes the form dt . (B.12) The zero mode is given by settingm = 0, giving Note that this second contribution can equally well be obtained from the zero-mode contribution of the sum over all the remaining terms (m, n) with m, n ∈ Z and n = 0, i.e. (B.14) As an example we can consider s = 0 and perform the t integral to obtain, Finally, it is straightforward to enlarge the space of non-holomorphic modular functions D N (s; τ,τ ) to the space of modular forms with holomorphic and anti-holomorphic weights (w, w ′ ), by acting on D N (s; τ,τ ) with appropriate covariant derivatives. This is analogous to the construction of non-holomorphic Eisenstein modular forms that entered in the expressions for maximal U (1) Y -violating correlators considered in [22]. In that case the relevant forms had weights (w, −w). We would therefore expect that the large-N expansions in that reference should require a non-perturbative completion by a series of weight (w, −w) modular forms D (w) N (s; τ,τ ). Following [22], a weight (w, −w) modular form D and transforms it into a (w + 1, w ′ − 1) form. C Saddle-point analysis of contributions to the zero mode This appendix presents details of the saddle-point analysis of the contributions to the zero mode, in the 't Hooft limit in which N → ∞ with 1 ≪ λ ≪ N . As remarked in the main text the expression for the zero mode of the generating function, (2.13), consists of two types of terms: (i) the sum over m = ℓ ∈ Z, n = 0, which is holographically dual to ℓ coincident (1, 0)-string world-sheet instantons; (ii) the zero mode of the sum over m ∈ Z and n = ℓ = 0 ∈ Z, i.e. the zero mode of the infinite sum over all the multiple copies (labelled by ℓ) of (p, q)-strings with q = 0. This is equivalent to settingm = 0 and summing over n = ℓ contributions, wherem is the integer conjugate to m in the Poisson summation. We stress that, unlike the (1, 0)-string sector (i), the zero-mode contribution (ii) does not have a simple holographic interpretation although it was called the D-string instanton sector in [25,26]. Since the D-string is usually defined to be the (0, 1)-string and depends on τ 1 in an essential way (3.32), this designation does not seem appropriate. C.1 The (1, 0)-string world-sheet instanton contribution We now turn to details of the exponentially suppressed behaviour of the first non-constant term in the zeromode integral (2.13), which corresponds to the contribution of the (1, 0)-string (or F -string) world-sheet instanton. In order to consider the large-N contributions we need to consider the saddle-point contribution to the contour integral where the superscript F denotes the contribution of the F -string instantons. We will see that the large-N expansion of this contribution at large 't Hooft coupling λ, produces a genus expansion N 2−2g , with g ≥ 0, of exponentially suppressed corrections at large λ. These were identified in [18] by applying resurgence analysis to the asymptotic large-λ perturbation expansion of the genus expansion. The large-N , large-λ expansion of C NP,F SU(N ) (τ,τ ) is controlled by a saddle point which, in the regime 1 ≪ λ ≪ N , is located at and with an exponentiated action given by where we have substituted the saddle-point action N A(x) that was defined in (3.18). 
Note that this is precisely the exponential of the on-shell action ( In order to consider the fluctuations around the saddle point we write t = t ⋆ 1 + N λ 3/4 δ and expand the saddle-point action (C.4) in powers of δ, and using the expression (2.2) for B SU (z; t) into (C.2) and the integration over δ leads to the final expression order by order in the 1/N expansion. At leading order, we obtain This is identical to the expression for ±i∆G (0) (λ)/2 found in equation (5.39) in [18] and the coefficients a r , with r ≥ 0, are precisely related to the leading coefficients d r,r = −2 −2(r+1) a r in (3.24). According to the holographic correspondence such a term corresponds to a contribution to tree-level string theory associated with F -string world-sheet instantons, and the higher order terms in 1/ √ λ correspond to the α ′ -expansion in string theory. It is straightforward to extend the above analysis and determine the next term in the 1/N expansion, which is a term of order N 0 . Following the same logic as above we arrive at which again agrees with the resurgence result ±i∆G (1) (λ)/2 found in [18]. 17 We therefore see that the N 2 and N 0 terms in the large-N expansion of the zero mode of C SU(N ) (τ,τ ) that are non-perturbative in λ are consistent with a topological expansion of the form with ∆C (g) (λ) (denoted by ±i∆G (g) (λ)/2 in [18]) containing the exponentially suppressed large-λ terms of the form is a perturbative series in 1/ √ λ. Combining these terms with the perturbative large-λ expansion obtained from (3.13) one obtains the transseries expansion where all the non-perturbative contributions ∆C (g) (λ) can be found from a resurgent analysis argument applied to the large-λ expansion of C (g) (λ) as discussed in [18], or equivalently using (C.7). Equation (C.9) ignores corrections that are exponentially suppressed in N at large N , which will be discussed shortly. For fixed λ, the large-N expansion of correlators corresponds to the genus expansion of string amplitudes. Therefore the exact expression A |ℓ| for the on-shell action (C.4) can be interpreted as the result of resumming the genus expansion around the minimal surface. As discussed earlier, exactly the same function appears in the study of Wilson loops [27], and once again higher order terms in A(x) can be thought as genus expansions around the minimal surface formed by the Wilson loop. Furthermore, in the case of Wilson loops, the parameter x is proportional to the electric charge k of the Wilson loop, which may be tuned to scale with N . Therefore in the region where x is not small, higher-order contributions to the expansion of A(x) become important. In this case the string world-sheet thickens and an alternative description of A(x) in terms of euclidean D3-brane instantons is more appropriate [27]. 18 This transition between the small-x and finite-x descriptions is illuminated by expressing x in terms of the fundamental string (or F -string) and D3-brane tensions, The fluctuations around the saddle point are obtained by writing t = t ⋆ 1 + N − 1 2 δ and expanding the saddlepoint action (C.4) in powers of δ Upon expanding both the effective action and the integrand of (C.4) at large-N , or equivalently small δ, and performing gaussian integrals over δ, we arrive at what can be called the "electric" D3-brane expansion: (C.14) The expressions (C.13)-(C.14) coincide with the results of [26] (where x was called y). 
In particular the coefficients h k (x), which are polynomials of order 4k in x, were presented in [26] for k ≤ 3. Higher-order polynomials can be determined straightforwardly from the saddle-point expansion. For example, the k = 4 term is given by 128 . (C.15) Higher order corrections, h k≥5 (x), can easily be computed from our saddle-point expansion. We stress that in the impressive analysis of [26], the equations (C.13) and (C.14), together with the expressions for h k≤3 (x), were obtained from the asymptotic behaviour at large genus of the large-N genus expansion in the large-λ regime. From our discussion, it is now manifest that the electric D3-instanton reduces to the world-sheet instanton in the 't Hooft limit. C.2 The remaining zero-mode contributions The second non-constant contribution to the zero mode in (2.13) is given by where the superscript R denotes the remaining terms in the non-perturbative zero-mode contribution once the (m, 0) terms have been subtracted out. As we have seen, this corresponds to the summand in (3.2) after performing a Poisson summation that replaces m bym and then setting (m, n) = (0, ℓ). The leading factor in the saddle-point analysis of this contribution is obtained by noting that the saddlepoint solution t * 1 in (3.16) now takes the form The exponentiated saddle-point action is given by where, following [25,26], we have introduced the parameterλ = (4πN ) 2 /λ = 4πN τ 2 and consider the regime in which 1 ≪λ ≪ N . Higher order corrections can be obtained straightforwardly resulting in the expansion The functions ∆C (g) (λ) are analogous to (C.5) and (C.6) and contain all the exponentially suppressed terms in the "dual" 't Hooft coupling of the form e −2ℓ √λ = e −8πℓN/ √ λ with ℓ ∈ N and ℓ = 0. In particular, these results precisely agree with the function ∆C (g) (λ) obtained in [26] (and denoted by ±i∆G These expressions are similar to the "electric" results (C.11)-(C.12) upon exchangingx → x. However, we see that (C. 16 which is a different expansion from the "electric" case (C.13)-(C.14). The expressions (C.22)-(C.23) again coincide with the results of the [26] (modulo renamingx by x andG by G (mag) ). 19 The coefficients g k (x) are polynomials of order 4k inx that were determined in [26] for k ≤ 3. Higher-order terms can again be determined straightforwardly. For example, the k = 4 term is given by As in the "electric" case, in [26] these expressions were determined by analysis of the asymptotic behaviour of the large-N genus expansion at high genus in the large-λ regime and they reduce to (C. 19) in the regime N ≪ λ ≪ N 2 , i.e. the "dual" 't Hooft regime 1 ≪λ ≪ N . To conclude this section, we emphasise that the two distinct non-perturbative terms, (C.4) and (C.18), are the two parts (B.10) and (B.13) of the zero Fourier mode of the SL(2, Z) invariant function D N (s; τ,τ ). Therefore, the sum (3.24) contains all the non-perturbative terms obtained by resurgence at large-λ and large-λ in [17,18,25,26], and from resurgence at large N in [26]. We see that despite the fact that the manifest S-duality of (3.24) is obscured in considering the different large-N 't Hooft limits of the F -string (C.4)-(C.13) and of the zero mode of the sum over all remaining (p, q)-strings with q = 0 (C.18)-(C.22), these results contain remnants of the relations implied by SL(2, Z). 
D Generating Functions for general classical groups This appendix presents some of the details used in deriving the generating functions for the integrated correlators for theories with general classical groups given in section 5. The methods used in the SU (N ) case in appendix A and section 2.1 do not generalise to general classical groups in an obvious manner so we will use a method that applies to all cases. In order to evaluate the generating functions it is necessary to reduce the double sums in (5.11) to single sums. This can be achieved by using a representation of Laguerre polynomials in terms of creation and annihilation operators (as in [46]), where a, a † = 1, We begin by inserting the projector (1 ± (−1) i )/2 in (5.11), which leads to This leads to an expression with one less summation index to be summed Focussing now on the term with the alternating sign (−1) i , we can consider the x derivative of this double sum, The sum over i is now straightforward and leads to the expression, Similarly, we have where we have used the recurrence relation for Laguerre polynomials L α n = L α+1 n − L α+1 n−1 , and completed the square.
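The Laguerre recurrence used in the final step above, L^α_n = L^{α+1}_n − L^{α+1}_{n−1}, is easy to confirm symbolically:

```python
# Symbolic check of the recurrence used above: L^a_n(x) = L^{a+1}_n(x) - L^{a+1}_{n-1}(x).
import sympy as sp

x, a = sp.symbols('x a')
for n in range(1, 8):
    lhs = sp.assoc_laguerre(n, a, x)
    rhs = sp.assoc_laguerre(n, a + 1, x) - sp.assoc_laguerre(n - 1, a + 1, x)
    assert sp.simplify(sp.expand(lhs - rhs)) == 0
print("recurrence L^a_n = L^{a+1}_n - L^{a+1}_{n-1} verified for n = 1..7")
```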
15,965.6
2022-10-25T00:00:00.000
[ "Mathematics" ]
Characterising Clifford parallelisms among Clifford-like parallelisms We recall the notions of Clifford and Clifford-like parallelisms in a 3-dimensional projective double space. In a previous paper the authors proved that the linear part of the full automorphism group of a Clifford parallelism is the same for all Clifford-like parallelisms which can be associated to it. In this paper, instead, we study the action of such group on parallel classes thus achieving our main results on characterisation of the Clifford parallelisms among Clifford-like ones. Introduction It is a widely used strategy in mathematics to define a new structure by modifying a given one.The definition of a Clifford-like parallelism from [7] and [17], which is recalled in Section 2, follows these lines.The starting point is a projective double space (P, , r ), that is, a projective space P together with a left parallelism and a right parallelism r on its line set such that the so-called double space axiom (DS) is satisfied.The given parallelisms and r are called the Clifford parallelisms of (P, , r ) in analogy to the classical example arising from the three-dimensional elliptic space over the real numbers.The parallel classes of and r are then used to define parallelisms that are Clifford-like w.r.t.(P, , r ).Among them are the initially given parallelisms and r .We restrict ourselves most of the time to the case when P is three-dimensional, and we make use of an algebraic description of such a double space in terms of an appropriate fourdimensional algebra H over a commutative field F. Thereby we adopt the notation P(H F ), , r and we have to distinguish two cases, (A) and (B).In case (A), H is a quaternion skew field with centre F, the left and right parallelisms do not coincide and, in general, there are Clifford-like parallelisms of P(H F ), , r different from and r .In case (B), H is a commutative extension field of F satisfying some extra property, and = r is the only Clifford-like parallelism of P(H F ), , r .We include case (B) for the sake of completeness and in order to obtain a unified exposition that covers both cases, even though several of our results are trivial in case (B). In Section 3 we study automorphisms of a Clifford-like parallelism of a projective double space P(H F ), , r being motivated by the following result: if a projective collineation of P(H F ) preserves at least one Clifford-like parallelism of P(H F ), , r , then all its Clifford-like parallelisms are preserved. 1This follows from [18,Thm. 3.5] in case (A) and holds trivially in case (B).In our algebraic setting these projective collineations are induced by F-linear transformations of H which are described in Subsection 3.1, where we determine all F-semilinear automorphisms of the right parallelism.In preparation for Section 4, we exhibit for a quaternion skew field H the orbits of certain points and lines of P(H F ) under the group of inner automorphisms of H and we determine all r -classes that are fixed under a left translation of H. The main results are stated in Section 4. In Theorem 4.1, we consider a threedimensional projective space P that is made into a double space in two ways.If there exists a parallelism on P that is Clifford-like w.r.t.both double space structures then the given double spaces coincide up to a change of the attributes "left" and "right" in one of them.This finding improves [17,Thm. 
4.15] (see Corollary 4.2) and it simplifies matters considerably.Indeed, when dealing with a Clifford-like parallelism, there is only one corresponding double space structure in the background.In Theorems 4.3, 4.5 and 4.6 we characterise the Clifford parallelisms among the Clifford-like parallelism of P(H F ), , r via the existence of automorphisms with specific properties.For example, Theorem 4.3 establishes that a Clifford-like parallelism of P(H F ), , r is Clifford precisely when it admits an automorphism that fixes all its parallel classes and acts non-trivially on the point set of the projective space P(H F ). Next, let us emphasise that some of our investigations are in continuity with classical results on dilatations in kinematic spaces.For example, in our proof of Theorem 4.3 we could use the fact that the existence of a proper non-trivial dilatation (namely a non-identical collineation with a fixed point and the property that all parallel classes remain invariant) is possible only in the commutative case, i.e. in our case (B) (see [35,Teorema 2] or [27, (II.10)]).We decided instead to include a short direct proof in order to keep the paper self-contained.There are also neat connections to the theory of Sperner spaces and (generalised) translation structures; we refer the interested reader to [1], [38], [39] and the many references given there. Finally, another remark seems appropriate.Any Clifford-like parallelism on the three-dimensional real projective space is Clifford (see Remark 3.6).The Clifford parallelisms on this space are the only topological parallelisms that admit an automorphism group of dimension at least 4; see [33] and the intimately related articles [3], [5], [31], [32].In contrast to our considerations, in this beautiful result only the "size" of an automorphism group is taken into account and not its action on the parallel classes. Preliminaries on Clifford and Clifford-like parallelisms A parallelism on a projective space P is an equivalence relation on the set L of lines such that each point of P is incident with precisely one line from each equivalence class.(If P is a finite projective space then a parallelism is also called a packing or a resolution.)For each line M ∈ L we write S(M) for the parallel class of M, that is, the equivalence class containing M. This notation arises quite naturally, since any parallel class is in fact a spread (of lines) of P. When considering several parallelisms, we distinguish among the above notions and symbols by adding appropriate attributes, subscripts or superscripts.We refer to [2], [20, Ch. 17], [22], [23] and [24, § 14] for a wealth of results about parallelisms and further references. Let P and P be projective spaces with parallelisms and , respectively and let κ be a collineation of P to P such that, for all lines M, N ∈ L, M N implies κ(M) κ(N).Then κ takes any -class to a -class by [18,Lemma 2.1].Such a κ is frequently called an isomorphism2 of (P, ) to (P , ). Suppose that a projective space P is endowed with two (not necessarily distinct) parallelisms, a left parallelism and a right parallelism r .Following [25], (P, , r ) constitutes a projective double space if the following axiom is satisfied. (DS) For all triangles p 0 , p 1 , p 2 in P there exists a common point of the lines M 1 and M 2 that are defined as follows.M 1 is the line through p 2 that is left parallel to the join of p 0 and p 1 , M 2 is the line through p 1 that is right parallel to the join of p 0 and p 2 . 
Given a projective double space (P, ∥_ℓ, ∥_r), each of ∥_ℓ and ∥_r is referred to as a Clifford parallelism of (P, ∥_ℓ, ∥_r). More generally, a Clifford-like parallelism of (P, ∥_ℓ, ∥_r) is defined as a parallelism ∥ on P such that, for all M, N ∈ L, M ∥ N implies M ∥_ℓ N or M ∥_r N (see [17, Def. 3.2]). Each parallel class of a Clifford-like parallelism of (P, ∥_ℓ, ∥_r) is a left or a right parallel class: see [17, Thm. 3.1], where this topic appears in the wider context of "blends" of parallelisms. A Clifford-like parallelism of (P, ∥_ℓ, ∥_r) is said to be proper if it does not coincide with one of ∥_ℓ and ∥_r. In what follows, whenever we say that a parallelism ∥ on a projective space P is Clifford (respectively Clifford-like), it is intended that P can be made into a double space (P, ∥_ℓ, ∥_r) such that ∥ is one of its Clifford (respectively Clifford-like) parallelisms.

An algebraic description (up to isomorphism) of all projective double spaces (P, ∥_ℓ, ∥_r) that contain at least two distinct lines and satisfy the so-called "prism axiom" was given in [25]. It is based on quaternion skew fields and purely inseparable commutative field extensions of characteristic two. According to [26, Satz 1] and [28, Satz 2], the prism axiom appearing in [25] is redundant; see also the surveys in [24, § 14] and [22, pp. 112-115]. This is why we omit to consider this axiom here. From now on we consider exclusively three-dimensional projective double spaces. We therefore recall only their algebraic description in the next few paragraphs.

We adopt the following settings throughout this article: F denotes a commutative field and H is an F-algebra with unit 1_H satisfying one of the following conditions.

(A) H is a quaternion skew field with centre F1_H.
(B) H is a commutative field with degree [H : F1_H] = 4 and such that h^2 ∈ F1_H for all h ∈ H.

All F-linear endomorphisms of H_F constitute the F-algebra End(H_F). The left regular representation λ : H → End(H_F) sends each h ∈ H to the mapping λ(h) =: λ_h given as λ_h(x) := hx for all x ∈ H. The image λ(H) is an isomorphic copy of the field H within End(H_F). The elements of the multiplicative group λ(H*) = GL(H_H) are the left translations. Similarly, the right regular representation ρ : H → End(H_F) sends each h ∈ H to ρ(h) =: ρ_h given as ρ_h(x) := xh for all x ∈ H. In this way we obtain ρ(H) as an antiisomorphic copy of H within End(H_F) and the group of right translations ρ(H*) = GL(_H H). For all g, h ∈ H, the mappings λ_g and ρ_h commute. The multiplicative group H* admits the representation (˜) : H* → GL(H_F) sending each h ∈ H* to h̃ := λ_h^{-1} ∘ ρ_h, which is an inner automorphism of the field H. Clearly, in case (B) the group of inner automorphisms of H comprises only the identity id_H.

The projective space on the vector space H_F, in symbols P(H_F), is understood to be the set of all subspaces of H_F with incidence being symmetrised inclusion. We adopt the usual geometric terms: points, lines, and planes of P(H_F) are the subspaces of H_F with vector dimension one, two, and three, respectively; the set of all lines is written as L(H_F). The following notions rely on H_F being an F-algebra. In P(H_F), lines M and N are defined to be left parallel, M ∥_ℓ N, if λ_c(M) = N for some c ∈ H*. Similarly, M and N are said to be right parallel, M ∥_r N, if ρ_c(M) = N for some c ∈ H*. Then (P(H_F), ∥_ℓ, ∥_r) is a projective double space. The parallelisms ∥_ℓ and ∥_r are distinct in case (A) and identical in case (B).

Remark 2.1. The left and right parallelism w.r.t. (H, +, •) are the same as the right and left parallelism defined by the opposite field of H. So, from a geometric point of view, the choice of the attributes "left" and "right" is immaterial.
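For quick reference, the relations just recalled can also be written in display form; nothing below goes beyond the definitions above, we merely restate them in the notation of this section:

\[
M \parallel_{\ell} N \;:\Longleftrightarrow\; \lambda_c(M) = cM = N \ \text{ for some } c \in H^{*},
\qquad
M \parallel_{r} N \;:\Longleftrightarrow\; \rho_c(M) = Mc = N \ \text{ for some } c \in H^{*},
\]
\[
\parallel \ \text{is Clifford-like w.r.t.}\ (\mathrm{P}(H_F), \parallel_{\ell}, \parallel_{r})
\;:\Longleftrightarrow\;
\big( M \parallel N \Rightarrow M \parallel_{\ell} N \ \text{or}\ M \parallel_{r} N \big)
\ \text{ for all lines } M, N.
\]

The fact that left and right translations commute is simply the associativity of the multiplication in H:

\[
(\lambda_g \circ \rho_h)(x) \;=\; g(xh) \;=\; (gx)h \;=\; (\rho_h \circ \lambda_g)(x)
\qquad \text{for all } g, h, x \in H.
\]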
The multiplication on the field (H, +, •) may be altered without changing the associated projective double space P(H F ), , r .Let us choose any e ∈ H * .Then we can define a multiplication • e on H via x • e y := x • e −1 • y for all x, y ∈ H.This makes (H, +, • e ) into an F-algebra, which will briefly be written as H e .The left translation λ e (w.r.t.H) is an F-linear isomorphism of H to H e , whence the arbitrarily chosen element e ∈ H * turns out to be the unit element of H e .The projective double spaces arising from the F-algebras H and H e are the same, since λ h = λ e h•e and ρ h = ρ e e•h for all h ∈ H * .Let us briefly sketch a more conceptual verification of our second observation.The point Fe and the parallelisms and r can be used to make the point set P(H F ) into a two-sided incidence group with unit element Fe [25, §3].(The prism axiom appearing in [25] can be avoided [26, Satz 1], [28,Satz 2].)Then, using the group structure on P(H F ), the F-vector space H can be endowed with a multiplication making it into a field with unit element e (see [9,Satz 1] and [45,Hauptsatz]).This field, which coincides with our H e , therefore provides an alternative description of the projective double space P(H F ), , r . Let A(H F ) ⊂ L(H F ) denote the star of lines with centre F1.By (1), each line L ∈ A(H F ) is readily seen to be a maximal commutative subfield of H and hence an F-subalgebra.Next, we recall an explicit construction that gives all Clifford-like parallelisms of P(H F ), , r .Upon choosing any H * -invariant subset F ⊆ A(H F ), one obtains a partition of L(H F ) by taking the left parallel classes of all lines in F and the right parallel classes of all lines in A(H F ) \ F. This partition determines an equivalence relation, which turns out to be a Clifford-like parallelism of P(H F ), , r .See [17,Thm. 4.10] for a proof in the case when (A) applies; in case (B) the result is trivial due to = = r . Remark 2.3.Let be any parallelism on P(H F ) and let S(M), M ∈ L(H F ), be one of its parallel classes.We recall that the kernel of the spread S(M) consists of all endomorphisms ϕ of the abelian group (H, +) such that ϕ(N) ⊆ N for all N ∈ S(M).This kernel, which will be denoted by K H, S(M) , is a field; see, for example, [34,Thm. 1.6].Consequently, if ϕ ∈ K H, S(M) and ϕ 0, then ϕ(N) = N for all N ∈ S(M).The following simple reasoning will repeatedly be used.If Proposition 2.4.If Clifford parallelisms and on a three-dimensional projective space have two distinct parallel classes in common, then these parallelisms coincide. Proof.By virtue of the algebraic description of all projective double spaces and by Remark 2.1, we may assume the following.The parallelism is the right parallelism r coming from an F-algebra (H, +, •) subject to (A) or (B).There is a multiplication • : H × H → H making the F-vector space H F into an F-algebra (H, +, • ) subject to (A) or (B) such that coincides with the right parallelism r arising from (H, +, • ).These algebras share a common unit element 1 ∈ H * , say. Remark 2.5.Note that the above theorem may alternatively be established by using the one-to-one correspondence between Clifford parallelisms and external planes to the Klein quadric (see [16,Cor. 4.5]). Automorphisms, their orbits and actions This section is devoted to deepen the study of the automorphisms of the Clifford parallelisms of a three-dimensional projective double space P(H F ), , r as described in Section 2. 
In particular we obtain a description of the orbits of certain points and lines under the action of the group H * , and we characterise the right parallel classes fixed (as a set) by a given left translation.In order to avoid trivialities, we shall repeatedly confine ourselves to case (A).These findings will lead us in Section 4 to the proof of our main results. Automorphisms In this subsection H always denotes an F-algebra subject to (A) or (B).Given any parallelism on P(H F ), we are going to use from now on the phrase automorphism of for any β in the general semilinear group ΓL(H F ) that acts as a -preserving collineation on P(H F ).The symbol Γ denotes the automorphism group of .This terminology is in accordance with the one in [18].h The Clifford parallelisms of the projective double space P(H F ), , r give rise to automorphism groups Γ =: Γ and Γ r =: Γ r .These groups coincide, that is, In case (A), a proof can be derived from [36, p. 166 Lemma 3.1.Let S r (M) be the right parallel class of a line M ∈ L(H F ).The elements of the kernel K H, S r (M) are precisely the mappings λ g with g ranging in the line that contains the point F1 and is right parallel to M. Consequently, A similar result holds with the role of "left" and "right" interchanged. Any line L ∈ A(H F ) is a commutative quadratic extension field of F contained in H.The above Lemma illustrates the rather obvious result that the restriction to L of the representation λ (respectively ρ) provides an isomorphism of the field L onto the kernel of the right (respectively left) parallel class of the line L.This proves anew that all left and right parallel classes are regular spreads (see [7, 4.8 Cor.], [15,Prop. 3.5] or [16,Prop. 4.3]).Maybe less obvious is the following conclusion.Any semilinear transformation ϕ ∈ ΓL(H F ) that fixes all lines of one right (respectively left) parallel class is a left (respectively right) translation and therefore in the automorphism group Γ = Γ r . In the next proposition we describe the automorphism group Γ = Γ r .Alternative proofs, which cover only the case when H is a quaternion skew field, can be retrieved from [6,Sect. 4 where Aut(H/F) denotes the group of all automorphisms of the field H that fix F as a set. Proof.A direct verification shows that the group Aut(H/F) is a subgroup of Γ r . As we noted at the beginning of this subsection, the same applies for the group λ(H * ).For all γ ∈ Γ r and all lines M ∈ L(H F ), we have Using ( 4), this implies that λ(H * ) is a normal subgroup of Γ r . Let us choose any β ∈ Γ r .We define we proceed as follows.There is a line L with 1, z ∈ L. Applying (6) to γ := ϕ and together with Remark 2.3 establishes (7).For all x, y ∈ H, we have so that ϕ is an automorphism of the field H. Furthermore, ϕ(1) = 1 together with ϕ being F-semilinear implies ϕ(F) = F. Take notice that F is the centre of the quaternion skew field H in case (A) and so under these circumstances Aut(H/F) = Aut(H). Suppose that ϕ ∈ Aut(H/F) is F-linear or, equivalently, that ϕ fixes F elementwise.Then ϕ ∈ H * is an inner automorphism of the field H.In case (A), this follows from the theorem of Skolem-Noether [21, Thm.4.9].In case (B), any inner automorphism of H is trivial and ϕ = id H , since any h ∈ H * \ F * is a double zero of the polynomial h 2 + t 2 ∈ F[t], which is the minimal polynomial of h over F. 
So, by (5), the group of all F-linear automorphisms of r can be written in the form Let be any Clifford-like parallelism of P(H F ), , r .The group appearing in (8) coincides with the group Γ ∩ GL(H F ) comprising all F-linear automorphisms of (see [18,Thm. 3.5]).The problem to determine the full automorphism group Γ without extra assumptions on H, F or seems to be open.Partial solutions can be found [18,Sect. 3].The examples in [18,Sect. 4] show the existence of proper Clifford-like parallelisms satisfying Γ = Γ = Γ r and also of proper Clifford-like parallelisms satisfying Γ ⊂ Γ = Γ r . Orbits under the group of inner automorphisms In this subsection H denotes an F-algebra subject to (A), that is, a quaternion skew field with centre F. The following outcomes fail in case (B), since there the group of inner automorphisms is trivial. Recall that, given any h ∈ H, the trace and the norm of h are the elements of F defined, respectively, by tr(h) = h + h and N(h) = hh = hh, where h denotes the conjugate of h.The conjugation is an antiautomorphism of H of order 2 that fixes F elementwise.The identity h 2 − tr(h)h + N(h) = 0 holds for any h ∈ H.The norm N is a multiplicative quadratic form and its associated symmetric bilinear form is The form • , • is non-degenerate and so the mapping sending each subspace X of H F to its orthogonal subspace X ⊥ is a polarity of P(H F ).The next result is briefly mentioned in [7, Rem.4.5] and [30, p. 76, Ex. 10] (Char F 2 only).For the sake of completeness, a proof will be presented below.Lemma 3.3.Given quaternions q 1 , q 2 ∈ H there exists an inner automorphism of H taking q 1 to q 2 if, and only if, tr(q 1 ) = tr(q 2 ) and N(q 1 ) = N(q 2 ). Proof.From tr(q 1 ) = tr(q 2 ) and N(q 1 ) = N(q 2 ), the quaternions q 1 , q 2 are zeros of the polynomial m(t) = t 2 − tr(q 1 )t + N(q 1 ) ∈ F[t].If m(t) is reducible over F, then m(t) has no zeros in H outside F. Thus q 1 ∈ F and m(t) = (t − q 1 ) 2 .Now m(q 2 ) = 0 yields q 2 = q 1 , whence the identity id H is a solution.On the other hand, if m(t) is irreducible over F, then id F can be extended in a unique way to an isomorphism γ of the commutative field F1 ⊕ Fq 1 ⊂ H onto the commutative field F1 ⊕ Fq 2 ⊂ H such that γ(q 1 ) = q 2 ; see, for example, [8,Prop. 7.2.2].By the theorem of Skolem-Noether [21, Thm.4.9], this γ extends to an inner automorphism of H. The proof of the converse is straightforward. The above result describes the orbits under the action of the inner automorphism group H * on quaternions. 7By considering the vector space H F as an affine space, the orbit of any q ∈ H is the intersection of the affine quadric {x ∈ H | N(x) = N(q)} with the hyperplane {x ∈ H | tr(x) = tr(q)}.Here, however, we aim at providing a description of the orbits of the points of P(H F ) under the action of H * .Since the behaviour of the points of the plane (F1) ⊥ = {x ∈ H | tr(x) = 0} is different from that of any other point, these points will be excluded in the next proposition. Proposition 3.4.Let H be a quaternion skew field with centre F and let Fq, q ∈ H * , be a point of P(H F ) such that tr(q) 0. Then the following hold. (a) The orbit of Fq under the action of the group H * of inner automorphisms of H is a quadric of P(H F ), say O q , which is given by the quadratic form (b) If q ∈ F * , then O q consists of a single point. (c) If q ∈ H * \ F * , then O q is an elliptic quadric, no line through F1 is tangent to O q , and the polar form of ω q is non-degenerate. 
(b) The quadric O q , q ∈ F * , is the H * -orbit of F1, whence it consists of this single point only. (c) The point F1 is not in the H * -orbit of Fq and so F1 is off the quadric O q .From q + q = tr(q) ∈ F * and ω q (q) = 0, the line joining Fq and F1 meets O q residually at Fq Fq and so it is not tangent to O q .Also, the point Fq is a regular point of O q .By the transitive action of the group H * on the points of O q , the same applies to all other points of O q .The quadric O q cannot be ruled, because it does not contain any point of the plane {x ∈ H | tr(x) = 0}. If Char F 2 then the polar form of ω q is non-degenerate, since otherwise O q would contain a singular point.In the case of Char F = 2 the form ω q is nondegenerate, because it merely is a non-zero scalar multiple of the non-degenerate alternating bilinear form • , • from (9).Proposition 3.5.Let H be a quaternion skew field with centre F and, in P(H F ), let L be a line that passes through the point F1 and is not contained in the plane (F1) ⊥ .Every plane through an arbitrary line in the H * -orbit of L contains infinitely many lines of this orbit. Proof.By virtue of the action of H * on H * (L), it is enough to show the assertion for an arbitrary plane E passing through L. On the line L, we can pick one point, say Fq, other than F1 such that tr(q) 0. By Proposition 3.4, the orbit of Fq is an elliptic quadric O q .Furthermore, the line L is a bisecant of this quadric that meets O q at Fq and Fq Fq.The plane E contains the bisecant L of O q and so E cannot be a tangent plane of O q .This implies that E intersects O q along a regular conic.As F is infinite, so is this conic.By joining each of the points of the conic with F1 we get infinitely many lines through F1 in the plane E. All of them are in H * (L). Remark 3.6.The orbit of any line L ∈ A(H F ) under the group H * is infinite [10,Thm. 3].This result was improved in [46] by showing that any such orbit has cardinality |F|.Limited to the case of quaternion skew fields and lines of A(H F ) that are not in (F1) ⊥ , the last proposition enriches this result with a geometric insight. From [17,Thm. 4.12], the group H * acts transitively on A(H F ) if, and only if, F is a formally real pythagorean field and H is an "ordinary" quaternion skew field with centre F. Precisely under these circumstances, P(H F ), , r admits no proper Clifford-like parallelisms. Parallel classes fixed by automorphisms First, let P(H F ), , r be a projective double space as specified in Section 2. Suppose that a left translation λ g , g ∈ H * , acts as a non-identical collineation on P(H F ). Hence g ∈ H * \ F * .Any line M ∈ L(H F ) is left parallel to its image λ g (M) and so λ g fixes all left parallel classes.As we saw in Lemma 3.1, S r (F1 ⊕ Fg) is the only right parallel class that is fixed linewise under λ g .If λ g fixes also all lines of a left parallel class, then Lemma 3.1 forces λ g to be a right translation as well, that is, g has to be in the centre of H.In case (A) this gives a contradiction.In case (B), H is a commutative field and so this condition imposes no restriction on g; due to = r , the given λ g fixes precisely one left parallel class linewise, namely S (F1 ⊕ Fg). For the rest of this subsection we confine ourselves to the case (A). 
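Since the arguments in this subsection and in Section 4 repeatedly use the quaternion invariants recalled above, it may help to collect them in display form; the bilinear form is written here in the usual polarised form, which we take to be the form ⟨•, •⟩ referred to as (9) above:

\[
\operatorname{tr}(h) = h + \overline{h}, \qquad
N(h) = h\overline{h} = \overline{h}h, \qquad
h^{2} - \operatorname{tr}(h)\,h + N(h) = 0 \qquad (h \in H),
\]
\[
\langle x, y \rangle := N(x+y) - N(x) - N(y) = x\overline{y} + y\overline{x}, \qquad
(F1)^{\perp} = \{\, x \in H \mid \operatorname{tr}(x) = 0 \,\}.
\]

By Lemma 3.3, the orbit of q ∈ H under the inner automorphisms of H is then exactly

\[
\{\, x \in H \mid \operatorname{tr}(x) = \operatorname{tr}(q),\ N(x) = N(q) \,\},
\]

as was noted before Proposition 3.4.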
Proposition 3.7. Let H be a quaternion skew field with centre F and let g ∈ H* \ F*. In (P(H_F), ∥_ℓ, ∥_r), a right parallel class is invariant under the left translation λ_g precisely when it is of the form S_r(M), where M is a line satisfying at least one of the following conditions:

(10) M = F1 ⊕ Fg;
(11) F1 ⊆ M ⊆ g^{-1}((F1)^⊥).

Proof. (a) Suppose that (10) holds. From Lemma 3.1, all lines of the right parallel class S_r(M) are fixed under λ_g. (b) Suppose that a line M satisfies (11). The line M^⊥ is left parallel and right parallel to M (see [17, Cor. 4.4]) and it is contained in (F1)^⊥. The line gM is also left parallel to M. As M^⊥ and gM are incident with the plane (F1)^⊥, they share a common point and so they must coincide. Taking into account that λ_g ∈ Γ_r and M^⊥ ∥_r M, we obtain λ_g(S_r(M)) = S_r(gM) = S_r(M^⊥) = S_r(M), as required. (c) Conversely, any λ_g-invariant right parallel class can be written as S_r(M) with F1 ⊆ M. Then λ_g(M) ∥_ℓ M ∥_r λ_g(M). Again from [17, Cor. 4.4], there are only two possibilities. First, λ_g(M) = gM = M, which implies Fg ⊆ M and establishes (10). Second, λ_g(M) = gM = M^⊥; here F1 ⊆ M gives gM = M^⊥ ⊆ (F1)^⊥, whence M ⊆ g^{-1}((F1)^⊥), and this establishes (11).

Remark 3.8. Figures 1 and 2 depict the possible cases in Proposition 3.7 under the assumption Char F ≠ 2 and Char F = 2, respectively. In all cases, there are distinct points F1 and Fg as well as distinct planes (F1)^⊥ and g^{-1}((F1)^⊥). The pictures on the left-hand side show the situation when F1 ⊄ g^{-1}((F1)^⊥) or, in other words, when Fg ⊄ (F1)^⊥, which in turn is equivalent to tr(g) ≠ 0. Here there are no lines M subject to (11). The pictures on the right-hand side show the opposite situation. Here the set of all lines M that satisfy (11) comprises a pencil of lines. In detail, the circumstances are as follows.

Main results

The definition of a Clifford-like parallelism in [17, Def. 3.2] is essentially based on a given projective double space (P, ∥_ℓ, ∥_r). We are thus led to the problem of whether or not distinct projective double spaces can share a Clifford-like parallelism.

Theorem 4.1. Let (P(H_F), ∥_ℓ, ∥_r) be a projective double space, where H is an F-algebra subject to (A) or (B). Furthermore, let ∥_ℓ′ and ∥_r′ be parallelisms such that (P(H_F), ∥_ℓ′, ∥_r′) is also a projective double space. Suppose that a parallelism ∥ of P(H_F) is Clifford-like with respect to both double space structures. Then, possibly up to a change of the attributes "left" and "right" in one of these double spaces, ∥_ℓ′ = ∥_ℓ and ∥_r′ = ∥_r.

Proof. First, we consider case (A). We take any line of the star A(H_F). We noted in Remark 3.6 that the orbit of this line under the group of all inner automorphisms of H is infinite. Thus there are three mutually distinct lines, say L_1, L_2 and L_3, in this orbit. From [17, Thm. 4.10], the ∥-classes of these lines are of the same kind w.r.t. (P(H_F), ∥_ℓ, ∥_r), that is, either all of them are left parallel classes or all of them are right parallel classes.

Next, we turn to case (B). There exist three mutually distinct lines L_1, L_2, L_3 ∈ A(H_F). Their ∥-classes are of the same kind w.r.t. (P(H_F), ∥_ℓ, ∥_r) due to ∥_ℓ = ∥_r = ∥.

In both cases, the parallel classes S_∥(L_n), n ∈ {1, 2, 3}, are mutually distinct. Consequently, among them there are at least two distinct classes of the same kind w.r.t. the double space (P(H_F), ∥_ℓ′, ∥_r′). Up to a change of notation, we may assume S_∥(L_n) = S_r(L_n) = S_r′(L_n) for n ∈ {1, 2}. Now Proposition 2.4 shows that the Clifford parallelisms ∥_r and ∥_r′ coincide. This in turn forces ∥_ℓ = ∥_ℓ′, since the left parallelism is uniquely determined by the right one (see [24, pp. 75-76] or [19, §6]).

Corollary 4.2. Any Clifford-like parallelism of (P(H_F), ∥_ℓ, ∥_r) other than ∥_ℓ and ∥_r is not Clifford.
Proof. Assume to the contrary that ∥ =: ∥_ℓ′ is Clifford. Then there is a parallelism, say ∥_r′, such that (P(H_F), ∥_ℓ′, ∥_r′) is a projective double space. Applying Theorem 4.1 therefore gives ∥ = ∥_ℓ or ∥ = ∥_r, a contradiction.

The above corollary, when restricted to case (A), is just a reformulation of [17, Thm. 4.15]. Therefore, the rather technical proof in [17], which relies on H being a quaternion skew field, can now be avoided.

Our final results provide the announced characterisations of Clifford parallelisms among Clifford-like parallelisms.

Theorem 4.3. Let ∥ be a Clifford-like parallelism of (P(H_F), ∥_ℓ, ∥_r), where H is an F-algebra subject to (A) or (B). Then the following assertions are equivalent.
(a) The parallelism ∥ is Clifford.
(b) The parallelism ∥ admits an automorphism that fixes all its parallel classes and acts non-trivially on the point set of the projective space P(H_F).

Proof. From now on we deal with case (A) only. We select one line N_1 through F1 that is not in (F1)^⊥. We assume w.l.o.g. that the parallel class S(N_1) is a left parallel class. (Otherwise, we have to interchange the attributes "left" and "right" in what follows.) Let g := β(1). We consider the left translation λ_g and the product α := λ_g^{-1} ∘ β. We choose one N ∈ H*(N_1). Then the parallel class S(N) is a left parallel class. Thus formula (13) and α(1) = 1 ∈ N together force α(N) = N. By Proposition 3.5, every plane through N contains at least two lines from the orbit H*(N_1), and so any such plane is fixed under α. The lines and planes through F1 are the "points" and "lines" of a projective plane; "incidence" is given by symmetrised inclusion. Our α acts on this projective plane as a collineation. By the above, all "lines" through the "point" N are fixed under α, and so N serves as a "centre" of this collineation. But N may vary in the orbit H*(N_1), which comprises more than one line by the theorem of Cartan-Brauer-Hua [29, (13.17)]. Consequently, this collineation has more than one "centre", that is, α fixes all lines of the star A(H_F). We now consider the action of α on the projective space P(H_F). Since all lines of A(H_F) are fixed, α acts as a perspective collineation with centre F1. This implies that α is F-linear. Since α and λ_g^{-1} are F-linear, so is β. From β ∈ Γ_∥ ∩ GL(H_F) = Γ_ℓ ∩ GL(H_F) (see [18, Thm. 3.5]) and λ_g^{-1} ∈ Γ_ℓ ∩ GL(H_F) it follows that α ∈ Γ_ℓ ∩ GL(H_F).

Now pick any line L ∈ L(H_F). The left parallel line to L through F1 is fixed under α ∈ Γ_ℓ, whence we have L ∥_ℓ α(L). On the other hand, L is incident with at least one plane through F1. This plane is α-invariant. Therefore the left parallel lines L and α(L) are coplanar, which in turn implies L = α(L). So we arrive at α = c·id_H for some c ∈ F*. Now, using α(1) = 1, we end up with α = id_H.

Case (ii). Let F1 ⊆ g^{-1}((F1)^⊥) and M_1 ⊄ (F1)^⊥. We choose any plane E other than g^{-1}((F1)^⊥) through the line M_1. Let M_E denote the set of all lines that are incident with E and belong to H*(M_1). By Proposition 3.5, the set M_E is infinite. From Proposition 3.7 and (14), any line M ∈ M_E has to satisfy (10) or (11), that is, M = F1 ⊕ Fg or M = g^{-1}((F1)^⊥) ∩ E. This implies |M_E| ≤ 2, an absurdity.

Remark 4.4. Note that, as a consequence of the previous theorem, the group of automorphisms that preserve all parallel classes with respect to a given Clifford-like parallelism ∥ of (P(H_F), ∥_ℓ, ∥_r) is contained in GL(H_F).
Moreover, this group is the group of left translations (respectively, the group of right translations) precisely when ∥ = ∥_ℓ (respectively, ∥ = ∥_r). If, on the other hand, ∥ is a proper Clifford-like parallelism, then this group is the group of all λ_g with g ∈ F*; thus, from the projective point of view, it comprises only the identity map.

Theorem 4.5. Let ∥ be a Clifford-like parallelism of (P(H_F), ∥_ℓ, ∥_r), where H is an F-algebra subject to (A) or (B). Then the following assertions are equivalent.
(a) The parallelism ∥ is Clifford and ∥_ℓ ≠ ∥_r.
(b) The parallelism ∥ admits an automorphism β ∈ Γ_∥ that stabilises a single parallel class of ∥ and, furthermore, fixes all lines of this particular parallel class.

(b) ⇒ (a). The only β-invariant parallel class can be written in the form S(L) with L ∈ A(H_F). Let us assume that S(L) is a right parallel class. Since all lines of S_r(L) are fixed under β, we obtain β ∈ K(H, S_r(L))* = λ(L*) from Lemma 3.1. Consequently, all left parallel classes are stabilised under β, whence none of them is a parallel class of ∥. This shows ∥ = ∥_r.

Theorem 4.6. Let ∥ be a Clifford-like parallelism of (P(H_F), ∥_ℓ, ∥_r), where H is an F-algebra subject to (A) or (B). Then the following assertions are equivalent.
(a) The parallelism ∥ is Clifford.
(b) The parallelism ∥ admits an automorphism β ∈ Γ_∥ that stabilises all its parallel classes and acts as a non-identical collineation on the projective space P(H_F).
(c) The parallelism ∥ admits an automorphism β ∈ Γ_∥ that stabilises all its parallel classes, fixes at least one of its parallel classes linewise, and acts as a non-identical collineation on the projective space P(H_F).

Proof. (a) ⇒ (b). There exists a g ∈ H* \ F*. Corollary 4.2 shows that ∥ = ∥_ℓ or ∥ = ∥_r. In the first case the left translation λ_g has the required properties; in the second case the same applies to the right translation ρ_g. (b) ⇒ (a). In case (B), ∥_ℓ = ∥_r implies that ∥ = ∥_ℓ is Clifford.

(a) The parallelism ∥ is Clifford and ∥_ℓ = ∥_r.
(b) If an automorphism β ∈ Γ_∥ fixes all lines of at least one parallel class of ∥, then all parallel classes of ∥ are stabilised under β.
8,960
2020-06-09T00:00:00.000
[ "Mathematics" ]
IT INFRASTRUCTURE MANAGEMENT: OPTIMIZING PERFORMANCE, SCALABILITY, AND RELIABILITY: This abstract examines the essential elements of IT infrastructure management, focusing on improving performance, scalability, and reliability. In today's dynamic business environment, organizations rely heavily on their IT systems for productivity and efficiency. Maintaining these systems requires a strategic approach that balances performance tuning, scalability adjustments, and reliability assurance. This abstract explores the key elements of infrastructure efficiency, including proactive inspections, robust security measures, smooth integration of new technologies, and efficient resource allocation. It emphasizes the importance of flexibility in adapting to evolving business needs, and it argues that such an approach lays the foundation for the flexible, adaptable, and efficient technology ecosystems that modern businesses need to succeed. Managing such systems to increase efficiency, scalability, and reliability requires a holistic approach built on several elements:

1. Performance Improvement: This involves fine-tuning infrastructure components to increase their performance and responsiveness. It includes monitoring system performance, identifying bottlenecks, and using techniques such as load balancing, caching, and configuration tuning to improve overall system speed and agility.

2. Increased Scalability: Scalability is essential to accommodate growth and changes in demand. Systems should be designed so that they can be scaled vertically (increasing the capacity of existing components) or horizontally (adding more components) with little effort. Flexible architectures such as cloud computing, containerization, and virtualization allow resources to be allocated dynamically based on need.

3. Reliability Assurance: Maintaining a high level of reliability ensures that IT services are available and perform consistently. This involves implementing robust backup and disaster-recovery solutions, deploying redundant systems, and adopting fault-tolerant policies to reduce downtime when components fail or stop unexpectedly.

4. Proactive Monitoring and Management: The use of advanced monitoring tools and techniques to identify issues, anticipate potential failures, and respond quickly to anomalies is essential.

I. Important points: Performance Monitoring and Optimization: Performance monitoring and optimization is the process of evaluating, analyzing, and increasing the efficiency and effectiveness of systems, processes, or services. It includes continuously monitoring key metrics such as speed, reliability, and resource usage to identify potential bottlenecks or areas for improvement. Monitoring tools and techniques capture real-time and historical data, enabling analysis of performance problems and their root causes. Optimization focuses on tuning existing components to increase performance; typical activities include adjusting configurations, refining rules, reallocating resources, or adopting more efficient algorithms. The goal is to streamline operations, reduce response times, and enhance the overall user experience. Regular reviews coupled with targeted optimization strategies ensure that systems perform efficiently, adapt to evolving requirements, and meet or exceed business expectations while maintaining maximum operational efficiency.
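To make the monitoring-and-optimization loop described above concrete, here is a minimal, hypothetical Python sketch (standard library only; the metric source, window size, and threshold are illustrative placeholders, not a prescribed implementation). It samples a latency metric, keeps a rolling window, and flags samples that deviate strongly from the recent mean, which is the kind of signal that triggers the optimization work described above:

    import random
    import statistics
    import time
    from collections import deque

    WINDOW = 60            # number of recent samples to keep (illustrative)
    THRESHOLD_SIGMA = 3.0  # flag samples more than 3 standard deviations above the mean

    def sample_latency_ms():
        # Placeholder metric source; in practice this would time a real request
        # (an HTTP health check, a database query, a queue round-trip, ...).
        return random.gauss(120.0, 15.0)

    def monitor(samples=300, interval_s=0.0):
        window = deque(maxlen=WINDOW)
        for _ in range(samples):
            latency = sample_latency_ms()
            if len(window) >= 10:
                mean = statistics.fmean(window)
                stdev = statistics.pstdev(window) or 1.0
                if latency > mean + THRESHOLD_SIGMA * stdev:
                    print(f"anomaly: {latency:.1f} ms (recent mean {mean:.1f} ms)")
            window.append(latency)
            time.sleep(interval_s)

    if __name__ == "__main__":
        monitor()

In a real deployment the anomaly branch would page an operator or feed an autoscaling decision rather than print; the point of the sketch is only the monitor-compare-react loop.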
Scalability Planning and Implementation: Scalability planning requires developing policies and procedures that can accommodate growth and increased demand without sacrificing performance. This includes assessing current capacity, forecasting future needs, and designing strategies to manage expansion effectively. Implementation involves adopting adaptable architectures, technologies, and processes that align with projected growth, using horizontal scaling (adding more of the same components) or vertical scaling (enlarging existing components) as needed. Key implementation steps include load balancing, efficient resource allocation, modular architecture, and the use of scalable services such as cloud platforms. It is important to validate scalability through simulations or stress tests to ensure that systems can handle the increased load. Ongoing assessment, feedback, and adjustment refine the scalability measures over time. Successful scalability design and implementation enables systems to grow, adapt, and respond efficiently to evolving demands, ensuring that they remain effective as projects or applications expand.

Reliability Enhancement through Redundancy and Backup: Enhancing reliability through redundancy and backup involves the strategic duplication of components or systems to ensure continuous operation and reduce the risk of failure in critical applications. Redundancy refers to duplicating important resources within a system so that operation continues smoothly even if one component fails. This can be achieved in a variety of ways, such as hardware duplication, data backups, or the use of backup power sources. Having backups or duplicates in place reduces the chance of a complete system failure and thus increases overall reliability. Backups also act as a safety net, enabling faster recovery and mitigating the impact of failures. This approach is essential in industries where continuous operation is critical, such as aviation, healthcare, telecommunications, and information technology, to keep services running despite disruptions or failures.

II. Security Measures for Data Protection: Security measures for data protection comprise the procedures that protect sensitive information from unauthorized access, breach, or misuse. Encryption plays an important role by encoding data so that it is unreadable without a valid decryption key. Access control manages permissions at entry points and limits data access to authorized individuals or entities. Regular data backups ensure that data can be recovered in the event of a system failure or cyberattack. Robust firewalls and intrusion detection systems strengthen network security and prevent unauthorized access or malicious behavior. Multifactor authentication adds an extra layer of security by requiring multiple credentials for access. Continuous monitoring and regular security audits help identify vulnerabilities and mitigate risks quickly. Employee training and awareness programs foster a security-oriented culture within organizations. Complying with data protection regulations such as GDPR, HIPAA, or CCPA is essential for maintaining data integrity and privacy. Overall, a comprehensive approach spanning technology, policy, and people is needed to strengthen data protection effectively.
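As one hedged illustration of the encryption and backup measures described above, the following Python sketch encrypts a file before it is written to a backup location, so that the backup copy is unreadable without the key. It assumes the third-party cryptography package is available; the file paths are hypothetical placeholders:

    from pathlib import Path
    from cryptography.fernet import Fernet

    def make_key(key_path: Path) -> bytes:
        # Generate a symmetric key once and store it; the key file must be
        # protected (and backed up) separately from the data it protects.
        key = Fernet.generate_key()
        key_path.write_bytes(key)
        return key

    def backup_encrypted(source: Path, backup_dir: Path, key: bytes) -> Path:
        # Encrypt the source file and write the ciphertext into the backup directory.
        token = Fernet(key).encrypt(source.read_bytes())
        backup_dir.mkdir(parents=True, exist_ok=True)
        target = backup_dir / (source.name + ".enc")
        target.write_bytes(token)
        return target

    def restore(encrypted: Path, key: bytes) -> bytes:
        # Decrypt a backup; this fails loudly if the key is wrong or the data was altered.
        return Fernet(key).decrypt(encrypted.read_bytes())

    if __name__ == "__main__":
        key = make_key(Path("backup.key"))                       # hypothetical path
        copy = backup_encrypted(Path("customers.db"), Path("backups"), key)
        print("encrypted backup written to", copy)

Fernet provides authenticated encryption, so tampering with the stored backup is detected at decryption time, which complements the integrity goals discussed above.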
III. Automation and Orchestration for Efficiency: Automation uses technology to perform routine tasks or processes without human intervention, increasing efficiency, reducing manual labor, and streamlining operations; software or systems execute predefined tasks with greater accuracy, consistency, and speed. Orchestration involves coordinating and managing automated tasks or business processes across multiple systems or platforms, integrating the various automated controls so that they work together toward a common goal (see the sketch following Section IV below). Through these mechanisms, organizations optimize resource utilization, reduce errors, and increase overall productivity. Automation and orchestration work together to improve productivity by eliminating manual processes, reducing human error, speeding up task completion, and allowing resources to be allocated more efficiently. This integration enables businesses to streamline operations, keep workflows moving, and focus human effort on tasks that require creativity, problem solving, and critical thinking.

IV. Regular Maintenance and Updates: Regular maintenance and updates refer to the ongoing work of keeping software, systems, or equipment operating correctly, securely, and efficiently. The process includes periodic testing, modification, and improvement to fix potential issues, improve performance, and install the latest features or security patches. For software this means routine debugging, resolving conflicts, and updating applications to the latest versions or frameworks; for machinery and equipment it means lubrication, cleaning, and part replacement to prevent wear and extend service life. Regular maintenance and updates reduce vulnerabilities, lower the risk of system failure, and improve operational efficiency, ultimately saving time, resources, and costly downtime. It is an important strategy for maintaining reliability, security, and optimal performance across environments.

Fig. (i): IT infrastructure management.
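The orchestration sketch referred to in Section III above could, under these assumptions, look as follows in Python (standard library only; the task names and steps are purely illustrative). It runs a set of automated tasks in dependency order, which is the essence of coordinating several automated steps so that they work together toward a common goal:

    from graphlib import TopologicalSorter

    def provision():  print("provisioning virtual machines")
    def configure():  print("applying configuration")
    def deploy():     print("deploying application")
    def smoke_test(): print("running smoke tests")

    # Each task lists the tasks it depends on (names and steps are illustrative only).
    TASKS = {
        "provision":  (provision, set()),
        "configure":  (configure, {"provision"}),
        "deploy":     (deploy, {"configure"}),
        "smoke_test": (smoke_test, {"deploy"}),
    }

    def orchestrate(tasks):
        # A topological sort yields an order in which every task runs after its dependencies.
        order = TopologicalSorter({name: deps for name, (_, deps) in tasks.items()})
        for name in order.static_order():
            func, _ = tasks[name]
            print(f"running step: {name}")
            func()

    if __name__ == "__main__":
        orchestrate(TASKS)

Real orchestrators add retries, parallel execution of independent steps, and state tracking, but the dependency-ordering core illustrated here is the same.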
1,576.4
2024-03-01T00:00:00.000
[ "Computer Science", "Business" ]
The Human RecQ Helicases, BLM and RECQ1, Display Distinct DNA Substrate Specificities* RecQ helicases maintain chromosome stability by resolving a number of highly specific DNA structures that would otherwise impede the correct transmission of genetic information. Previous studies have shown that two human RecQ helicases, BLM and WRN, have very similar substrate specificities and preferentially unwind noncanonical DNA structures, such as synthetic Holliday junctions and G-quadruplex DNA. Here, we extend this analysis of BLM to include new substrates and have compared the substrate specificity of BLM with that of another human RecQ helicase, RECQ1. Our findings show that RECQ1 has a distinct substrate specificity compared with BLM. In particular, RECQ1 cannot unwind G-quadruplexes or RNA-DNA hybrid structures, even in the presence of the single-stranded binding protein, human replication protein A, that stimulates its DNA helicase activity. Moreover, RECQ1 cannot substitute for BLM in the regression of a model replication fork and is very inefficient in displacing plasmid D-loops lacking a 3′-tail. Conversely, RECQ1, but not BLM, is able to resolve immobile Holliday junction structures lacking an homologous core, even in the absence of human replication protein A. Mutagenesis studies show that the N-terminal region (residues 1–56) of RECQ1 is necessary both for protein oligomerization and for this Holliday junction disruption activity. These results suggest that the N-terminal domain or the higher order oligomer formation promoted by the N terminus is essential for the ability of RECQ1 to disrupt Holliday junctions. Collectively, our findings highlight several differences between the substrate specificities of RECQ1 and BLM (and by inference WRN) and suggest that these enzymes play nonoverlapping functions in cells. RecQ helicases are a ubiquitous family of DNA strand-separating enzymes that defend the genome against instability. They derive their name from the prototypical member of the family discovered in Escherichia coli (1). Since this discovery, many other RecQ helicases have been found in various organisms ranging from prokaryotes to mammals (2)(3)(4)(5). Five members of the RecQ family have been found in human cells: BLM, RECQ1 (also known as RECQL or RECQL1), RECQ4, RECQ5, and WRN (4,6). Mutations in the genes encoding RECQ4, BLM, and WRN are responsible for distinct genetic disorders, named Rothmund-Thomson syndrome, Bloom's syndrome, and Werner's syndrome, respectively (7)(8)(9). Although these disorders are all associated with inherent genomic instability and cancer predisposition, they show distinct clinical features, suggesting that these three enzymes are involved in different DNA metabolic pathways. For example, Werner's syndrome patients show typical premature aging features, such as premature graying and thinning of hair, osteoporosis and cataracts, whereas Bloom's syndrome is unique among cancer predisposition syndromes in that Bloom's syndrome patients are predisposed to the development of most types of cancers. Rothmund-Thomson syndrome individuals are characterized by skin and skeletal abnormalities as well as an increase in cancer incidence, predominantly osteosarcoma. Mutations in the RECQ1 and RECQ5 genes may be responsible for additional cancer predisposition disorders that are distinct from Rothmund-Thomson syndrome, Bloom's syndrome, and Werner's syndrome, but this remains to be proven. 
In this regard, interesting candidates are patients with a phenotype similar to that of Rothmund-Thomson syndrome individuals who do not carry any mutations in the RECQ4 gene. Moreover, recent studies have linked a single nucleotide polymorphism present in the RECQ1 gene to a reduced survival in pancreatic cancer patients (10,11). RecQ helicases are ATP- and Mg2+-dependent enzymes that unwind DNA with a 3′ to 5′ polarity. Some RecQ helicases are also able to promote the annealing of complementary DNA duplexes in an ATP-independent fashion (12)(13)(14)(15)(16). Additionally, WRN possesses a 3′ to 5′ exonuclease activity that distinguishes it from the other human RecQ enzymes (17)(18)(19). A property common to all RecQ helicases is their ability to unwind DNA structures other than conventional B-form DNA duplexes. In particular, several RecQ enzymes, including BLM and WRN, preferentially unwind G-quadruplex DNA and synthetic X-junctions that model the Holliday junction recombination intermediate (20-26). Although some differences in substrate specificity between the human, lower eukaryotic, and prokaryotic RecQ enzymes have been described, little is known about the differences that may distinguish the activity of the five human enzymes (27). A comparative analysis of the substrate specificity of human RecQ helicases can provide valuable insights into the molecular basis of the different cellular functions of these enzymes. A previous study showed that BLM and WRN have similar activity toward a panel of model DNA substrates of different structure and length, suggesting that, at least for these two enzymes, differences in helicase substrate specificity are not a fundamental distinguishing feature for defining their specific role in cellular DNA metabolism (23). In this study, we compared the substrate specificity of RECQ1 and BLM using a number of substrates of different structure and length, including substrates that have not been analyzed previously for either RECQ1 or BLM. Our findings highlight several differences between the enzymatic properties of RECQ1 and BLM and suggest that the role of RECQ1 in the maintenance of genome stability is distinct from that of BLM. The possible functions of these RecQ helicases in human cells are discussed.

EXPERIMENTAL PROCEDURES

Proteins-Recombinant His6-tagged RECQ1 and BLM were expressed and purified following previously described procedures (28,29). The RECQ1-(57-649) cDNA was PCR-amplified with a forward primer containing an NheI site at the 5′-end and a reverse primer containing an XhoI site at the 5′-end. The RECQ1-(57-649) cDNA was then cloned into the NheI and XhoI sites of a pET-28a(+) vector (Novagen), and the insert was excised using XbaI and XhoI in order to add the His6 tag at the N terminus. The cDNA with the additional sequence for the His6 tag was subcloned into the pFastBac1 vector using XbaI and XhoI. To obtain the mutant RECQ1-(1-579), a termination codon after residue 579 was created with the QuikChange XL site-directed mutagenesis kit (Stratagene). For the experiments with the untagged RECQ1, the His6 tag sequence was removed by digestion with thrombin (1:500 ratio) for 3 h at 4°C in a buffer of 20 mM Tris-HCl, pH 7.4, 150 mM KCl, 5 mM β-mercaptoethanol. The sample was then incubated with TALON metal affinity resin (Clontech) for 2 h at 4°C. The flow-through containing the untagged RECQ1 was collected and concentrated using a Vivaspin filter (Vivascience).
The experiments with untagged BLM were performed with a truncated variant of the protein, BLM-(642-1290), expressed and purified from E. coli following previously described procedures (30). DNA Substrates-All of the oligonucleotides used in this study are listed in Table 1. For each substrate, a single oligonucleotide was 5Ј-end-labeled with [␥-32 P]ATP using T4 polynucleotide kinase. The kinase reaction was performed in PNK buffer (70 mM Tris-HCl, pH 7.6, 10 mM MgCl 2 , 5 mM dithiothreitol) at 37°C for 1 h. [␥ 32 P]ATP-labeled oligonucleotides were then annealed to a 1.4-fold excess of the unlabeled complementary strands in annealing buffer (10 mM Tris-HCl, pH 7.5, 50 mM NaCl) by heating at 95°C for 6 min and then cooling slowly to room temperature. The purification of the forked duplex substrates was performed using Micro Bio-Spin columns (Bio-Rad) or ProbeQuant G-50 Micro columns (Amersham Biosciences). The G4 and G2Ј DNA substrates carrying the consensus repeat from the murine immunoglobulin S␥2b switch region or the Oxytricha telomeric repeat sequence were prepared as described previously (24,25,31). The Holliday Substrate Specificity of RecQ Helicases junction substrates were purified using Sepharose-4B columns (Amersham Biosciences). The plasmid-based D-loop substrates were generated by RecA-mediated strand invasion of oligonucleotides DL3, DLm, and DL5 into pUC18 and were purified as described previously (32,33). For the RNA-DNA heteroduplex, the radiolabeled rDL30m RNA oligonucleotide was annealed to pUCbottom in a 1:1.2 molar ratio, as described (33) and was used without purification. For the oligonucleotide-based R-loop, the RNA-DNA heteroduplex was further annealed to pUCtop in a 1:1.2 molar ratio and used without purification. The plasmid-based fork regression substrate was prepared as described previously by Ralf et al. (34). Briefly, the pG68 and pG46 plasmids that contain an array of BbvCI restriction endonuclease recognition sites were nicked by N.BbvC IA and N.BbvC IB, respectively. The two plasmids were annealed to create a paranemic joint, which was converted to a plectonemic joint by DNA topoisomerase I treatment to create the RF substrate molecule. The RF1 molecule was radiolabeled on the 3Ј end using [␣-32 P]TTP and Klenow enzyme. DNA Helicase Assays-Helicase assays were performed in 20 l of a reaction mixture containing buffer A (20 mM Tris-HCl, pH 7.5, 8 mM dithiothreitol, 5 mM MgCl 2 , 10 mM KCl, 10% glycerol, 80 g/ml bovine serum albumin), 5 mM ATP, and 32 P-labeled helicase substrate (0.5 nM). These buffer and substrate concentrations were used in all unwinding assays, except where stated. For the G-quadruplex substrates, reactions were performed in buffer B (50 mM Tris-HCl, pH 7.4, 5 mM MgCl 2 , 50 mM NaCl, 100 g/ml bovine serum albumin) using 5 mM ATP and 30 nM 32 P-labeled helicase substrate, as described previously (25). The experiments with the plasmid-based D-loops, the RNA-DNA hybrids, and the R-loops were carried out in a 10-l reaction volume in buffer C (33 mM Tris acetate (pH 7.8), 1 mM MgCl 2 , 66 mM sodium acetate, 0.1 mg/ml bovine serum albumin, 1 mM dithiothreitol, 1 mM ATP). When RNA-based molecules were used as substrates, RNase inhibitor (New England Biolabs) was added to the reaction to a final concentration of 2 units/l. Recombinant RecQ helicase protein (RECQ1 or BLM) was added to a concentration indicated in the figures, and the mixture was incubated at 37°C for the times specified in the figure legends. 
The reaction was terminated by the addition of 20 l of 0.4 M EDTA pH 8.0, 10% glycerol (quench solution). Reaction products were resolved using 10% native PAGE, and the extent of unwinding was quantified as described previously (28). Fork regression assays were carried out as described previously (34). ATPase and Electrophoretic Mobility Shift Assays-The ATPase and DNA binding assays were performed using procedures described previously (28). The rate of ATP hydrolysis was measured using thin layer chromatography assays. Reactions included different DNA probes at the concentrations indicated in the figure legends and 20 nM RECQ1 or BLM in buffer A. The electrophoretic mobility shift assays were performed in buffer B with various protein concentrations using 30 nM G-quadruplex DNA. RESULTS AND DISCUSSION The unusual ability of BLM and WRN to unwind cruciform and G-quadruplex DNA structures provides useful insights into the possible biological function(s) of these enzymes (23). The present study extends these observations, providing the first comparative analysis of the substrate specificity of RECQ1 and BLM (and by extension WRN). A series of oligonucleotide-and plasmid-based DNA substrates with different structures and lengths was generated to compare the substrate specificities of RECQ1 and BLM ( Table 2). Initial experiments using forked duplex substrates with an ssDNA 3 tail of 30 or 8 nt confirmed that both enzymes were catalytically active and were able to unwind fork-like structures with an ssDNA tail of Ն8 nt (supplemental Fig. 1) (data not shown). The similar specific activity of RECQ1 and BLM toward the forked duplex substrate was used to standardize each preparation of enzyme for the unwinding experiments using other DNA substrates. Moreover, all experiments were repeated using at least two independent preparations of each enzyme and under identical reaction conditions to minimize any possible interexperimental variation. An untagged version of RECQ1 was also used to exclude any possible contribution of the His 6 tag in determining the substrate specificity of RECQ1 (supplemental Fig. 2). The results were consistent with our previous finding that His 6 -tagged and untagged RECQ1 have identical ATPase, unwinding, and strand annealing activities (35). Regarding BLM, previous studies have already demonstrated that a bacterially expressed (not His 6 -tagged) truncated variant of BLM, BLM-(642-1290), has the same substrate specificity as the full-length His 6 -tagged BLM isolated from yeast cells toward various linear duplexes, fork substrates, and Holliday junction structures (30). This result was also confirmed with some of the substrates used in this work (supplemental Fig. 2). G-quadruplex DNA; a Putative Role of RecQ Helicases in the Removal of G4 DNA Structures from Ribosomal Gene Clusters, Immunoglobulin Heavy Chain Switch Regions, or Telomeric Repeats-Although G-DNA structures have not been unequivocably observed in vivo, the ease of formation of G-DNA in vitro suggests that G4 DNA structures probably exist at least transiently in cells (31). In particular, G-rich regions are abundant in ribosomal DNA gene clusters, in the immunoglobulin heavy chain switch regions, and within telomeric repeats. 
Although the complementary strand of a duplex would normally protect the G-rich strand from interstrand G-G pairing, cellular processes that promote DNA duplex unwinding, such as replication, transcription, or recombination, generate transient single-stranded DNA stretches that would allow G-quadruplex formation. Previous studies indicated that BLM and WRN could efficiently unwind G-quadruplex DNA substrates (21,23,25,26,36). Our experiments using different concentrations of BLM confirm this finding (Fig. 1A). Surprisingly, given that BLM, WRN, Sgs1, and E. coli RecQ can all unwind G-quadruplex substrates (23-25, 34, 36), RECQ1 is not able to unwind this G4 structure even at RECQ1 concentrations of up to 500 nM or in the presence of a saturating concentration of the single-stranded binding protein, human replication protein A (hRPA) (Fig. 1, A and B). The same set of experiments was repeated using a G4 DNA substrate with a 3′-tail of 20 nt, with very similar negative results, indicating that the length of the tail does not affect the ability of RECQ1 to unwind this kind of DNA structure (Fig. 1, C and D). Again, BLM is able to disrupt this structure (Fig. 1, C and D). We also prepared a G2′ DNA substrate, containing the Oxytricha telomeric repeat sequence and a tail of 7 nt at the 3′-end, to test if RECQ1 could unwind this kind of G-quadruplex formed by two antiparallel hairpin dimers (24). Our data indicate that RECQ1 is unable to unwind G2′ DNA substrates (data not shown). In contrast, BLM can also efficiently unwind this substrate even at a 200-fold lower protein concentration. To investigate the reason for the different unwinding activity of these two human RecQ helicases toward G4 DNA, we compared their ability to catalyze ATP hydrolysis in the presence of G-quadruplexes as cofactors. The calculated kinetic constant (kcat) values for the ATP hydrolysis reactions catalyzed by BLM using G4 or ssDNA probes as DNA cofactors are 260 ± 10 and 330 ± 10 min−1, respectively (Fig. 2A). The equivalent values for RECQ1 are 25 ± 2 and 91 ± 2 min−1, respectively. These results suggest that the inability of RECQ1 to unwind this particular kind of DNA structure might be, at least partially, related to its poor ability to hydrolyze ATP in the presence of G-quadruplexes. Our gel mobility shift experiments showed, however, that RECQ1 efficiently binds G4 substrates with either a 7- or 20-nt 3′-tail with an affinity similar to that measured for the BLM protein (Fig. 2B). These results suggest that binding of RECQ1 to G4 DNA may not trigger the same conformational change necessary for the stimulation of the ATPase activity that is likely to take place with the BLM helicase. Collectively, these findings point to a clear difference in substrate specificity between RECQ1 and all other members of the RecQ helicase family tested thus far, in that RECQ1 is unable to resolve G-quadruplex DNA structures. The fact that G-quadruplexes are generally the preferred substrate of BLM, WRN, Sgs1, and E. coli RecQ indicates that G4 DNA may be a natural target for these helicases in vivo. In this regard, the formation of G-quadruplex structures may contribute to the genomic instability and hyperrecombination phenotypes characteristic of BLM-deficient cells. Our observation that RECQ1 does not unwind these structures indicates that this RecQ enzyme cannot substitute for BLM or WRN in the removal of G-quadruplex DNA structures and is likely to play a distinct function in cells.
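Putting the reported turnover numbers side by side makes this point quantitatively. The short Python sketch below simply recomputes, from the kcat values quoted above, how strongly each enzyme's ATPase activity is depressed by the G4 cofactor relative to ssDNA; the input numbers are those given in the text, and the comparison itself is only illustrative:

    # kcat values (min^-1) for ATP hydrolysis, as reported above (Fig. 2A).
    KCAT = {
        "BLM":   {"ssDNA": 330.0, "G4": 260.0},
        "RECQ1": {"ssDNA": 91.0,  "G4": 25.0},
    }

    for enzyme, values in KCAT.items():
        fold_drop = values["ssDNA"] / values["G4"]
        retained = 100.0 * values["G4"] / values["ssDNA"]
        print(f"{enzyme}: kcat falls {fold_drop:.1f}-fold on G4 DNA "
              f"({retained:.0f}% of the ssDNA-stimulated rate retained)")

    # Approximate output:
    #   BLM: kcat falls 1.3-fold on G4 DNA (79% of the ssDNA-stimulated rate retained)
    #   RECQ1: kcat falls 3.6-fold on G4 DNA (27% of the ssDNA-stimulated rate retained)

Consistent with the interpretation above, RECQ1 retains only about a quarter of its ssDNA-stimulated ATPase rate on G4 DNA, whereas BLM is barely affected.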
RNA-DNA Hybrids; Potential Resolution of Transcription or Replication Intermediates-RNA-DNA hybrids are typical intermediates in the process of transcription and in the initiation of DNA replication. Increasing evidence suggests that a particular kind of RNA-DNA structure known as an R-loop can form as the result of transcription in cells harboring mutations in certain genes or when transcription results in long purinerich stretches of RNA (37)(38)(39)(40)(41)(42). The presence of these R-loops ahead of a translocating replication fork may induce the arrest of DNA synthesis if these structures are not removed. Recent studies have shown that replicative helicases from different organisms, such as the E. coli DnaB, Methanothermobacter thermautotrophicus MCM, and the Schizosaccharomyces pombe Mcm4,6,7 complex unwind DNA-RNA hybrids by translocating along the ssDNA strand, suggesting that R-loops can be removed by these helicases to prevent DNA replication arrest (43). Interestingly, other studies have shown that R-loops may play a role during normal cell growth and may be critical for the maintenance of genome integrity (44 -47). Moreover, following blockade of replication, R-loops may be required for the reassembly of a functional replication complex; the RNA at an R-loop would be extended by a DNA polymerase, opening up the duplex and leading to the recruitment of the replicative complex, thus allowing replication to restart while the original block is repaired or bypassed (45). To our knowledge, the ability of human RecQ helicases to unwind hybrid substrates containing DNA and RNA has not been investigated to date. Our data indicate that the BLM helicase could be involved in the removal of R-loops in vivo, since it can efficiently unwind a RNA-DNA hybrid substrate in which there are 43-nt 3Ј and 5Ј DNA extensions (Fig. 3A). However, as with the G4 DNA substrates, RECQ1 cannot substitute for BLM in this function, since it is unable to unwind RNA-DNA hybrids even in the presence of saturating concentrations of hRPA (Fig. 3, B and C). To study this second striking difference in substrate specificity between BLM and RECQ1 in more detail, we analyzed an oligonucleotide-based R-loop substrate in which an RNA oligonucleotide represented the "invading" strand. Previous studies have indicated that RecQ helicases could unwind such structures where all of the strands were composed of DNA (a D-loop) (16, 32, 48 -50). Our previous analysis of D-loop unwinding by BLM indicated that the unwinding reaction occurred in two stages, where the "top strand" in Fig. 3 (which is not base-paired in the region of the "invading strand") is first displaced, and then the remaining partial duplex is unwound, releasing the free, end-labeled "invading" strand (32). The data in Fig. 4A indicate that BLM can both remove the top strand and subsequently unwind the RNA-DNA hybrid to release a free RNA strand. In contrast, RECQ1 shows only a very low level of activity on this R-loop substrate (Fig. 4B). However, in the presence of 30 nM RPA, RECQ1 is able to efficiently disrupt the R-loop, although in this case, the reaction product is exclusively the RNA-DNA hybrid, representing only the removal of the top DNA strand. These data further confirm that RECQ1 is unable to unwind RNA-DNA hybrid molecules and highlight a second important difference in substrate specificity between RECQ1 and BLM, providing a strong indication that RECQ1 is not involved in the removal of R-loops in vivo. 
Plasmid-based D-loops; a Role in Resolution of Unproductive Recombination Intermediates?-Another substrate that was recently shown to be among the preferred substrates of BLM is the plasmid-based D-loop (32). Previous studies have shown that several RecQ helicases, including RECQ1, are able to unwind oligonucleotide-based D-loops made of three partially complementary oligonucleotides (16, 48 -50). These "static" D-loops are unlikely, however, to mimic D-loop structures generated during homologous recombination reactions in vivo, since they are not capable of branch migration and carry structural features, such as two free double-stranded DNA ends, that are not present in mobile D-loops in vivo. Our data indicate that, although BLM can efficiently displace plasmid-based D-loops with or without a protruding 3Ј or 5Ј ssDNA tail, RECQ1 can only effectively displace a plasmid D-loop with a 5Ј invading terminus and a 3Ј ssDNA tail (Fig. 5, A-C). Interestingly, our titration experiments at increasing protein concentrations indicate that RECQ1 is ϳ10-fold more active than BLM in displacing D-loops with a protruding 3Ј-end. Bachrati et al. (32) demonstrated that BLM resolves the "mobile" plasmid-based D-loop in a single step and suggested that this occurs through branch migration of the three-stranded junction, as would be predicted for a physiologically relevant mechanism of expulsion of the invading ssDNA strand. A possible explanation for the fact that RECQ1 cannot efficiently displace plasmid D-loops without a 3Ј-tail is that RECQ1 cannot branch-migrate. Hence, only the D-loop with a 3Ј-tail can be unwound by RECQ1, because the protein can engage on the 3Ј terminus and unwind the substrate in a "canonical" helicase reaction via denaturation of the duplex portion of the substrate. This 3Ј terminus would not exist in vivo, because the invading strand in the recombination reaction would be far longer than we can mimic in vitro. D-loops with a 5Ј invading strand may, however, be present in vivo, because the RAD51 protein can promote strand invasion from a 5Ј ssDNA end as well as a 3Ј ssDNA end (51). Such a structure represents an unproductive recombination intermediate, since DNA polymerases need a 3Ј-OH terminus on the invading strand to initiate DNA synthesis. Thus, D-loops with 5Ј invading strand must somehow be prevented by an as yet unknown mechanism. Our data suggest that BLM could be involved in this process, providing an additional barrier to the formation of unproductive D-loops with a 5Ј invading strand (32). RECQ1 would be unlikely to participate in such a reaction in vivo due to the lack of a 3Ј-end from which to initiate unwinding. Model Replication Forks; a Putative Role in the Rescue of Stalled Replication Forks-Further indirect evidence in support of our hypothesis that RECQ1 cannot branch-migrate comes from the observation that RECQ1 does not promote the regression of a model replication fork in vitro (Fig. 6). Previous studies demonstrated that BLM uses its branch migration activity to mediate fork regression, generating regressed arms greater than 250 bp in length via the formation of a so-called "chicken foot" structure (34). This reaction is also catalyzed by some other RecQ helicases, such as WRN, but not by E. coli RecQ, and probably utilizes all of the known activities of RecQ helicases: strand separation, branch migration, and DNA strand annealing (52). The ability of BLM and WRN to promote fork regression suggests that BLM and WRN, unlike RECQ1 and E. 
coli RecQ, are involved in the rescue of stalled replication forks in vivo as part of a genome maintenance pathway. If replication forks stall as a result of encountering lesions on the leading strand template, the helicase-mediated fork regression reaction could facilitate template switching, allowing the leading strand to be extended past the lesion and replication to restart after the reversal of the regressed fork (34).
Holliday Junctions; the Processing of Homologous Recombination Intermediates-BLM and WRN can interact with Holliday junctions (HJ) and promote branch migration (20, 22). The ability of these two helicases to branch migrate HJ structures suggests that BLM and WRN might suppress hyperrecombination between sister chromatids and homologous chromosomes by disrupting potentially recombinogenic molecules that might arise at sites of stalled replication forks. Moreover, BLM is able to cooperate with topoisomerase IIIα and RMI1 to promote dissolution of double Holliday junctions (53). To test whether RECQ1 could catalyze Holliday junction disruption, we prepared a 50-bp-long synthetic HJ substrate (4-X12) that contains a 12-bp homologous core, following a previously described procedure (23, 54). Consistent with previous findings (23), recombinant BLM disrupts the X-junction into a splayed arm product and, to a lesser extent, into an ssDNA product using enzyme concentrations between 0.25 and 2 nM (Fig. 7A). At high BLM concentrations, the percentage of splayed arm product increases relative to that of the ssDNA product. A possible explanation for this phenomenon is that higher BLM concentrations are able to promote some reannealing of the ssDNA products, as reported previously (12). Analysis of RECQ1 protein indicates that it is less active than BLM toward the HJ substrate, since only a limited amount of either the splayed arm or the ssDNA product is observed at RECQ1 concentrations below 5 nM (Fig. 7B). These results were also confirmed by the use of kinetic experiments, which show that, although more than 60% of the HJ substrate is unwound within 10 min using 2 nM BLM, less than 50% of the X-junction is resolved into splayed arm or ssDNA products by 2 nM RECQ1 even after 60 min (Fig. 7, C and D). Interestingly, when we repeated the unwinding experiments using a 60-bp X-junction without a homologous core (4-X0), we found that BLM is very inefficient at disrupting this structure, whereas RECQ1 can unwind this 4-X0 substrate with an efficiency similar to that of the 4-X12 substrate (Fig. 8A). Kinetic experiments indicate that RECQ1 resolves this immobile HJ structure without formation of a splayed arm product intermediate, since the amount of splayed arm product detected during the course of the reaction is always very limited and much less than that seen in the experiments with the 4-X12 substrate (Fig. 8B). The BLM protein is only able to resolve these immobile HJ structures if hRPA is added to the reaction mix (Fig. 8C). Similar results were obtained with WRN (V. Popuri and A. Vindigni, unpublished data). These findings suggest that RECQ1 disrupts HJ irrespective of the presence or absence of homology, possibly by using a mechanism that does not require branch migration. The fact that hRPA has a positive stimulatory effect on the activity of BLM suggests that this protein may bind the DNA in a particular mode that facilitates the resolution of HJ structures even in those cases where a preformed, branch-migratable junction is absent. Collectively, these findings point to an additional
important difference between BLM and RECQ1, in that whereas BLM can only unwind HJ structures lacking a homologous core in the presence of hRPA, RECQ1 is able to disrupt such an immobile X-junction even in the absence of hRPA. The different activities of RECQ1 and BLM toward HJ structures indicate that these two enzymes may play different roles in the resolution of HR intermediates in vivo.
Structural Domains Responsible for the Distinct Substrate Specificity of RECQ1 and BLM-The unwinding studies described above emphasize a number of significant differences between the substrate specificities of RECQ1 and BLM. The fact that RECQ1 is unable to resolve G-quadruplexes, RNA-DNA hybrid structures, and plasmid-based D-loops lacking a 3′ ssDNA tail or to substitute for BLM in the regression of model replication forks indicates that there must be some key structural features that distinguish the catalytic domain of RECQ1 from that of the other RecQ enzymes. For example, the previous observations that BLM, WRN, Sgs1, and E. coli RecQ can all unwind G4 substrates led to the conclusion that the domain responsible for this activity must be conserved among these proteins (23-25, 36). These studies, along with the observation that a mutant of the Saccharomyces cerevisiae Sgs1 helicase lacking the N- and C-terminal portions is still able to promote G4 DNA unwinding, led to the suggestion that the determinants necessary for the recognition and unwinding of G-quadruplex DNA must reside in the central "catalytic core" region of RecQ helicases comprising the helicase and the RecQ-conserved domains (24). DNA binding experiments with truncated forms of BLM confirmed that the RecQ-conserved domain of RecQ helicases is involved in the specific recognition of G4 DNA structures (36). However, RECQ1 contains the RecQ-conserved domain and yet is still unable to unwind G4 DNA. In this regard, the recent crystal structure of human RECQ1 shows that the relative position and orientation of the zinc-binding motif and the winged helix that form the RecQ-conserved domain of RECQ1 are different from those of E. coli RecQ (Protein Data Bank code 2V1X). Moreover, the RECQ1 structure shows a prominent β-hairpin, with a tyrosine residue at the tip, located in the wing of the winged helix domain that is much shorter in the structures of the winged helix domains of E. coli RecQ and WRN (55, 56). This hairpin might play an important role in DNA strand separation, as already suggested for other helicases of the SF2 family (57, 58), and in the definition of the substrate specificity of RECQ1. Other important domains for the distinct substrate specificity of RecQ helicases might reside in the N- and C-terminal regions of the proteins that have diverged in sequence between the different RecQ enzymes. For example, RECQ5β has a significantly shorter N-terminal domain compared with RECQ1 and BLM, exists as a monomer in solution, and has a reduced unwinding activity compared with RECQ1 and BLM, suggesting a possible contribution of the N-terminal region to DNA unwinding (13). On the other hand, the C-terminal region of certain RecQ helicases contains an additional helicase-and-RNase D C-terminal domain that plays a role in DNA binding and is missing in RECQ1. Previous studies demonstrated that a key lysine residue, which resides in the helicase-and-RNase D C-terminal domain of BLM, is required for the double Holliday junction dissolution activity of this helicase (59).
Moreover, a 60-amino acid region that lies adjacent to the helicase-and-RNase D C-terminal domain of BLM is essential for the strand annealing activity of BLM and might also be required for higher order oligomer formation (12). To test the functional and structural roles of the N- and C-terminal domains of RECQ1, we engineered two deletion mutants of RECQ1 lacking either the first 56 (RECQ1-(57-649)) or the last 69 (RECQ1-(1-579)) residues. Unwinding experiments using a forked duplex substrate indicate that both mutants are able to fully unwind this substrate with an efficiency similar to that of the full-length protein (Fig. 9). However, the same experiments repeated using the two forms of HJ structure described above demonstrate that RECQ1-(57-649) is unable to resolve these structures regardless of the presence or absence of the homologous core sequence. RECQ1-(1-579) can still unwind these HJ substrates, even if to a somewhat lesser extent compared with the full-length RECQ1. Our previous studies demonstrated that RECQ1 exists in two quaternary structures: higher order oligomers consistent with a pentamer or a hexamer and a smaller species consistent with a monomer or dimer (35). Interestingly, size exclusion chromatography experiments indicate that, although the RECQ1-(1-579) mutant behaves like the wild type protein, the RECQ1-(57-649) protein elutes as a single peak corresponding to the smaller oligomeric form of RECQ1, suggesting that the N-terminal domain of RECQ1 is required for higher order oligomer formation (Fig. 9C). Collectively, our mutagenesis studies indicate that the N-terminal region, or the higher order oligomers formed as a result of its presence, is crucial for the HJ resolution activity of RECQ1. Interestingly, previous studies demonstrated that the same region is not required for the ability of BLM to disrupt HJ substrates, suggesting that the two enzymes use different domains to bind and resolve HJ structures (30).
Conclusions-Collectively, our findings show that RECQ1 has a distinct substrate specificity compared with BLM (and WRN), providing a strong indication that these helicases are likely to perform nonoverlapping functions in cells. Our results on the preference of RECQ1 for HJ substrates are consistent with a role of RECQ1 in HR repair. This would also be consistent with the recent finding that endogenous RECQ1 is associated with the strand exchange protein Rad51 and that depletion of RECQ1 results in spontaneous γ-H2AX focus formation and elevated sister chromatid exchanges (60). Previous studies indicated that BLM might be involved in both early and late steps of HR (32, 53). Several lines of evidence, including the results of this work, suggest that RECQ1 and BLM play different roles in the resolution of HR intermediates. In particular, BLM is the only human RecQ helicase able, in conjunction with topoisomerase IIIα and RMI1, to promote the dissolution of the so-called double Holliday junction structures that can form in the late step of HR (53, 61). Moreover, a specific function of RECQ1 in HR is supported by the analysis of embryonic fibroblasts from RECQ1-deficient mice that are hypersensitive to ionizing radiation and show an increased level of DNA damage and sister chromatid exchanges, indicating that the absence of RECQ1 cannot be compensated for by the presence of BLM (62). A recently published study indicates that endogenous DNA damage that remains unrepaired in cancer cells due to RECQ1 silencing induces cancer cell-specific mitotic catastrophe.
Those authors suggest that RECQ1 might play an important role in the regulation of mitotic cell death in cancer and that this helicase might be a suitable target for the development of new chemotherapeutic agents (63). Although much progress has been made and it is now clear that RecQ enzymes do not simply play redundant roles in cells, the challenge of understanding the unique cellular function of RECQ1 in HR or in other DNA metabolic processes is still open.
[Figure 9 legend, fragment: protein species were detected by protein fluorescence (λex = 290 nm and λem = 340 nm) following a previously described procedure (35); the peak at 9.5 ml corresponds to a calculated molecular mass of ~400 kDa, whereas the peak at 11.5 ml corresponds to ~155 kDa. B, top, unwinding experiments using different concentrations of RECQ1-(1-579) and the forked duplex, 4-X12, and 4-X0 substrates; bottom, size exclusion chromatography profile of RECQ1-(1-579). C, top, unwinding experiments using different concentrations of RECQ1-(57-649) and the forked duplex, 4-X12, and 4-X0 substrates; bottom, size exclusion chromatography profile of RECQ1-(57-649).]
Experimental investigation to thermal performance of different photo voltaic modules for efficient system design https://doi.org/10.1016/j.aej.2022.06.037 1110-0168 2022 THE AUTHORS. Published by Elsevier BV on behalf of Faculty of Engineering, Alexandria University This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Saad Ur Rehman , M. Farooq *, Adnan Qamar , M. Usman , Gulzar Ahmad , M. Sultan , M. Wajid Saleem , Ijaz Hussain , M. Imran *, Qasim Ali , M. Yasar Javaid , Farrukh A. Siddiqui h,* Introduction Every year a huge amount of revenue is spent to buy crude oil to fulfill energy demand that put a heavy burden on the economy of the any country.Abrupt increase of energy demand and depleting conventional fossil fuels has drawn attention of investors, direct buyer as well as the researcher to shift on renewable energy and start the projects to utilize hydel power, biomass gas, ocean tidal wave, solar power energy and wind energy.This will not only fulfil the current needs of the society but will also minimize the environmental contamination caused by conventional energy sources, i.e. crude oil, natural gas and coal [1][2][3][4][5][6][7][8].In the present era, the energy shortage is the most common and important issue.Due to unbalance in demand and supply, energy deficit has been increasing [9][10][11][12][13][14][15], leading towards the economic shutdown gradually.At the moment the setup electric power production is approximate 23,500 megawatt (MW) and energy shortfall lies in between 3000 MW and 6000 MW in peak hours [16].During summer season energy deficit reaches to its maximum level which leads the country under severe load shedding of several hours, in industrial, metropolitan and remote places [17][18][19][20][21]. Approximate 175,800 Giga watts (GW) power was evaluated over the total area under the 5.3 kWh/m 2 mean daily solar irradiances, which is equivalent to the 19 MJ/m 2 per year that show the strong potential of solar energy in this region consuming only 0.001% of this total solar energy to produce electric power which is equivalent to 16-18 Giga watt (GW) [16]. Renewable energy resources are globally used, however solar energy has its own significance and a fraction of energy coming from sun could solve all energy problems of the country.Solar energy is very fair and clean which causes no pollu-tion.Now it is time to invest in solar power projects that is crucial and is the need of the hour [22][23][24][25][26]. Performance analysis and the variations estimated in the measured data are crucial to know for the prediction of best PV module in any particular weather conditions [27][28][29][30].Different research scholars have evaluated the PV performance in different climates.Conventional energy sources are reducing, and energy demand is still increasing day by day, which created a very serious energy short fall in the country.The country has become distinguished among the other countries across the globe due to the availability of long sunshine hours suitable for photovoltaic implementation for power production.It has the potential to fulfil the energy needs of domestic, rural, industrial and irrigation sectors [31][32][33][34][35][36][37]. 
The aim of this study is to focus on the outdoor characterization of three commercially available PV modules in a tropical climate and to examine the performance ratio (PR) with of different performance characteristics.Each module was tested with the IV curves of Copper Indium Gallium di Selenide (CIGS) PV modules, c-Si, and p-Si modules by availing the facility of monitoring system under real time outdoor conditions.The data was collected hourly between 9 am to 5 pm for alternative days of all the months of year.The solar irradiance was recorded after every 10 min that represent 28,000 IV curves of data set for 240 points each.To attain this objective, study is established to determine the module efficiency, performance ratio, power output and temperature effect on the performance of commercially available PV modules.This paper also presented PV system designers on the variation of these coefficients in the field and the fill factor with combination of irradiance and temperature.In general perspective, the wafer technologies performs better than the thin film technol- ogy, but on the other hand thin-film technology was found to be degradation [38].Investigation and experimental study were performed in Malaysia region for the interpretation of the efficiency of four certain types of selected solar panels which belonging to copper indium diselenide, poly-crystalline silicon, monocrystalline silicon, crystalline silicon module with laser grooved buried contact, and amorphous silicon (a-Si) in outdoor weather conditions.This study defined that behavior of poly and mono-crystalline performed with high efficiency at high degree of temperature and amorphous silicon produced well results of efficiency in partially sunny and cloudy weather [39].Experimental study was conducted for more than a whole year in the moderate and less harsh temperature climate region of Perth, located in continent of western Australia, Measurements made significant difference among the selected PV modules technology Copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, crystalline silicon module with laser grooved buried contact, amorphous silicon (a-Si) and measurements result showed that triple junction a-Si generate 14-15% more power in summer season under diffuse sun irradiance and 8 percent in cool winter season than other modules used in this study [40].Experimental study was carried out for the winter season in the climate of Taxila.Performance ratio (ṔR) for selected PV modules (mono-crystalline, polycrystalline and amorphous silicon) was calculated and the results concluded that mono-crystalline module produced 13 percent efficiency (ὴ) in partially sunny and cloudy weather.Amorphous silicon module generates 1.03 mean performance ratio (ṔR) which was significantly more than other modules under the same environment factors where experiment was performed.Furthermore, Module efficiency (ὴ) and performance ratio was declined when temperature of the module gradually increase [41]. 
Temperature is a cognitive factor that tend to have an effect on the resultant output of PV modules [42][43][44][45][46][47][48].The temperature increases on the surface of panel effects its performance and thermal properties.This paper examines the effect of heat on the parameters linked with PV power, output performance and efficiency.Photovoltaic modules are unique to handle the global energy crisis.High operating temperature effect on each module can be evaluated at real operational outdoor conditions by availing the facility of monitoring system.Efficient PV module cooling technique have to be used for this purpose to improve system performance.The most common processes are heat pipe cooling,forced, hydraulic cooling, thermoelectric cooling, natural air cooling and cooling with PCM heat pipe cooling [49]. Different researchers have observed that intense range of temperatures exert an influence on the results of PV panel system.The study of different researchers describes the methodology, experimental setup and procedure to evaluate the module electrical and thermal parameters at both standard test conditions and outdoor conditions.PV module thermo-grapy examined before the production of PV panels showed that calculated values remain within the manufacturer's purposed values [50][51][52].Experimental study conducted for two years in the tropical region of Japan.Investigation showed that seasonal variation put a vigorous influence on the efficiency of photovoltaic system, by the source of this study amorphous silicon module gave well results of output power (P) and efficiency (ὴ) than polycrystalline silicon in summer [53].Same research study was conducted for whole year in the cool region of Norway.This study monitored the performance of solar panels which had different type of material and easily available in market.It showed that mono-crystalline exhibit much more well performance outcome than poly-crystalline and amorphous silicon modules [54]. 
The analysis of soil accumulates which change the efficiency of the solar PV module.For this purpose, 70 days' experiments were performed during outdoor condition.The results revealed that the output performance is reduced to 22 percent.After a continuous ten weeks of dust assertion on PV module, mass to volume ratio of dust on the surface from 0.01 g/m 2 at the beginning but it was gradually changed to 6 g/m 2 at the last day of this experimental study [55].The transmission coefficient and energy conservation performance were reduced caused by dust assertion on the exterior upper layer of PV system [41,51,[56][57][58][59][60][61].The rate of reduction in the output efficiency of a crystalline PV system is directly corresponding to quantity of dust assertion on the surface, is equal to 25% per micrometer.It is very crucial to understand that the environmental conditions highly influence the behavior of PV module efficiency [62].Photo degradation is the process in which the efficiency and the performance of amorphous silicon tend to show reduction in photoconductivity of photovoltaic material when it is exposed to light, this is also called light induced degradation (LID) [63].Annealing is another process in which temperature that is greater than 150 is involved to degradation of PV modules silicon modules [64].Different PV module evaluated in the tropical climate region of Indonesia and compared the performance of three certain type of PV modules in a moderate open-air weather.Module performance ratio, efficiency and normalized energy yields examined thoroughly of every module and investigate the thermal and solar irradiances effect.Mono-crystalline showed healthier results of module output efficiency and also good source of power production at moderate outdoor condition.Module efficiency and performance ratio was declined when temperature of the module gradually increase [65]. This research work consists of experimental investigation, evaluation and analyses for the thermal performance of different PV module for efficient system design under varying weather conditions.After every 20 min, approximately 30% solar irradiance reaches the earth which is enough to meet the energy demands of the global community.However, making efficient use of solar energy is not an easy task.This way allowed the maximum capturing of solar irradiance [66].The data was collected hourly from 9 am to 5 pm after every 30 s for alternative days of all the months of year and sometime experiment had to stop due to rainy and stormy weather.In this study solar radiation values calculated through pyranometer for each PV modules.The solar irradiance was recorded after every 30 s. K-type thermocouples were installed at the central cell of each PV module to examine the overall module temperature.All the measured parametric values were recorded through data monitoring system.The data was collected for varying conditions including sunny, partially cloudy and cloudy days.PV modules used in this study expressed different behavior on different climate conditions.To attain this objective, a comparative study is established between module efficiency, performance ratio, power output and temperature effect to calculate the performance of commercially Photovoltaic modules like thin plate Copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicone and amorphous silicon for complete year. 
Experimental setup and data collection
Photovoltaic modules of five dissimilar materials (thin plate copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicon and amorphous silicon) were analyzed in this study. The experimental setup consists of PV modules, a variable load resistor, a digital pyranometer, an ammeter and a voltmeter, a data logger for the data monitoring system, and K-type thermocouples (OEM WRR2-130) to measure the average temperature of the PV modules. A clarified form of the experimental setup is shown in Fig. 1. PV modules (Canadian solar system) were used to capture the solar radiation from the natural sun. An ammeter (Fluke 77-IV), having a DC current range from 0.001 A to 10 A with a direct-current accuracy of 1.0% +3, and a voltmeter (Fluke 87-V), with a voltage range from 0.001 V to 1000 V and an accuracy of 0.09% +2.34, were used to measure the photovoltaic current and voltage, respectively. A variable load resistor (3250 VRL) was used as a series resistance or potentiometer rated for currents up to 5 A. Five thermocouples were installed at the back of the five dissimilar-material PV modules to take the average temperature. The ambient temperature was also measured through a sixth K-type thermocouple. Solar irradiance values were recorded through a pyranometer (SR30-D1), while the rest of the data was handled by the data acquisition system. The output efficiency of photovoltaic modules strongly depends on the correct orientation, positioning and tilt angle of the modules. The required tilt angle of photovoltaic modules is different for different locations and seasonal conditions. For maximum output power, a large tilt angle is required in the winter season and a comparatively small tilt angle in summer. In the existing literature, many researchers have reported that the tilt angle of a fixed PV module is equal to the latitude angle of that place. If adjustment of the PV module is needed twice a year (summer and winter seasons), then the angle of inclination or tilt angle is adjusted according to the well-known rule presented in equations (1) and (2) [41,51,67]. The latitude of the experimental setup location was 31.69°. Using equations (1) and (2), tilt angles of 16.69° and 46.69° were obtained for summer and winter, respectively. It has been seen that the tilt angle mostly depends upon the time of year. In Fig. 2, the PV module is directly perpendicular to the solar irradiance angle (h). An increase in the output power of the module is achieved with a decrease in the angle (h), and modules deliver maximum power when the sun is normal to the face of the PV module. E represents the solar irradiance reaching the module surface. Experimentation was executed in outdoor weather conditions. The modules were placed on the roof top at a fixed tilt angle of 16.69° for the summer season and 46.69° for the winter season with respect to the horizontal plane, as per the latitude of 31.69° [41,51]. The characteristic parameters of the PV modules were obtained from the I-V curve, drawn by using the variable resistance and multimeters. The data acquisition system calculated the module power output from the module voltage and current measurements made each second. All the performance-related parameters were calculated every second, using the equations given below, at the site where the setup was installed at the required tilt angle, and were then averaged over hourly intervals.
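The tilt-angle rule is easiest to see with numbers. Equations (1) and (2) themselves are not reproduced here, but the values quoted above (16.69° in summer and 46.69° in winter at a latitude of 31.69°) are consistent with the common latitude ± 15° seasonal rule; the short sketch below assumes that rule and is only an illustration, not a reproduction of the authors' exact expressions.

```python
def seasonal_tilt(latitude_deg: float) -> dict:
    """Seasonal tilt angles from the commonly used latitude +/- 15 degree rule.

    Assumption: equations (1) and (2) encode this rule; for a latitude of
    31.69 deg it reproduces the 16.69 deg (summer) and 46.69 deg (winter)
    values quoted in the text.
    """
    return {"summer": latitude_deg - 15.0, "winter": latitude_deg + 15.0}

print(seasonal_tilt(31.69))  # {'summer': 16.69, 'winter': 46.69}
```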
PV modules rated values and parameters analysis
The study of different researchers describes the methodology, experimental setup and procedure to evaluate the module electrical and thermal parameters at both standard test conditions and outdoor conditions. The performance-related parameters, namely power output, normalized power output, module efficiency and performance ratio, are calculated using equations (3)-(7) [40,51]. The performance ratio is defined as

PR = (Pmax / Pmax(STC)) / (I / 1000)   (7)

where I is the measured solar irradiance in W/m². The typical conditions at which different PV modules are rated on the basis of different variables are called standard test conditions and are shown in Table 1. Standard test conditions (STC) express the industry-wide standard values of performance of different PV modules and correspond to an irradiance of 1000 W/m², a cell temperature of 25 °C and an air mass of AM 1.5. The natural sun is used under such conditions as the light source to test the thermal performance of the different PV modules. Generally, the manufacturer of a PV panel provides the standard test condition (STC) values for its modules. The information given in Table 1 at standard test conditions (STC) includes the maximum photovoltaic ratings of each module.
Solar irradiance measurement analysis
The finest way to use solar energy is with the assistance of PV modules, which directly capture and convert solar radiation into direct current, as in equation (3), to produce power that helps fulfill energy demands. As shown in Figs. 3a and 3b, the average values of solar irradiance throughout the year were evaluated with the help of the pyranometer. The intensity of solar heat decreases with the reduction in solar irradiance. The maximum absorption rate was computed in the month of June (664.59 W/m²), due to more sunshine hours and high solar intensity, while December showed the lowest solar irradiance value (240.94 W/m²) due to foggy and moist weather conditions that decreased the solar absorption intensity compared with the other months. In Fig. 3c, the solar irradiance values for the month of June are expressed from 9:00 am to 5:00 pm. The highest average irradiance value of 834.23 W/m² was observed at 1:00 pm and the lowest average solar irradiance of 360.19 W/m² was observed at 5:00 pm. The solar irradiance intensity increased from 9:00 am to 12:00 pm, reached its maximum peak value at 1:00 pm, and afterwards was reduced (from 2:00 pm to 5:00 pm). The higher and lower average values of solar irradiance in June were because of the high and low solar intensity during different hours of the day.
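To make the use of these definitions concrete, the sketch below evaluates the performance ratio of equation (7) together with the module efficiency and normalized power output defined later in the text. The function names and the sample numbers are illustrative assumptions, not data or code from the study.

```python
def module_efficiency(p_max_w, irradiance_w_m2, area_m2):
    # Module efficiency: output power over the incident power on the active area.
    return p_max_w / (irradiance_w_m2 * area_m2)

def normalized_power(p_max_w, p_max_stc_w):
    # Normalized power output (equation (5)): outdoor Pmax relative to the STC rating.
    return p_max_w / p_max_stc_w

def performance_ratio(p_max_w, p_max_stc_w, irradiance_w_m2):
    # Equation (7): PR = (Pmax / Pmax(STC)) / (I / 1000).
    return (p_max_w / p_max_stc_w) / (irradiance_w_m2 / 1000.0)

# Hypothetical one-hour averages for a nominal 100 W module with 0.6 m^2 active area:
print(module_efficiency(84.0, 700.0, 0.6))    # 0.20
print(normalized_power(84.0, 100.0))          # 0.84
print(performance_ratio(84.0, 100.0, 700.0))  # 1.2
```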
Maximum power output analysis
Maximum power output is the key parameter for computing the thermal performance of photovoltaic modules at different outdoor conditions under the natural sun. Usually the current-voltage (I-V) curve is used to express the value of the maximum power output. Here, in this research study, a high-power variable resistance was used for calculating the maximum power point by using equation (4) over the solar irradiation of all the months of the year. All five modules initially follow a linear trend with the solar irradiance: as the solar irradiance increases, the maximum power output of all modules shows an incremental trend, but it declines in the evening as the solar irradiance decreases. This means the PV modules produced maximum power at the maximum solar irradiance, due to the high solar energy absorption and high thermal conductivity. The maximum power output values of the thin plate copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicon and amorphous silicon modules were observed to be 20.6 W, 80.4 W, 84 W, 19.1 W and 17.8 W, respectively, over the whole year. Mono-crystalline silicon has the highest value of maximum power output in the month of June relative to the other PV modules, as shown in Fig. 4b.
Module efficiency analysis
The module efficiency is the ratio of the maximum output power of the module to the total incident solar radiation multiplied by the active area of the module [16,19]. Figs. 5a and 5b present the module efficiency values of the different PV modules (thin plate copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicon and amorphous silicon) for the whole year. It is observed that the module efficiency of the different types of PV modules is directly linked with the maximum power output and inversely related to the module temperature [43,68-70]. Fig. 5a shows that the different types of PV modules have a decreasing trend in the beginning with the increase in solar irradiance; the PV module efficiency shows a decreasing trend and then an increasing trend in the evening as the solar irradiance decreases. The PV module efficiencies of the thin plate copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicon and amorphous silicon modules were observed to be 11.6%, 18.6%, 20.8%, 7.7% and 5.2%, respectively, over the whole year, as shown in Fig. 5b. Mono-crystalline silicon exhibits the highest efficiency compared to the other PV modules due to its high solar absorptivity, high temperature-gaining property and higher value of thermal conductivity.
Performance ratio analysis
The performance ratio is conventionally used to express the maximum output power of the photovoltaic modules calculated at outdoor conditions relative to the product of the irradiance and the maximum output power of the photovoltaic modules computed at standard test conditions (STC). It quantifies the performance of any type of module using equation (7) [19]. The performance ratio shows a decreasing trend with the increase of module temperature, which can be regarded as degradation of the PV module [69,71-73], as can be clearly seen in Fig. 6a. It can be seen that the performance ratios of thin plate copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicon and amorphous silicon were 1.09, 1.16, 1.21, 1.03 and 0.96, respectively, in Fig.
6b.Comparing all the PV modules used, mono-crystalline silicon PV module has the maximum performance ratio (PR) at different test conditions. Module temperature analysis Fig. 7a expressed the average value of temperature for whole year computed through K-type thermocouple (TC 6 ).A maximum average value of temperature was examined at the month of June because of high solar intensity and more sun shine hours and, December showed lowest value due to fogy and moisture conditions of the weather.Fig. 7b shows the temperature variations of five types of PV modules (thin plate Copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicone and amorphous silicon) under natural sunshine.Comparing value of average temperatures of all the PV modules used in this work, mono-crystalline silicon PV module has the highest average value of temperature among all in the month of June due to highest solar absorption than others PV modules.When it comes to compare with all the photovoltaic modules temperature, ambient temperature is lower than module temperature in whole year.The module temperature reduction is also observed due to sudden decrease of solar irradiance.Furthermore, temperature shows direct proportionality with the solar irradiance [74][75][76].Sometime solar cell exhibited greater temperature value than ambient although the intensity level of solar irradiance was not very much.This is because the more energy produces when solar light converted in the heat by stacking in the surface of solar panel which was made of glenzing material. Normalized power output analysis Normalized power output efficiency is the ratio of maximum output power of the photovoltaic modules calculated at outdoor condition to maximum output power of the photovoltaic modules computed at standard test conditions (STC).The normalized output power efficiency was calculated by using formula discussed earlier in equation ( 5) [19].Equation (5) shows the relation of normalized power out, where gp is normalized output power efficiency,Pmax is maximum output power at outdoor condition measured in (W) and Pmax STC ð Þis maximum output power at standard test conditions also measured in Watt.Normalized power output efficiency of different PV modules (thin plate Copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicone and amorphous silicon) were examined 50.27 %,54.03%, 56.2 %, 47.54 % and 44.38 % respectively in overall year is shown in Figs.8a and 8b Monocrystalline silicon exhibit the highest normalized power output efficiency as compared to other PV modules due to quality of high solar absorptivity property, high temperature gaining property and higher value of thermal conductivity. 
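Because the performance ratio and the normalized power output above both start from the maximum power point of each measured I-V sweep, a short sketch of that extraction step may be helpful. The sweep values are hypothetical, and real processing would also need noise filtering and interpolation between samples.

```python
def max_power_point(voltages_v, currents_a):
    """Return (Vmp, Imp, Pmax) from one I-V sweep taken with the variable load resistor."""
    v_mp, i_mp = max(zip(voltages_v, currents_a), key=lambda vi: vi[0] * vi[1])
    return v_mp, i_mp, v_mp * i_mp

# Hypothetical sweep points for illustration only:
v = [0.0, 5.0, 10.0, 15.0, 17.5, 19.0, 20.0]
i = [5.2, 5.1, 5.0, 4.8, 4.6, 3.0, 0.0]
print(max_power_point(v, i))  # (17.5, 4.6, 80.5)
```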
Performance analysis results data and their varying behavior are very crucial to know for prediction of best PV module in any particular weather conditions [27][28][29][30].Comparison of all parameters of the selected modules can be examined after the optimization in Table 2 over the whole year.These operational results of all systems by analyzing and comparing their performances under the outdoor weather conditions.The results shown help to understand the operation of different PV systems to conclude the best one material under specific outdoor weather condition and also at STC standard test conditions.Normalized power output efficiency of different PV modules (mono-crystalline, polycrystalline, amorphous silicon, thin plate Copper indium diselenide, micro crystalline) were examined 56.2 %, 54.03 %, 44.38 %, 50.27 % and 47.54 % respectively in overall year.But Mono-crystalline silicon PV module proved to be distinguished due to its good solar absorptivity rate.PV module efficiency of (mono-crystalline, polycrystalline, amorphous silicon, thin plate Copper indium diselenide and micro crystalline silicone) were observed 20.8 %, 18.6 %, 5.2 %, 11.6 % and 7.7 % respectively in overall year. Conclusion Solar energy capturing by PV modules technology is persuading approach for producing solar power.In this research study, the thermal performance and thermal conversion efficiency of five photovoltaic modules (thin plate Copper indium diselenide, poly-crystalline silicon, mono-crystalline silicon, micro crystalline silicone and amorphous silicon) is examined experimentally under natural sun and outdoor conditions.The contribution of irradiance absorption rate, maximum power output, module efficiency, performance ratio, module temperature and normalized power output efficiency are disclosed in the perspective of thermal performance.Experimental based outcomes described that.All PV modules expressed good solar energy absorption rate, higher module temperature attains, higher module efficiency, higher performance ratio and higher normalized power output efficiency.Average normalized power out of mono-crystalline silicon showed better result 56.2% than other modules.As before mono crystalline calculated 20.8% module efficiency and 1.21 in performance ratio.Mono-crystalline silicon PV module has 5% to 10% higher performance ratio (PR) and power output than others PV modules at different test conditions in this study.Compared the outcome results with the each other in this experiment study, the growing trend and order of PV modules with regard to thermal performance is mono-crystalline, polycrystalline, thin plate Copper indium diselenide, micro crystalline silicone and amorphous silicon.Mono crystalline is famous for its high module efficiency and high absorptivity due to it make from thin wafers of a single silicon crystal with each silicon atom bonded with four neighboring atoms which periodic across the whole crystal.That enhance the capability of better perform in low light condition and having high PR, Mono crystalline is found to be most suitable and should be preferred in renewable energy to make it dominant with solar energy and mitigate energy import bills and the generation of CO 2 in the atmosphere. This research duly concerns the photo thermal properties of different photovoltaic modules and applications require further research. 
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1 Positioning of PV module on tilt angle.
Fig. 2 Experimental setup of solar PV system.
Fig. 3a Solar irradiance average values for twelve months of the whole year.
Fig. 4a Average maximum output power trend of different PV modules for twelve months.
Fig. 4b Average values of maximum power output for different PV modules for the whole year.
Fig. 5a Average module efficiency of different PV modules for the whole year.
Fig. 5b Average values of module efficiencies for different PV modules for the whole year.
Fig. 6a Average performance values of different PV modules for the whole year.
Fig. 6b Average values of performance ratio for different PV modules for the whole year.
Fig. 7a Average ambient temperature of every month for the whole year from 9:00 AM to 5:00 PM.
Fig. 7b Average module temperatures and ambient temperature for the whole year.
Fig. 8a Average normalized output efficiency for the whole year.
Fig. 8b Average normalized output efficiency values of different PV modules for the whole year.
Table 1 Physical dimensions and specifications of different photovoltaic modules.
Table 2 Comparison after optimization of different PV modules for the whole year.
Steep-Slope and Hysteresis-Free MoS2 Negative-Capacitance Transistors Using Single HfZrAlO Layer as Gate Dielectric
An effective way to reduce the power consumption of an integrated circuit is to introduce negative capacitance (NC) into the gate stack. Usually, negative-capacitance field-effect transistors (NCFETs) use both a negative-capacitance layer and a positive-capacitance layer as the gate stack, which is not conducive to the scaling down of devices. In this study, a steep-slope and hysteresis-free MoS2 NCFET is fabricated using a single Hf0.5−xZr0.5−xAl2xOy (HZAO) layer as the gate dielectric. By incorporating a small amount of Al into the Hf0.5Zr0.5O2 (HZO) thin film, negative capacitance and positive capacitance can be achieved simultaneously in the HZAO thin film and good capacitance matching can be achieved. This results in excellent electrical performance of the relevant NCFETs, including a low sub-threshold swing of 22.3 mV/dec over almost four orders of drain-current magnitude, almost hysteresis-free behavior, and a high on/off current ratio of 9.4 × 10⁶. Therefore, using a single HZAO layer as the gate dielectric has significant potential for the fabrication of high-performance and low-power-dissipation NCFETs compared to conventional HZO/Al2O3 stacked gates.
Introduction
As the size of MOSFETs continues to decrease, it has become increasingly difficult to reduce the size of the device below 10 nm. One reason for this is that the power consumption of the device is difficult to reduce. In traditional MOSFETs, the sub-threshold swing (SS) of the device is limited by the Boltzmann limit, resulting in the chip power consumption failing to reach the expected target [1-5]. To reduce the device power consumption, several novel device models have been proposed to achieve a sub-threshold swing (SS) lower than 60 mV/dec at room temperature. The first type proposed is the nano-electromechanical switch (NEM) [6,7]. The NEM is small and low in power consumption. Its SS can even reach single digits, much lower than 60 mV/dec. However, the reliability of NEM devices is relatively low, and the lack of a scalable preparation technology is not conducive to large-scale process production. Another low-power device model is the tunnel field-effect transistor (TFET) [8-11]. In 2017, Adrian Ionescu proposed using the quantum tunneling effect to make TFETs. TFETs can significantly reduce the driving voltage imposed by the gate and have a very low turn-off current. Their SS can be lower than 60 mV/dec, with greatly reduced power consumption. However, the turn-on current is relatively low, and the choice of materials, and the presence of defects in these, significantly impact the development of TFETs. The negative-capacitance field-effect transistor (NCFET) is considered an ideal choice for ultra-low power applications [1,2,4,12-14]. It only requires the addition of a ferroelectric material as an extra layer in the gate dielectric of a traditional MOSFET to act as a negative capacitor and achieve voltage amplification of the channel surface potential, thus breaking the Boltzmann limit such that an SS less than 60 mV/dec can be achieved. At the same time, materials with high dielectric constants, such as HfO2 and ZrO2, which have much higher dielectric constants than SiO2, wide band gaps and good thermal stability, are considered excellent gate dielectric materials and have been applied in process mass production. However, in Germany, Johannes Müller et al.
reported that ferroelectric properties would appear when different elements, such as Si, Y, Al and Zr, were doped into HfO 2 thin films, [15][16][17]. This discovery resulted in ferroelectric materials and NCFET becoming an international research craze [18][19][20][21]. Using HZO thin film as the ferroelectric layer, HZO NCFETs have become one of the solutions to achieving ultra-low-power CMOS technology. HZO NCFETs have the following advantages: (1) high conduction current; (2) symmetrical circuit layout; and (3) CMOS process compatibility. However, these NCFETs contain at least two layers as the gate stack, which complicates the fabrication process and is not conducive to the scaling down of devices. In this investigation, a single-layer (HfZr) 0.5−0.5x Al x O y (HZAO) thin film was found to have both negative capacitance and positive capacitance due to its ferroelectricity, and, thus, capacitance matching could be realized by itself. A single HZAO layer was used as the gate dielectric to fabricate the relevant MoS 2 NCFETs. By optimizing the Al content and the annealing temperature of the HZAO thin film, excellent electrical properties for NCFETs have been achieved, including a low SS of 22.3 mV/dec over almost four orders of drain-current magnitude, almost no hysteresis, and a high on/off current ratio of 9.4 × 10 7 , which, so far, is better than that achieved with HZO-based MoS 2 NCFETs under the same fabrication and measurement conditions. Experimental Method Heavily-doped p ++ -Si wafers with a resistivity of 0.005~0.01 Ω·cm were cleaned by a standard RCA method as substrate/back gate and then were placed in an atomlayer deposition (ALD) chamber. As the ferroelectricity of the HZAO thin film is highly dependent on its annealing temperature and atom ratio [19,22,23], HZAO thin films with different atom ratios were prepared through alternately ALD-depositing Al 2 O 3 , HfO 2 and ZrO 2 at ratios of Al 2 O 3 :HfO 2 :ZrO 2 = 1:5:5 or 1:10:10 or 1:20:20 or 0:1:1 to yield a nanolaminate Hf 0.4 Zr 0.4 Al 0.2 O y or Hf 0.45 Zr 0.45 Al 0.1 Oy or Hf 0.475 Zr 0.475 Al 0.05 Oy or Hf 0.5 Zr 0.5 O y (HZO) thin film on the p ++ -Si substrate using trimethylaluminium (TMA) as the Al source, tetrakis (ethylmethylamino)-Hf (TDMAH) as the Hf source, tetrakis (ethylmethylamino)-Zr (TDMAZ) as the Zr source, and H 2 O as the oxidant. During deposition, the temperatures of the substrate and the Hf/Zr sources were 200 • C and 75 • C, respectively. The resultant thicknesses of the thin films were 8.03 nm, 8.01 nm, 8.03 nm, and 8.00 nm, respectively, measured by ellipsometry. The thin films were then annealed by rapid thermal processing (RTP) at 600 • C, 650 • C, 700 • C and 750 • C, respectively, for a duration of 30 s. MoS 2 flakes were transferred from bulk crystal onto the HZAO or HZO/p ++ -Si substrates by a micromechanical exfoliating method using 3M tapes and PDMS films [24]. Electron-beam lithography (EBL) was used to pattern the source (S) and drain (D) electrodes of the transistors, followed by deposition of 15 nm Cr and 45-nm Au at room temperature by thermal evaporation and a lift-off process to form the S/D electrodes [25][26][27]. The drawn channel length (L) of these transistors was 3 µm and the channel width (W) was 3~5 µm, based on the shape of the flakes. Lastly, the transistors were annealed at 300 • C for 180 s in a N 2 environment to improve the electrical contact between the MoS 2 and metal electrode and to remove the gas and liquid molecules on the MoS 2 [28]. 
A schematic diagram of a back-gated NCFET is shown in Figure 1a and its optical micrograph is shown in Figure 1b.
Optimization of Al Content in HZAO Thin Film
The polarization vs. field (P-E) measurements were carried out on the (Au/Cr)/HZAO/p++-Si capacitor to characterize the ferroelectricity of the HZAO thin films at a frequency of 50 Hz at room temperature in a light-tight condition, with Au/Cr as the top electrode and p++-Si as the bottom electrode. Figure 2a shows the P-E curves of the Hf0.4Zr0.4Al0.2Oy, Hf0.45Zr0.45Al0.1Oy, Hf0.475Zr0.475Al0.05Oy and Hf0.5Zr0.5Oy samples; the inset shows the extracted ferroelectric parameters. The Hf0.475Zr0.475Al0.05Oy thin film exhibited the strongest ferroelectricity, with a total remnant polarization (2|Pr|) of 15.22 μC/cm² and a Pr/Ec ratio of 9.06 pF/cm (Ec is the coercive field), indicating that the ferroelectricity of the HZO thin film could be enhanced by incorporating Al at a suitable content (e.g., 5% Al in an HZO thin film). The capacitance-voltage (C-V) curves of the (Au/Cr)/HZAO/p++-Si capacitors were measured by an Agilent 4284A precision LCR meter. Figure 2b shows the C-V curves of the HZAO thin films with different Al contents at a frequency of 5 kHz. A sharp capacitance peak can be observed around 0 V; a higher peak indicates stronger ferroelectricity and a negative-capacitance effect [29]. For the conventional (Cr/Au)/HfO2/p-Si MOS capacitor in Figure 2c, we did not observe a capacitance peak, which would exclude the impact of stresses or high-frequency signals. The Hf0.475Zr0.475Al0.05Oy thin film exhibited the highest capacitance peak, which was consistent with the P-E measurement result, indicating that the strongest ferroelectricity was for the Hf0.475Zr0.475Al0.05Oy thin film. The peaks indicate that the single-layer HZAO film had both positive- and negative-capacitance characteristics and was able to achieve capacitance matching by itself without another positive-capacitance layer. The drain current vs. gate-source voltage (Id-Vgs) curves of the transistors were measured using a Keithley 4200 SCS semiconductor parameter analyzer in a light-tight and electrically-shielded probe station in an atmospheric-pressure environment at room temperature, as shown in Figure 3a. The sweeping range of Vgs was from −1 V to 3 V, with a sweeping rate of 0.2 V/s and the drain-source voltage (Vds) fixed at 50 mV. The SS-Id relations were extracted from the transfer curves and are shown in Figure 3b. Table 1 lists the off-current (Ioff), on-current (Ion), mobility, SS and hysteresis of the HZAO NCFETs with different Al contents, which were extracted from their relevant transfer curves.
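Before turning to the transfer-curve comparison in Table 1, a quick unit check on the ferroelectric figures quoted above may be useful (this is an illustration, not a value reported by the authors): 2|Pr| = 15.22 μC/cm² together with Pr/Ec = 9.06 pF/cm implies a coercive field of roughly 0.8 MV/cm.

```python
# Back-of-the-envelope check, assuming Pr = 2|Pr| / 2 and Ec = Pr / (Pr/Ec).
pr = 15.22e-6 / 2        # remnant polarization, C/cm^2
pr_over_ec = 9.06e-12    # F/cm
ec = pr / pr_over_ec     # V/cm
print(f"Ec ~ {ec / 1e6:.2f} MV/cm")  # Ec ~ 0.84 MV/cm
```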
From Figure 3 and Table 1, it can be seen that the HZAO NCFETs with different Al contents exhibited similar Ioff and mobility, indicating that the insulation integrity of the HfZrO thin film was not influenced by the incorporation of Al atoms. It is especially worth noting that the Hf0.475Zr0.475Al0.05Oy NCFET (Al content of 5%) exhibited the lowest SS (22.3 mV/dec) over almost four orders of Id magnitude, the largest Ion (2.8 μA/μm) and the smallest hysteresis (~10 mV) among all the samples; i.e., a suitable Al doping concentration was determined to be 5%, at which a high switching speed from the off-state to the on-state can be obtained. In addition, a comparison of electrical properties between the single-layer HZAO and traditional multi-layer gate-stack NCFETs was performed, as shown in Figure 3c and Table 2. For the same thickness of the gate stack, the former exhibited better electrical parameters than the latter. According to the formulas SS = (ln10) × (kBT/q) × [1 + (Cs + Cit)/Cox] and Cit = q²Dit [30], the interface-state density between MoS2 and the positive-capacitance HZAO was determined to be in the range of (2.75~2.94) × 10¹² eV⁻¹ cm⁻², which would not lead to relatively large hysteresis.
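For readers who want to put numbers into the SS expression above, the sketch below evaluates the room-temperature Boltzmann floor that the NC effect allows the device to beat, and the classical (positive-capacitance) SS for an assumed gate stack. The oxide and body capacitances are illustrative assumptions, not values reported in the paper.

```python
import math

k_B = 1.380649e-23    # J/K
q = 1.602176634e-19   # C
T = 300.0             # K

# Thermionic ("Boltzmann") floor for a conventional MOSFET at room temperature.
print(f"ln(10)*kT/q = {math.log(10) * k_B * T / q * 1e3:.1f} mV/dec")  # ~59.5 mV/dec

def classical_ss(c_ox, c_s, d_it):
    """SS = ln(10)*(kT/q)*(1 + (Cs + Cit)/Cox), returned in mV/dec.

    Capacitances are per unit area in F/cm^2; Dit is in eV^-1 cm^-2, so that
    Cit = q*Dit numerically (the q^2*Dit form cited in the text corresponds to
    Dit expressed per joule).
    """
    c_it = q * d_it
    return math.log(10) * k_B * T / q * (1 + (c_s + c_it) / c_ox) * 1e3

# Illustrative assumptions: ~8 nm HZAO with k ~ 25, a small MoS2 body capacitance,
# and the mid-10^12 eV^-1 cm^-2 trap density quoted above.
c_ox = 8.854e-14 * 25 / 8e-7    # F/cm^2
print(f"{classical_ss(c_ox, c_s=1e-7, d_it=2.85e12):.0f} mV/dec")  # above 60 without the NC effect
```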
The C-V curves of the (Au/Cr)/Hf0.475Zr0.475Al0.05Oy/p++-Si capacitors for different annealing temperatures were measured at 5 kHz, as shown in Figure 4b. The Hf0.475Zr0.475Al0.05Oy thin film annealed at 700 °C exhibited the highest capacitance peak, which was consistent with the P-E measurement result, supporting it exhibiting the strongest ferroelectricity, as shown in Figure 4a. The Id-Vgs curves and the SS-Id curves of the Hf0.475Zr0.475Al0.05Oy NCFETs for different annealing temperatures are shown in Figure 5a,b, respectively; the relevant electrical parameters extracted from their transfer curves are listed in Table 3. From Figure 5 and Table 3, it can be observed that the Hf0.475Zr0.475Al0.05Oy NCFET annealed at 700 °C exhibited the lowest SS (20.4 mV/dec) over almost four orders of Id magnitude with a very small anticlockwise hysteresis of −5~−10 mV, attributed to its enhanced ferroelectricity (large Pr/Ec ratio and |2Pr|), as shown in Figure 4a, resulting in a large ferroelectric capacitance (|CFE|) and, thus, a better capacitance match [31,32]. However, the 650 °C-annealed NCFET exhibited the second lowest SS (22.3 mV/dec) without hysteresis. This is because the hysteresis of the MoS2 NCFETs is influenced by both the interface states and the NC effect. The hysteresis caused by interfacial defects is clockwise [33], and an anticlockwise hysteresis is introduced by the NC effect of the ferroelectric thin film [1]. So, the total hysteresis for the MoS2 NCFETs, with HZAO as the gate dielectric, is the sum of the two kinds of hysteresis [34]. Assuming a consistent interface property for the MoS2 NCFETs annealed at 650 °C and 700 °C, i.e., that the two samples have the same clockwise hysteresis, the total hysteresis depends on their anticlockwise hysteresis. For the 650 °C-annealed Hf0.475Zr0.475Al0.05Oy NCFET, its clockwise hysteresis just counteracted its anticlockwise hysteresis, resulting in hysteresis-free behavior. For the 700 °C-annealed Hf0.475Zr0.475Al0.05Oy NCFET, its anticlockwise hysteresis was slightly larger than its clockwise hysteresis, resulting in a total anticlockwise hysteresis of −5~−10 mV (almost hysteresis-free behavior). Therefore, it is suggested that a reasonable range for the annealing temperature is 650 °C~700 °C. To support the electrical results, X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS) measurements were undertaken on the HZAO thin films for different Al contents and annealing temperatures to investigate their crystal structure. Figure 6a,b show the XRD patterns of the HZAO thin films with different Al contents and annealing temperatures, respectively. The diffraction peaks occurring at around 30.5° and 36° are attributed to the orthorhombic phase [16,35,36]; a higher peak implies an enhanced orthorhombic phase and, thus, stronger ferroelectricity [19,37]. The 5% Al content and 700 °C-annealed HZAO thin films exhibited the highest diffraction peaks, indicating the strongest ferroelectricity, which is consistent with the P-E measurement results shown in Figures 2a and 4a. Figure 6c,d show the Hf 4f, Zr 3d, Al 3p and O 1s XPS spectra of the Hf0.475Zr0.475Al0.05Oy and Hf0.45Zr0.45Al0.1Oy thin films, respectively. The percentage of the Al element in the two thin films was 2.62% and 5.33%, respectively, based on calculation of the Al 3p/O 1s peak-area ratio. Similarly, the Hf 4f/O 1s and Zr 3d/O 1s ratios were calculated to be 48.23% and 46.53% for the Hf0.475Zr0.475Al0.05Oy sample, and 45.36% and 43.98% for the Hf0.45Zr0.45Al0.1Oy sample. So, their chemical formulae were Hf0.4823Zr0.4653Al0.0524Oy and Hf0.4536Zr0.4398Al0.1066Oy, respectively, which is consistent with the atom ratios of the ALD-grown HZAO thin films, confirming the validity of the electrical results.
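The step from the XPS peak-area ratios to the chemical formulae quoted above is essentially a normalization of the relative cation signals. The sketch below shows only that final normalization step and assumes the signals have already been corrected for sensitivity factors; the inputs are chosen to reproduce the Hf0.4823Zr0.4653Al0.0524Oy formula and are not raw data from the paper.

```python
# Hypothetical sensitivity-corrected cation signals for the 5% Al sample.
signals = {"Hf": 48.23, "Zr": 46.53, "Al": 5.24}

total = sum(signals.values())
fractions = {el: v / total for el, v in signals.items()}
formula = "".join(f"{el}{x:.4f}" for el, x in fractions.items()) + "Oy"

print(fractions)  # {'Hf': 0.4823, 'Zr': 0.4653, 'Al': 0.0524}
print(formula)    # Hf0.4823Zr0.4653Al0.0524Oy
```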
To support the electrical results, X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS) measurements were undertaken on the HZAO thin films with different Al contents and annealing temperatures to investigate their crystal structure. Figure 6a,b show the XRD patterns of the HZAO thin films with different Al contents and annealing temperatures, respectively. The diffraction peaks occurring at around 30.5° and 36° are attributed to the orthorhombic phase [16,35,36]; a higher peak implies an enhanced orthorhombic phase and, thus, stronger ferroelectricity [19,37]. The 5% Al content and 700 °C-annealed HZAO thin films exhibited the highest diffraction peaks, indicating the strongest ferroelectricity, which is consistent with the P-E measurement results shown in Figures 2a and 4a.

Figure 6c,d show the Hf 4f, Zr 3d, Al 3p and O 1s XPS spectra of the Hf0.475Zr0.475Al0.05Oy and Hf0.45Zr0.45Al0.1Oy thin films, respectively. The percentage of the Al element in the two thin films was 2.62% and 5.33%, respectively, based on calculation of the Al 3p/O 1s peak-area ratio. Similarly, the Hf 4f/O 1s and Zr 3d/O 1s ratios were calculated to be 48.23% and 46.53% for the Hf0.475Zr0.475Al0.05Oy sample, and 45.36% and 43.98% for the Hf0.45Zr0.45Al0.1Oy sample. So, their chemical formulae were Hf0.4823Zr0.4653Al0.0524Oy and Hf0.4536Zr0.4398Al0.1066Oy, respectively, which is consistent with the atom ratios of the ALD-deposited HZAO thin films, confirming the validity of the electrical results.

Figure 6. XRD patterns of (a) HZAO thin films with different Al contents and (b) Hf0.475Zr0.475Al0.05Oy thin films annealed at 600 °C, 650 °C, 700 °C and 750 °C, respectively; Hf 4f, Zr 3d, Al 3p and O 1s XPS spectra of (c) the Hf0.475Zr0.475Al0.05Oy thin film and (d) the Hf0.45Zr0.45Al0.1Oy thin film.

Conclusions
In this investigation, back-gated MoS2 NCFETs with a single HZAO layer as the gate dielectric were successfully fabricated. The effects of Al content and annealing temperature on the device performances were investigated.
It was found that the 700 °C-annealed Hf0.475Zr0.475Al0.05Oy (5% Al content) NCFET exhibited the lowest SS (~20 mV/dec), the highest Ion/Ioff ratio [(9.1~9.8) × 10^6] and almost hysteresis-free behavior, attributed to its strong ferroelectricity and NC effects. The 650 °C-annealed Hf0.475Zr0.475Al0.05Oy NCFET exhibited better comprehensive performance, including low SS (~22 mV/dec), no hysteresis and a high Ion/Ioff ratio [(8.8~9.5) × 10^6], attributed to its second strongest ferroelectricity and the excellent balance between the anticlockwise hysteresis from the NC effect and the clockwise hysteresis from the HZAO/MoS2 interface. XRD, C-V and P-E measurements were used to confirm the ferroelectricity of the Hf0.5−0.5xZr0.5−0.5xAlxOy thin films with different Al contents and annealed at different temperatures, indicating strong ferroelectricity for the Hf0.475Zr0.475Al0.05Oy samples annealed at 650~700 °C, with higher diffraction peaks of the orthorhombic phase, C-V peaks relevant to the NC effect, and large Pr. Compared to single HZO layer and gate-stacked NCFETs, the single HZAO layer gate-dielectric NCFET has significant potential for small-scale and low-power devices due to its excellent sub-threshold properties and simplified preparation process.

Author Contributions: Writing-original draft preparation, X.T.; writing-review and editing, L.L. and J.X. All authors have read and agreed to the published version of the manuscript.
Funding: This research is financially funded by the National Natural Science Foundation of China under Grants 61974048 and 61774064.
Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest: The authors declare no conflict of interest.
Artificial MiRNA Knockdown of Platelet Glycoprotein lbα: A Tool for Platelet Gene Silencing In recent years, candidate genes and proteins implicated in platelet function have been identified by various genomic approaches. To elucidate their exact role, we aimed to develop a method to apply miRNA interference in platelet progenitor cells by using GPIbα as a proof-of-concept target protein. After in silico and in vitro screening of siRNAs targeting GPIbα (siGPIBAs), we developed artificial miRNAs (miGPIBAs), which were tested in CHO cells stably expressing GPIb-IX complex and megakaryoblastic DAMI cells. Introduction of siGPIBAs in CHO GPIb-IX cells resulted in 44 to 75% and up to 80% knockdown of GPIbα expression using single or combined siRNAs, respectively. Conversion of siGPIBAs to miGPIBAs resulted in reduced silencing efficiency, which could however be circumvented by tandem integration of two hairpins targeting different regions of GPIBA mRNA where 72% GPIbα knockdown was achieved. CHO GPIb-IX cells transfected with the miGPIBA construct displayed a significant decrease in their ability to aggregate characterized by lower aggregate numbers and size compared to control CHO GPIb-IX cells. More importantly, we successfully silenced GPIbα in differentiating megakaryoblastic DAMI cells that exhibited morphological changes associated with actin organization. In conclusion, we here report the successful use of miRNA technology to silence a platelet protein in megakaryoblastic cells and demonstrate its usefulness in functional assays. Hence, we believe that artificial miRNAs are suitable tools to unravel the role of a protein of interest in stem cells, megakaryocytes and platelets, thereby expanding their application to novel fields of basic and translational research. Introduction Platelets play a pivotal role in thrombosis and haemostasis but also in inflammatory processes such as atherosclerosis or infectious diseases [1]. To further expand our understanding of platelets, several genomic, transcriptomic and proteomic studies have been performed leading to the identification of thousands of candidate genes for which the vast majority of them are of unknown function [2][3][4]. Gene silencing by RNA interference is a powerful approach to determine the function of a gene, however this cannot be applied directly to platelets as they are anucleated cells. Direct introduction of small interfering RNAs (siRNAs) in platelets is further hampered by low transfection efficiency and the high sensitivity of platelets to permeabilisation techniques, resulting in an altered physiology [5]. The marginal synthesis of proteins by platelets furthermore implies that a post-transcriptional technique such as RNA interference will only have limited success when applied directly [6]. The study of platelets in which expression of a protein is suppressed therefore requires stable genetic modification of either the megakaryocyte (progenitor of platelets) or hematopoietic stem and progenitor cells (HSPC), from which transgenic human platelets can be generated [7]. RNA interference can be achieved by introducing siRNAs directly into target cells or be produced by longer RNA precursors such as short hairpin RNAs (shRNAs) or micro RNAs (miR-NAs) [8]. Although shRNA molecules have frequently been used to knock down expression of a gene of interest in various cell types, a growing number of reports have shown cytotoxic effects and immune responses triggered by shRNAs [9][10][11][12]. 
In light of these reports, artificial miRNA sequences, in which the stem sequence of a natural miRNA has been replaced by a sequence targeting the gene of interest represent a superior tool for efficient gene knockdown [12,13]. In addition, as opposed to polymerase type III promoter driven shRNAs, miRNAs can be transcribed from polymerase type II promoters, which can allow targeting gene silencing to a particular cell type [12]. There are only few examples of the use of shRNA technology to genetically modify platelets via transduction of mouse or human HSPC, reviewed elsewhere [7], [10,[14][15][16]. The aim of our study is therefore to establish miRNA as a powerful tool to genetically modify platelets or megakaryocytic cell lines to use in platelet functional assays. As proof of principle, we developed a miRNA-expressing vector targeting GPIbα, the most functionally important subunit of the GPIb-V-IX complex. Absence or dysfunction of GPIb-V-IX results in the Bernard-Soulier Syndrome, a bleeding disorder characterised not only by impaired platelet adhesion, but also by macrothrombocytopenia, due to a disturbed link between the GPIb-V-IX complex and the underlying cytoskeleton during platelet and/or MK formation [17]. We here report the use of miRNA-expressing vectors generated by incorporation of in vitro validated siRNA duplexes into a human miRNA-30a (miR30) scaffold to successfully knockdown a platelet gene (GPIbα) in two cell line models. We demonstrate that cells transfected with miRNA vectors lose their ability to fully aggregate and display impaired actin cytoskeleton rearrangement. Determination of siRNA and miRNA mediated knockdown in CHO GPIb-IX cells CHO GPIb-IX cells at 90% confluence were transfected in T-25 flasks with 240 pmol siRNA oligos or 240 pmol Block IT fluorescent oligo (Life Technologies) to determine siRNA transfection efficiency or alternatively 18 μg pCMV-miGPIBA-eGFP or 18 μg pCMV-eGFP using Lipofecta-mine2000 (Life Technologies) according to the manufacturer's instructions. Transfection efficiencies for siRNA and miRNA mediated knockdown was assessed using Block IT fluorescent oligos (Life Technologies) and pCMV-eGFP plasmid, respectively. As previously shown [24][25][26], insertion of miRNA sequences targeted to a transgene in 5' or 3' of the ORF of a reporter gene resulted in a nearly complete loss of eGFP expression ( Figure A in S2 File). Cells were stained 48h post transfection using the anti-GPIbα monoclonal antibody (moAb) 6B4 and a goat anti-mouse-PE secondary Ab (Jackson Immunoresearch, West Grove, PA) [27]. After fixation, cells were analyzed on an EPICS XL-MCL Flow Cytometer (Beckman-Coulter, Fullerton, CA). A gate was set for viable CHO GPIb-IX cells as determined by propidium iodide staining (data not shown) where 10,000 events were collected. Analysis was done using Flowing software 2.5 version (http://www.flowingsoftware.com/). CHO GPIb-IX aggregation assay The CHO GPIb-IX cell aggregation assay was performed as previously described with minor modifications [28]. Cells grown at 90-95% confluence in a 12-well plate were transfected in Optimem (Invitrogen) using per well a mixture of 1.5 μg pDNA and 3.71 μg polyethyleneimine (Polysciences, Warrington, PA) resuspended in 150mM NaCl, which had been pre-incubated for 20 min prior to transfection. 
After 24 h, the transfection solution was replaced with culture medium for another 24 h, after which harvested cells (2×10^5 cells) were transferred to a 24-well plate and incubated with 2.5 μg/ml von Willebrand factor (VWF) (Haemate P, CSL Behring, King of Prussia, PA) and 1.4 mg/ml ristocetin (ABP, Surrey, UK). Cells were then placed on a rotary shaker for 20 min at 360 rpm and analysed on an Eclipse TE-200 inverted fluorescence microscope (Nikon, Tokyo, Japan) coupled to an Orca R2 CCD camera (Hamamatsu Photonics, Hamamatsu, Japan). For each condition, 4 experiments were conducted, for all of which 3 contiguous pictures were taken and analysed using HC Image software (Hamamatsu Photonics). In all experiments, cell aggregates were identified using HC Image software by excluding single cells, doublets and triplets, and validated manually to remove eventual false aggregates (e.g. dust, debris). The number of aggregates and the surface area covered by each aggregate were then calculated using HC Image software.

DAMI immunolabelling
For each condition, 1×10^6 DAMI cells were transfected with pCMV-miGPIBA-2+3-eGFP, pCMV-eGFP or with mock control differentiation medium using the Amaxa Nucleofector II and Cell Line Nucleofector Kit C (both Lonza, Basel, Switzerland) according to the manufacturer's instructions. Cells were cultured in differentiation medium for 48 h, after which GPIbα was detected using flow cytometry as described above. In parallel, immediately after nucleofection, 1×10^5 cells were transferred to a Lab-Tek chamber (Thermo Fisher Scientific, Waltham, MA) containing 1 ml differentiation medium to examine the effects of GPIbα knockdown on cell morphology. After 48 h, cells were fixed in 4% paraformaldehyde and stained with anti-GPIbα moAb 6B4 (20 μg/ml final) and a rabbit anti-mouse-FITC secondary Ab (30 μg/ml final) (Jackson Immunoresearch). The cytoskeleton was visualized by incubating cells with phalloidine-TRITC (7.5 μg/ml final) (Merck Millipore, Billerica, MA) and cells were mounted using ProLong Gold Antifade reagent containing DAPI (Life Technologies) for nuclear counterstaining. Cells were analysed using a Nikon C1 confocal laser scanning microscope (Nikon) equipped with a Plan Apo VC 60× 1.4 NA oil immersion objective lens. Pictures were captured sequentially to prevent bleaching using appropriate excitation and emission filters for each fluorophore (DAPI: 405 nm laser with 450/50 nm band pass filter; FITC: 488 nm laser with 525/50 nm band pass filter; TRITC: 561 nm laser with a 605/40 nm band pass filter). Image analysis was performed using EZ-C1 software (Nikon). Maximal length and width of the cells were determined by drawing a straight line on the confocal images along the cell axis and another one perpendicular to it, respectively. The cell aspect ratio was calculated by dividing the maximal width by the maximal length of each cell (n = 3 experiments; 3 replicates per experiment).

Statistical analysis
All statistical analyses were performed using GraphPad Prism 5 (Graphpad Software, San Diego, CA). All data were analyzed by unpaired Student t test or one-way analysis of variance (ANOVA) followed by Dunnett's or Tukey's post-tests. Differences were considered statistically significant when *p ≤ 0.05, **p ≤ 0.01 and ***p ≤ 0.001.
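The comparisons described above (one-way ANOVA with post-hoc testing on small groups of replicates) can also be reproduced outside GraphPad with standard Python libraries. The sketch below uses made-up replicate values purely to show the workflow, and uses Tukey's post-hoc test from statsmodels rather than GraphPad's implementation; the condition names and numbers are illustrative assumptions, not data from this study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical GPIbα-positive percentages for three conditions (illustration only).
groups = {
    "mock":        [95.0, 93.2, 96.1],
    "miGPIBA-2":   [55.4, 60.2, 58.7],
    "miGPIBA-2+3": [27.5, 30.1, 25.9],
}

# One-way ANOVA across the three conditions
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post-hoc test for all pairwise comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```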
Results In silico and in vitro siRNA selection and testing As described under Materials and Methods, three siRNA sequences targeting different regions of GPIBA mRNA were selected and their ability to mediate GPIbα knockdown was evaluated 48h after transfection of CHO GPIb-IX cells with the siRNA duplexes. Transfection efficiencies using Oligo Block iT were around 45-50% 48h post transfection and were not significantly different between the experimental days (S1 Fig). A significant knockdown could be detected in CHO GPIb-IX cells transfected with siGPIBA-2, siGPIBA-3 or in combination of the two: 65.4 ± 5.7%, 72.9 ± 8.5% and 74.7 ± 4.5%, respectively (Fig 1B, 1C, 1D and 1F; p<0.001). Although siGPIBA-1 significantly downregulated GPIbα expression, it was not as efficient as the other two siRNAs (44.3 ± 0.2% knockdown; p<0.05) (Fig 1A and 1F). As expected, using siGPIBA-1 in combination with siGPIBA-2 and siGPIBA-3 did not significantly further increase the knockdown of GPIbα expression (74.7 ± 4.5% versus 79.7 ± 1.5%) (Fig 1E and 1F). Based on these results, siGPIBA-2 and siGPIBA-3 were selected for the development of miRNAs. Silencing of GPIbα by miRNA vectors We developed miRNA vectors by omitting three nucleotides from siGPIBA-2 and siGPIBA-3 to generate miGPIBA-2 and miGPIBA-3 which were integrated in a human miR30 loop and flanking sequences, creating pre-miRNAs (pre-miGPIBA-2 and pre-miGPIBA-3) which were ultimately cloned into the pCMV-eGFP backbone in various configurations (S1 File, Table 1). When analyzing the transfected cells for GFP expression, a dramatic decrease in expression levels was observed in all CHO GPIb-IX cells transfected with miRNA vectors (Figure A in S2 File) as previously observed [23,25,26]. A loss of GFP expression was also observed when the miRNA sequence was placed at the C-terminal of the reporter gene. Indeed, CHO GPIb-IX cells transfected with pCMV-eGFP-miGPIBA-2 exhibited a dramatic decrease in GFP expression compared to CHO GPIb-IX cells transfected with pCMV-eGFP ( Figure A in S2 File). In light of these results, transfection efficiencies were determined using CHO GPIb-IX cells transfected with pCMV-eGFP in parallel with the GPIBA miRNA vectors and were routinely around 50% ( Figure B in S2 File). In order to improve knockdown of GPIbα, we combined multiple miRNAs in a single plasmid as this approach has been reported to be successful ( Figure B in S1 File) [26]. Remarkably, expression of GPIbα in CHO GPIb-IX cells transfected with the pCMV-miGPIBA-2+2-eGFP containing two identical miRNA sequences was not significantly decreased compared to control CHO GPIb-IX cells or cells transfected with a miRNA vector containing miGPIBA-2 or miGPIBA-3 (38.6 ± 17.5% knockdown; p>0.05; Fig 2C and 2E). However, when using a combination of two different miRNA sequences, a significantly higher knockdown of GPIbα expression could be achieved compared to either individual miRNAs (71.9 ± 6.6%; p 0.01) (Fig 2D and 2E). GPIbα downregulation reduces ristocetin-induced von Willebrand factor-GPIb dependent cell aggregation Since successful silencing of GPIbα should impair the interaction with its main ligand VWF, we performed cell aggregation assays with mock, pCMV-eGFP or pCMV-miGPIBA-2+3-eGFP transfected CHO GPIb-IX cells in the presence of VWF and ristocetin, which is needed to induce the interaction between GPIbα and VWF in this type of assay. 
Following rotary shaking, the number and size of aggregates formed were evaluated and was similar in pCMV-eGFP or mock transfected CHO GPIb-IX cells (50.6 ± 9.9 vs. 57.0 ± 10.8 aggregates and 15.2 ± 1.6 vs. 11.9 ± 2.1 arbitrary units (A.U.), respectively) (Fig 3B, 3C, 3E and 3F). As expected, no aggregate could be formed when VWF was omitted in the assay (Fig 3A). Transfection of CHO GPIb-IX cells with pCMV-miGPIBA-2+3-eGFP exhibited reduced number (27.1 ± 4.1; p<0.05) and size of the aggregates (5.1 ± 1.1 A.U., p 0.01) (Fig 3D-3F). Further analysis revealed that the average aggregate size was 32.7 ± 8.6% smaller in mean aggregate size, despite the fact that only 15.4 ± 1.3% of the cells were successfully transfected and thus lacking GPIbα expression. This prompted us to perform a control experiment in which CHO GPIb-IX cells were mixed with CHO β9 cells in a 85:15 ratio before performing the aggregation assay, thus mimicking the GPIBA miRNA transfection conditions. Under these conditions, a reduction in the number of aggregates (33.4 ± 1.6%) and mean aggregate size (27.0 ± 1.5%) was observed as compared to a population of 100% CHO GPIb-IX cells, thus validating our results observed in CHO GPIb-IX cells transfected with pCMV-miGPIBA-2+3-eGFP (S3 File). Down regulation of GPIbα expression in megakaryoblastic DAMI cells triggers reorganization of the actin network Another hallmark of GPIbα dysfunction or deficiency is abnormal megakaryopoiesis and the formation of giant platelets. Moreover, the validation of the miRNA constructs in human megakaryocytic or megakaryoblastic cell types is an important milestone towards the ultimate goal of achieving stable RNAi-mediated target gene knockdown in CD34 + HSPC. We therefore studied the effects of GPIbα deficiency during megakaryopoiesis by knocking down GPIbα expression in megakaryoblastic DAMI cells. A subset of the DAMI cell population expresses the MK-and platelet-markers GPIb-V-IX and αIIbβ3. Surface expression of both receptors can be upregulated by stimulating differentiation of the cells to a more mature phenotype [29]. In our hands, PMA-induced differentiation led to shape change and adhesion of cells concomitant with an increase in ploidy (from predominantly 2N to 4N and going up to 16N) and an upregulation of both GPIbα (from 4.1 ± 1.0% to 25.3 ± 8.1%; Fig 4D) and αIIbβ3 (from 11.9 ± 0.7% to 73.2 ± 7.0%; n = 3) (data not shown). To validate the use of miRNA vectors in MK/platelet lineage cells, DAMI cells were nucleofected with pCMV-miGPIBA-2+3-eGFP, and stimulated with PMA to induce GPIbα expression. After 2 days, 64.4 ± 6.5% of eGFPnucleofected DAMI cells expressed eGFP. In contrast to mock-or eGFP-nucleofected DAMI cells, no upregulation of GPIbα expression could be detected in pCMV-miGPIBA-2+3-eGFP nucleofected DAMI cells and GPIbα expression remained at baseline expression levels (2.6 ± 0.8%; p<0.05), attesting of a successful knockdown of GPIbα. In addition, overt differences in actin organization could be observed in GPIBA miRNA-transfected cells compared to mock-or eGFP-transfected cells, with GPIBA miRNA-transfected DAMI cells displaying a more stretched or elongated morphology (Fig 4A-4C). These morphological cell shape changes could be quantified by calculating the cell aspect ratio which gives an estimate of overall cell morphology. Cells lacking detectable GPIbα showed a significantly lower cell aspect ratio compared to mock-and eGFP-transfected cells (0.57 ± 0.03 vs. 
0.79 ± 0.03 and 0.78 ± 0.03 respectively; p 0.001) (Fig 4E), confirming that miRNA technology can be successfully used to study the function of a candidate gene during megakaryopoiesis. Discussion Over the last decade, technological advances with genomics, transcriptomics or proteomics have revolutionised human genetic research including our knowledge in platelet biology. Nevertheless, it remains a challenge to assign function to thousands of candidate genes/proteins in platelets as direct molecular biology approaches cannot be applied due to their anucleated nature. We here report on the development of a tool using miRNA vectors to successfully knockdown a platelet protein GPIbα in a megakaryoblastic cell line that could be applied to HSPC in view to generate genetically modified human platelets. We started developing our strategy by first selecting different siRNAs based on multiple parameters defining an efficient siRNA capable of successfully downregulating a target gene. All selected siRNAs were successful at downregulating GPIbα expression in CHO GPIb-IX cells, albeit to a different extent, with siGPIBA-2 and miGPIBA-3 being the most powerful (Fig 1). These findings corroborate the general recommendation that despite extensive in silico screening procedures and the publication of numerous siRNA design guidelines, only actual in vitro or in vivo tests provides truly reliable information regarding the knockdown potential of a siRNA [20,21]. Although the use of siRNAs allows rapid screening for efficient target sequences, direct introduction of siRNA in platelets is currently hampered by a low transfection efficiency [5]. Moreover, de novo synthesis of proteins in anucleate platelets only occurs from a limited number of cytoplasmic mRNAs [30] and most platelet proteins are synthesized in the MK from which platelets originate [6]. Hence, the best method to obtain and study platelets or MK lacking expression of one or more proteins is by genetic modification of nucleated platelet progenitor cells. We therefore converted siRNAs to plasmid DNA-based miRNAs, which can be transferred to vectors capable of integrating in the host genome, thus resulting in stable and long term knockdown [31,32]. A report from Amendola et al. suggests that the use of a scaffold derived from an endogenous miRNA with abundant expression in the target cell type, results in a high silencing efficiency [31]. Therefore, we opted to integrate the siRNA sequences in human miR30 loop and flanking regions, since the presence of the endogenous human miR30 in human platelets has been confirmed using microarray technology [33]. We used a miRNA vector where miRNA sequences could be inserted in the 3' or 5' of a reporter gene (eGFP), however observed that no GFP fluorescence could be detected in either CHO GPIb-IX or DAMI cells transfected with miGPIBA constructs regardless of the 3' or 5' position of the reporter gene ( Figure A in S2 File). This is in agreement with the literature [25,26] where the hairpin loop structure of miR-NAs reduces the translation efficiency of the reporter gene. Indeed, since the initial phase of our study, this loss of reporter expression has been circumvented by the addition of a chimeric intron into the construct [24,25,31]. In addition, other scaffold for miRNA constructs such as miR155, miR223 or miR451 have been used as well, the latter showing improved Ago2 specificity and reducing risk of overwhelming the endogenous miRNA machinery [24,31,34]. 
Although in all single miRNA constructs tested the observed knockdown was much weaker compared to the corresponding siRNAs, we could reach similar efficacies by inserting additional miRNA hairpins, thereby providing a certain level of control over the knockdown obtained (Figs 1 and 2). This can be particularly useful when dealing with a highly expressed target protein, such as the GPIb-V-IX complex in MK and platelets, where silencing using a single miRNA might be ineffective because of the so-called dilution effect [35]. In order to increase GPIbα knockdown, we initially inserted two identical miRNA hairpins in one plasmid, a strategy successfully adopted in other cell types [26,31,36]. While no further reduction in GPIbα expression could be achieved using two identical miRNA sequences, a combination of two different miRNAs (pCMV-miGPIBA-2+3-eGFP) did result in increased GPIbα silencing (Fig 2). These results partially confirm those of Amendola et al. who showed that a tandem configuration of two completely identical miRNAs is unstable, in contrast to a configuration of two different miRNAs [31]. However, in our case both miRNAs have an identical human miR30 loop and flanking sequences and only differ in their targeting sequence, suggesting that different targeting sequences might be sufficient to circumvent structural tandem (artificial) miRNA instability. In order to validate miRNA technology for functional studies, we sought to mimic the effects of GPIbα deficiency on platelet aggregation by performing a CHO GPIb-IX cell aggregation assay which is dependent on the interaction between GPIbα and its main ligand VWF [18,28]. Transfection of CHO GPIb-IX cells with pCMV-miGPIBA-2+3-eGFP significantly reduced the number of aggregates formed (Fig 3E), as well as the aggregate size (Fig 3F), illustrating that miRNA technology can be successfully used for functional studies at the receptorligand level. To further demonstrate the potential of the miRNA-based approach for gene silencing during megakaryopoiesis, the miGPIBA-2+3 cassettes were introduced into megakaryoblastic DAMI cells to evaluate the effects of downregulating GPIbα on actin reorganisation during PMA-induced differentiation. When GPIbα upregulation was blocked in differentiating DAMI cells, we observed a different actin distribution with an elongated cell morphology represented by a significantly reduced cell aspect ratio compared to mock-and eGFP-transfected control conditions. This is in line with the role of GPIbα in actin reorganisation via the binding of its intracellular tail domain to actin-binding proteins such as Filamin [37,38]. Indeed, both mice lacking Filamin A or GPIbα are unable to anchor the plasma membrane to the cytoskeleton and have enlarged platelets [39][40][41]. In conclusion, we show that our miRNA-based approach is a powerful tool to successfully silence a platelet gene and can also be used for functional studies. During the preparation of this manuscript, few examples using miRNA backbone vectors have been applied to silence a platelet gene in human HSPC with success [38,42]. We therefore believe that this miRNA strategy could be of great use in the characterisation of recently discovered platelet genes with unknown function, thereby identifying potential new targets for the development of novel antithrombotics but also for other applications such as engineering HLA-universal platelets [16]. 
, pCMV-eGFP-miGPIBA-2 (hatched green), pCMV-miGPIBA-2-eGFP (green), pCMV-miGPIBA-2+2-eGFP (orange), or pCMV-miGPIBA-2+3-eGFP (purple). Note high expression for pCMV-eGFP transfected cells and loss of GFP expression for cells transfected with miRNA constructs. Statistical analysis was performed using ANOVA followed by Dunnett's post-test (**p < 0.01). (B) Transfection efficiencies assessed by % of CHO GPIb-IX cells expressing GFP ± SEM (n > 3), 48 h post transfection with pCMV-eGFP carried out in parallel with pCMV-eGFP-miGPIBA-2 (hatched green), pCMV-miGPIBA-2-eGFP (green), pCMV-miGPIBA-2+2-eGFP (orange), or pCMV-miGPIBA-2+3-eGFP (purple) transfections. Note that the transfection efficiencies are not significantly different between the groups. Statistical analysis was performed using ANOVA followed by Tukey's post-test (p > 0.05). (TIF)

S3 File. Ristocetin-induced VWF-dependent cell aggregation. CHO GPIb-IX cells were incubated with ristocetin and VWF on a rotary shaker to induce aggregate formation. (A-C) Representative pictures from 100% CHO GPIb-IX cells (A), 100% CHO β9 cells (B) and a mixture of 85% CHO GPIb-IX cells and 15% CHO β9 cells (C) are shown. Scale bar is 100 μm. Quantitative analysis was performed by measuring the number of aggregates (D) and the aggregate size (a.u.: arbitrary units) (E). Data represent mean ± SEM (n = 6). Statistical analysis was performed using the unpaired Student t test (***p < 0.001). (TIF)

(Molecular Virology and Gene Therapy, KU Leuven, Belgium) for their help with the design and cloning of the human miR30 miRNAs in the pCMV-eGFP vector.
CRISPR-based genome editing of a diurnal rodent, Nile grass rat (Arvicanthis niloticus) Background Diurnal and nocturnal mammals have evolved distinct pathways to optimize survival for their chronotype-specific lifestyles. Conventional rodent models, being nocturnal, may not sufficiently recapitulate the biology of diurnal humans in health and disease. Although diurnal rodents are potentially advantageous for translational research, until recently, they have not been genetically tractable. The present study aims to address this major limitation by developing experimental procedures necessary for genome editing in a well-established diurnal rodent model, the Nile grass rat (Arvicanthis niloticus). Results A superovulation protocol was established, which yielded nearly 30 eggs per female grass rat. Fertilized eggs were cultured in a modified rat 1-cell embryo culture medium (mR1ECM), in which grass rat embryos developed from the 1-cell stage into blastocysts. A CRISPR-based approach was then used for gene editing in vivo and in vitro, targeting Retinoic acid-induced 1 (Rai1), the causal gene for Smith-Magenis Syndrome, a neurodevelopmental disorder. The CRISPR reagents were delivered in vivo by electroporation using an improved Genome-editing via Oviductal Nucleic Acids Delivery (i-GONAD) method. The in vivo approach produced several edited founder grass rats with Rai1 null mutations, which showed stable transmission of the targeted allele to the next generation. CRISPR reagents were also microinjected into 2-cell embryos in vitro. Large deletion of the Rai1 gene was confirmed in 70% of the embryos injected, demonstrating high-efficiency genome editing in vitro. Conclusion We have established a set of methods that enabled the first successful CRISPR-based genome editing in Nile grass rats. The methods developed will guide future genome editing of this and other diurnal rodent species, which will promote greater utility of these models in basic and translational research. Supplementary Information The online version contains supplementary material available at 10.1186/s12915-024-01943-9. 
Background
Model organisms are essential for biomedical research in understanding physiology and pathology relevant to human health and disease. Commonly used animal models in biomedical research, including laboratory mice or rats, are nocturnal (night-active), while humans are diurnal (day-active). Diurnal and nocturnal mammals have acquired different adaptations through the evolution of numerous pathways to optimize survival for a day- or night-active lifestyle [1]. An internal time-keeping system, namely the circadian clock system, has evolved to predict and prepare animals for the daily fluctuations in their environment. Circadian systems coordinate the temporal organizations of molecular, cellular, and physiological processes across the body to ensure the functions of cells, tissues, and organs are synchronized with the environmental day-night cycle [2]. In mammals, this system is organized in a hierarchical manner, with the principal brain clock within the hypothalamic suprachiasmatic nucleus (SCN) coordinating the circadian rhythms of subordinate clocks in other brain regions and in peripheral tissues and organs [3]. The expression of core clock genes within the SCN shows the same temporal dynamics, i.e., peaking at the same time in diurnal and nocturnal animals; however, other brain regions and peripheral organs show complex differences between the two chronotypes. Large-scale transcriptomic studies revealed that the shared rhythmic genes' peak expression shifted by 6-15 h between nocturnal (mouse) and diurnal (baboon) species depending on tissue types [4,5]. Therefore, the circadian system in nocturnal and diurnal species differs in a more complex manner than a simply inverted daily pattern [1], which likely involves distinct wiring of neural circuits and gene-regulatory networks [4][5][6][7]. Furthermore, evidence suggests that experimentation during nocturnal rodents' inactive phase can be a major cause of human clinical trial failures of drug candidates proven to be effective in preclinical mouse models [8]. For these reasons, diurnal rodents are advantageous over nocturnal ones for translational research [9]. A major limitation of diurnal rodents in biomedical research is that they have not been genetically tractable. The present study aimed to overcome this barrier and develop methods for gene editing in a diurnal rodent, the Nile grass rat (Arvicanthis niloticus).
Nile grass rats, together with laboratory mice (Mus musculus) and laboratory rats (Rattus norvegicus), are members of the family Muridae [10], and these species are likely to have diverged from a common ancestor relatively recently [1]. Nile grass rats, like mice, attain reproductive maturity rapidly, have a 24-day gestation period and mate on postpartum estrus, which makes maintenance of a colony relatively simple [11]. Nile grass rats are clearly diurnal both in nature and in the laboratory, as indicated by their patterns of activity, sleep, mating behavior, body temperature, and secretion of luteinizing hormones [1]. Their retinal anatomy and retinorecipient brain regions are also typical for animals active during the daytime [12][13][14]. The Nile grass rat colony at Michigan State University was established in 1993 from a cohort of animals captured from the Maasai Mara National Reserve in Kenya [15]. The colony has been maintained since then, and animals derived from this colony have been shared with numerous research groups that investigate circadian rhythms and sleep, affective behaviors, cognitive function, immune function, metabolic syndromes, ophthalmology, and evolutionary biology. Despite being a well-established diurnal rodent model, Nile grass rats have not been genetically tractable because their complete genome sequence and an established genome editing protocol have not been available. Recently, the Vertebrate Genome Project [16,17] released the initial build of the Nile grass rat genome [18], opening up an opportunity for genome editing in this species.

In the present work, taking advantage of CRISPR-Cas9 and i-GONAD, we developed a method for genome editing of the Nile grass rat. To our knowledge, this study demonstrates genome editing of this well-established diurnal rodent model for the first time. The first targeted gene in this species is the Retinoic acid-induced 1 (Rai1) gene, whose haploinsufficiency is responsible for Smith-Magenis Syndrome (SMS), a neurodevelopmental disorder [45]. We also succeeded in several critical steps essential for gene targeting, including superovulation and embryo culture, which will allow for direct in vitro embryonic manipulation (microinjection and electroporation) of a variety of genome editing reagents beyond CRISPR-Cas9, thereby paving the way for future efforts to equip this diurnal model with a variety of molecular and genetic tools currently available for conventional laboratory mice or rats.

Superovulation of Nile grass rats
In order to produce a high number of fertilized grass rat embryos for genome editing, we attempted to establish a superovulation protocol by varying the timing of hormone treatment and egg collection as outlined below. The egg yield and fertilization rate were then compared to those from a natural mating cohort.
Timing of hormone treatment and egg collection
Due to the lack of knowledge about the reproductive biology of this species, we designed superovulation protocols based on observations from grass rat breeding and standard superovulation protocols in mouse and rat. Our colony breeding records showed that a notable number of first litters were born between 26 and 30 days after males and females were paired, indicating that day 3 and day 4 post pairing is likely the early receptive mating window. Therefore, superovulation protocols were set to administer the hormone human chorionic gonadotropin (hCG) on day 3 or 4 after priming using pregnant mare's serum gonadotrophin (PMSG). Embryo yields from females that underwent superovulation with PMSG and hCG were compared to those from unassisted natural mating. All animals were housed in a daily 12:12 h light/dark cycle, with lights on at 6:00 am. Collectively, 6 out of 8 groups (Table 1, group #3-8) of superovulated females produced 20 eggs per female on average (mean ± SEM: 20.8 ± 2.2), significantly higher than the number of eggs (5 ± 0.9) produced by natural mating (t-test, t31 = 4.88, p < 0.001). In those 6 groups (#3-8), PMSG was administered between 6:00 am and 11:00 am (day 1), hCG was administered 48 to 57 h later between 2:00 pm and 4:00 pm (day 3), and eggs were collected on day 4 at 9:00 am or 5:00 pm (19 to 27 h post-hCG, group #4, 5, 8) to time the embryo development at the pronuclear stage, or on day 5 at 9:00 am, 10:00 am, or 2:00 pm (40 to 51 h post-hCG, group #3, 6, 7) with the aim to obtain 2-cell staged embryos. A shorter PMSG-hCG interval (36 to 48 h) and an earlier egg collection (on day 3) was tested in groups #1 and #2, which resulted in a lower yield of eggs (4.6 ± 1.1), comparable to that from the natural mating group (t-test, t18 = 0.32, p = 0.75). The dosage of PMSG and hCG was kept constant (15 IU; Table 1). In summary, superovulation of grass rat females can be achieved, and the current protocol is sufficient to produce a large number of oocytes.

Fertilization rate
Although the number of eggs produced in the hormone-treated groups was significantly higher than in the natural mating group, the rate of females that underwent copulation was unexpectedly low. Only 7 out of the 22 females were sperm positive in the superovulation group, while 11 out of 17 females in the natural mating group were sperm positive as determined by a vaginal plug or smear (Additional file 1: Fig. S1). Thus, the fertilization rate in the hormone-treated groups was significantly lower than in the natural mating group (37 ± 15.2% vs 89.5 ± 5.8%; t-test, t14 = 3.83, p < 0.01). These results indicate that further optimization is required to outperform the natural mating procedure in producing zygotes or embryos for in vitro genome editing manipulation.

Male presence during superovulation
To facilitate the receptivity of females after superovulation, males were introduced into female cages right after PMSG injection in some groups (#6, 7, 8), to allow females to become familiarized with the males. However, the timing of pairing during superovulation seemed to have no significant effect on the number of total or fertilized eggs between groups (t-test, t20 = 0.9, p = 0.38).

Early development and in vitro culture of Nile grass rat embryos
To understand the time course of early embryo development and to determine permissive conditions and timing for embryo microinjection, eggs collected from three cohorts of superovulated females were cultured in vitro.
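The egg-yield comparison reported above (superovulated vs. naturally mated females) is a standard two-sample t-test, and the same call can be reproduced with SciPy. The sketch below uses made-up per-female egg counts solely to illustrate the workflow; these are not the study's raw data.

```python
from scipy import stats

# Hypothetical per-female egg counts, for illustration only.
superovulated = [18, 25, 31, 14, 22, 27, 19, 24]
natural_mating = [4, 6, 3, 7, 5, 4, 6]

t_stat, p_value = stats.ttest_ind(superovulated, natural_mating)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```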
None of the eggs collected from group #4 (19 h post-hCG, Table 1) showed signs of fertilization at the time of harvesting; after 8 h of culture in M2 medium, several zygotes developed pronuclei, but did not develop further (Additional file 2: Fig. S2). Previous studies have reported that in suboptimal culture conditions, early embryo development of mouse, hamster, and rat arrests at the 1- or 2-cell stage, referred to as the "2-cell block" [46][47][48][49], which could be overcome with different concentrations of nutrients and culture media [50][51][52][53]. To bypass a potential 2-cell block in this species, 2-cell stage embryos (n = 11) collected on day 5 (52 h post-hCG, group #3, Table 1) were cultured in M2 medium on a 37 °C heat stage, in air, for 5 h. These culture conditions supported some embryos reaching the 4-cell stage. The embryos were then transferred either into Sydney IVF Fertilization Medium (SIFM) mouse embryo culture medium or modified rat 1-cell embryo culture medium (mR1ECM) with PVA, while a few were left in M2 medium. After 2 to 3 days of incubation at 37 °C, 5% CO2, blastocysts were observed in both SIFM and mR1ECM media, but not in M2 medium (Fig. 1). Out of 5 embryos from each group, 4 developed into blastocysts in mR1ECM, while only 1 developed into a blastocyst in SIFM; moreover, blastocysts appeared to be bigger with more cells in mR1ECM medium.

Fig. 1 Comparison of two standard mouse and rat embryo media for grass rat embryos in vitro culture. Nile grass rat 2-cell stage embryos were flushed on the morning of day 5, 52 h post hCG injection, from oviducts of sperm-positive females. Embryos were washed in M2 medium and then cultured in mouse embryo culture medium SIFM, mouse embryo handling medium M2, or rat embryo culture medium mR1ECM-PVA. During in vitro culture, embryos were cultured in 20 µL medium micro-drops under mineral oil, in a 5% CO2 incubator at 37 °C.

We further tested mR1ECM medium on 1-cell stage embryos to determine if a 2-cell block occurred. At the time of embryo collection from group #5 (27 h post-hCG, Table 1), 4 out of 13 embryos appeared to have pronuclei; then, embryos were divided into 2 groups and cultured in mR1ECM media supplemented either with PVA or BSA. After 21 h of culture, or 48 h post-hCG, 5 out of 6 embryos in mR1ECM-PVA and 5 out of 7 in mR1ECM-BSA reached the 2-cell stage. Blastocysts started to appear at 96 h post-hCG, first in mR1ECM-BSA and subsequently in mR1ECM-PVA (Fig. 2). Together, these results demonstrate that mR1ECM media can support grass rat embryos to develop into blastocysts from the 1-cell stage in vitro and bypass a potential 2-cell block. The number of pronuclei, 4 out of 13 at 27 h post-hCG, and the number of 2-cell embryos, 10 out of 13 at 48 h post-hCG, collectively indicated that most of the pronuclei developed 27 h post-hCG, which would coincide with the night of day 4 post-pairing. In natural mating experiments, embryos were observed at the pronuclear stage when they were harvested at midnight on the day of mating, then turning into 2-cell embryos the next morning. This finding suggests that the ideal time window for manipulating grass rat embryos at the pronuclear stage is likely around midnight on mating day. However, to avoid disturbing the animals in the middle of their inactive/sleep phase, in subsequent in vitro experiments, microinjection was performed the next morning into 2-cell embryos.

Fig. 2 Nile grass rat embryos in vitro culture from 1-cell stage. Grass rat eggs were flushed on the afternoon of day 4, 27 h post hCG injection, from oviducts of sperm-positive females. Eggs or zygotes were washed and then cultured in mR1ECM with PVA or with BSA, in a 5% CO2 incubator at 37 °C until harvest.
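The timings observed above (pronuclei at roughly 27 h post-hCG, 2-cell embryos by about 40-52 h, blastocysts from about 96 h in culture) can be captured in a small lookup helper for planning collection and manipulation windows. The thresholds below are taken from the observations in this section but are approximate; this is a planning sketch, not a validated staging rule.

```python
def expected_stage(hours_post_hcg):
    """Rough expected grass rat embryo stage at a given time post-hCG,
    based on the approximate collection times reported in this study."""
    if hours_post_hcg < 19:
        return "pre-fertilization / unfertilized egg likely"
    elif hours_post_hcg < 27:
        return "fertilized egg, pronuclei may not yet be visible"
    elif hours_post_hcg < 40:
        return "pronuclear-stage zygote (1-cell)"
    elif hours_post_hcg < 96:
        return "2-cell and later cleavage stages"
    else:
        return "morula/blastocyst stage expected in culture"

for t in (19, 27, 48, 96):
    print(t, "h post-hCG:", expected_stage(t))
```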
Rai1 gRNA validation and generation of Rai1 KO Nile grass rat via i-GONAD
Gurumurthy et al. developed GONAD and improved-GONAD (i-GONAD), in which CRISPR components are injected into the oviduct harboring fertilized eggs, followed by electroporation allowing the delivery of CRISPR reagents into zygotes. i-GONAD does not require either embryo manipulation in vitro or surrogate pseudopregnant females [39,40]. The lack of established methods for production of pseudopregnant grass rat surrogates pointed to i-GONAD as a viable approach to generate genome-edited Nile grass rats.

The gene we targeted was Rai1, encoding a histone-binding protein. Rai1 haploinsufficiency is responsible for Smith-Magenis Syndrome (SMS), a rare neurodevelopmental disorder characterized by obesity, autistic behavior, and circadian rhythm and sleep disturbances [45,54]. Although the obesity and some behavioral traits have been recapitulated in Rai1-knockout mice, Rai1+/− mice "clearly differ from SMS patients" regarding their sleep and circadian rhythms [55]. Contrary to the daytime sleepiness seen in SMS patients, the total time spent awake in Rai1+/− mice was comparable to wild type (WT) during their active phase. On the other hand, SMS patients also experience frequent night awakening, while Rai1+/− mice slept significantly more than WT during their resting phase. These observations raise the possibility that the inverted chronotype contributes to the lack of sleep phenotypes in the SMS mouse model. Thus, Rai1 is an ideal gene to test the utility of Nile grass rats for human disease modeling.

Two guide RNAs (gRNAs), g169 and g170, were designed to delete 2035 bp of exon 3, encoding most of the Rai1 protein (Fig. 3A). In vivo targeting efficiency of the gRNAs was validated in a female from natural mating that underwent i-GONAD. Within an hour following the i-GONAD procedure, three zygotes were retrieved from the oviduct and cultured in an incubator to the morula/blastocyst stage. After 4 days of culture in mR1ECM-PVA medium, embryos were collected and lysed individually for analysis by PCR, amplifying both long-range and short-range amplicons of the target sites for g169 and g170. PCR analysis revealed that 1 out of the 3 embryos carried a large deletion between the cut sites of the two gRNAs (Fig. 3B, C). Subsequent sequencing data revealed that the embryo also carried an allele with indels at the cut sites of both g169 and g170, while another embryo carried a 6 bp mutation around g169 (Fig. 3D, E), indicating that 2 out of 3 embryos were successfully edited.

Once the editing efficiency of g169 and g170 was confirmed, multiple cohorts of female grass rats, either superovulated with PMSG/hCG or following natural mating, underwent i-GONAD in attempts to generate Rai1 knockout (KO) grass rats (Table 2). For the hormone-treated animals, PMSG was administered on day 1 at 6 am, hCG on day 3 at 2:00 pm, and i-GONAD was performed on day 4 at ~5:00 pm following the confirmation of successful mating in the morning. For natural mating pairs, a vaginal smear was checked to confirm sperm presence in the morning of day 4 to day 5 post-pairing, and i-GONAD was performed in the afternoon.
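Genotyping a two-gRNA deletion of this kind is typically done by predicting the product sizes of a primer pair spanning both cut sites: the wild-type allele yields the full-length amplicon, and the deletion allele yields an amplicon shortened by the excised fragment. The sketch below illustrates that arithmetic; the primer coordinates are invented for illustration and are not the primers used in this study, although the 2035 bp spacing mirrors the reported g169/g170 deletion size.

```python
def expected_amplicons(primer_f_start, primer_r_end, cut_site_1, cut_site_2):
    """Predict wild-type and deletion-allele amplicon sizes for a primer pair
    flanking two CRISPR cut sites (all positions on the same reference strand)."""
    wild_type = primer_r_end - primer_f_start
    deletion = wild_type - (cut_site_2 - cut_site_1)
    return wild_type, deletion

# Hypothetical coordinates (bp) on a local reference around Rai1 exon 3.
wt, dele = expected_amplicons(primer_f_start=100, primer_r_end=2900,
                              cut_site_1=450, cut_site_2=2485)
print(f"WT band ~{wt} bp, deletion band ~{dele} bp")  # ~2800 bp vs ~765 bp
```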
Initially, females were singly housed after i-GONAD, but neither the PMSG/hCG-primed females (n = 10) nor those from natural mating pairs (n = 8) gave birth to any live pups (Table 2, group #1 & #2). Since previous studies of hamsters and voles have suggested that male proximity contributes to pregnancy success [56][57][58][59], in subsequent cohorts the male was not removed from the cage after the i-GONAD procedure and was co-housed with the female for at least 10 days to facilitate pregnancy maintenance. In the cohort of hormone-treated females (Table 2, group #3), 2 out of 13 were sperm positive as detected by vaginal smear and underwent i-GONAD. One of them produced a litter of 5 offspring. Although the litter died a few days postnatally, one of the pups was

From the first several litters, only 2 live pups out of 43 born carried Rai1 large deletions (Fig. 4A). Subsequently, four females underwent i-GONAD and produced 2 litters of 9 pups in total (Table 2, group #6), of which 4 pups carried multiple large deletions (Fig. 4B). Sanger sequencing revealed various deletions across litters (Fig. 4B). The deletion events appeared to occur mostly heterozygously or exhibit mosaicism, except for animals iG5-iG8, which did not show WT bands (Fig. 4A, B). We reasoned that the absence of a WT band may be due to inefficient PCR amplification of the larger WT amplicon, because another primer set amplifying a smaller WT amplicon provided signal from animals iG6 and iG7 (Fig. 4C, lower panel). In sum, analysis of founder (G0) offspring revealed successfully edited Rai1 KO animals following i-GONAD delivery of CRISPR reagents.

After breeding with WT animals, Rai1 KO founders iG6 and iG7 successfully transmitted the edited Rai1 allele to their G1 offspring (Fig. 4C). Founder iG6 transmitted 2 in-frame deletions to G1: a 2190-bp deletion removing amino acids G9-P738 (Fig. 4D), and a 741-bp deletion of amino acids E466-P712. In a subsequent litter from iG6, a frameshift deletion of 962 bp was also transmitted to G1 (Additional file 3: Fig. S3). Founder iG7 transmitted 2 frameshift deletions, of 874 bp (Fig. 4E) and 665 bp, detected in G1 animals iG7-1B and iG7-1C (Fig. 4F). Multiple bands were detected in individual founders, indicative of mosaicism (the presence of multiple alleles in the same animal), whereas only single altered DNA species were detected in G1 animals (Fig. 4C, D). These results demonstrate successful gene targeting of Rai1 in Nile grass rats and stable transmission of the targeted allele to the next generation.

In vitro Nile grass rat embryo microinjection
i-GONAD is suitable for delivery of CRISPR RNPs, mRNA, gRNA, and single-stranded oligodeoxynucleotides (ssODN). However, the delivery of large DNA molecules harboring transgenes requires microinjection directly into the zygote, despite emerging reports about introducing large DNAs in vivo by adeno-associated virus (AAV) [60,61]. Thus, delivering genetic material via microinjection is a key step towards sophisticated genetic manipulations such as conditional gene targeting. To avoid disturbing animals at midnight, when embryos are at the pronuclear stage, we tested microinjection of Rai1 CRISPR reagents into 2-cell embryos during the daytime.
Seven 2-cell and three 4-cell stage embryos were collected on day 5 following natural mating. HEPES-buffered mR1ECM-BSA medium was used for embryo collection and microinjection of ribonucleoprotein (RNP), while mR1ECM-PVA was used for embryo culture post-microinjection. During the microinjection process, none of the embryos showed any sign of cytotoxicity or morphological changes. Following 3 days of culture in mR1ECM-PVA medium, 9 out of 10 embryos developed into 5 blastocysts and 4 morulae. PCR and sequencing of the amplicon spanning the region between the two Rai1 gRNAs revealed that 7 out of 9 embryos carried large deletions of the Rai1 gene (Fig. 5). Therefore, CRISPR RNP microinjection into grass rat embryos can result in high-efficiency genome editing in vitro.

Discussion
CRISPR-Cas-based genome editing is not only used routinely in creating standard laboratory rodent models like mice and rats, but has also been used in engineering of non-conventional rodent models, including prairie voles [62,63] and hamsters [64,65], as well as livestock [30,31,34,66]. However, genome editing has faced some unique challenges in diurnal rodents. Previous attempts to generate a germline transgenic line using a closely related diurnal rodent, the Sudanian grass rat (Arvicanthis ansorgei), reported repeated failures, likely due to "the lack of knowledge of experimental procedures suitable for creating transgenic diurnal rodents" [67]. The present study serves as the first step toward developing the diurnal Nile grass rat as a genetically tractable model for translational research. Through this effort, we have achieved a few major milestones, from establishing conditions for superovulation, fertilization, embryo culture, and manipulation, to successfully producing founder animals carrying targeted deletions that were then transmitted to G1 offspring. An effective protocol to generate enough embryos is critical for successful genome editing. Extensive reproductive biology research has established superovulation, in vitro culture, and in vivo fertilization protocols in rodent species including mice, rats, hamsters [68], and prairie voles [69], laying the foundation for successful genome editing in those species, but such reproductive and early embryo development studies are lacking for the Nile grass rat or other diurnal rodents. The results from the current study contribute a working protocol that can effectively produce a large number of eggs in grass rats. Although the rate of females showing signs of successful mating was lower in the superovulated group than in the natural mating group, the ability to produce more eggs will be useful for in vitro fertilization approaches, which are advantageous for reducing the number of egg donors to be euthanized while obtaining a large number of embryos [70,71]. Thus, this technique could be used to assist future genome editing in grass rats or other diurnal rodents.
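The in vitro editing efficiency reported above (7 of 9 injected embryos carrying large deletions) is a small-sample proportion, so a confidence interval is often attached when quoting such a rate. The sketch below does that with a Wilson interval from statsmodels, purely as an illustration of how the estimate could be quantified; the interval itself is not reported in the study.

```python
from statsmodels.stats.proportion import proportion_confint

edited, injected = 7, 9  # embryos with large Rai1 deletions / embryos analyzed
low, high = proportion_confint(edited, injected, alpha=0.05, method="wilson")
print(f"editing efficiency {edited/injected:.0%} (95% CI {low:.0%}-{high:.0%})")
```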
Conditions that support early embryo development in vitro enable embryo manipulation required for delivery of genome editing reagents, such as microinjection and electroporation, as well as the study of early embryogenesis. Through our superovulation studies, we were able to get a glimpse of the early embryo development timeline in the grass rats. Currently, there are no reports of in vitro handling of grass rat embryos or the timing and staging of early grass rat development. Based on the stage of embryos collected at different intervals from 19 to 52 h post hCG injection, the time course of early grass rat development could be mapped out as follows: fertilization completes < 19 h; pronuclei form ≥ 27 h; then 2-cell embryos form > 40 h. We found that both M2 and mR1ECM-HEPES could be used as short-term embryo handling media, while mR1ECM-PVA and mR1ECM-BSA both support grass rat embryos to develop from the pronuclear to blastocyst stage in culture in vitro. These results suggest that media optimized for rats might be suitable for the Nile grass rat for further studies such as in vitro fertilization or embryo or sperm cryopreservation.

Understanding the timeline of natural mating, from breeding pair setup, vaginal plug and sperm detection to embryo harvesting and early embryo development, ultimately enabled us to perform genome editing in this species without superovulation. Furthermore, our initial attempts of i-GONAD with 18 females, which failed to produce any pups, led us to discover another unique feature of the reproductive success in this species: the requirement of male presence in order to carry pregnancies to term. While it is standard practice to single house mice or rats following embryo implantation or i-GONAD procedures, male proximity appears to be critical for successful pregnancy in grass rats. Similarly, it has been reported that for prairie voles continued male presence facilitates pregnancy maintenance [72]. Hence, the final piece of the puzzle was in place for targeting grass rats, with the outcome of 5 founders surviving to adulthood. Two of the Rai1 knockout founders transmitted their deletion to G1 offspring after backcrossing with wild-type animals, demonstrating that the targeted alleles could be established as stable genome-edited grass rat lines for future functional studies.

To facilitate future genome targeting in this species, we propose a scheme for Nile grass rat genome targeting, either through natural mating or via superovulation (Fig. 6).
Fig. 6 A proposed scheme for Nile grass rat genome targeting. Females ranging from 3 to 11 months old can be used as egg donors or embryo transfer recipients, while proven breeder males are needed for mating. Procedure for natural mating: (1) The day that males are paired with females is defined as day 1 (D1). (2) Vaginal smear cytology is assessed in the late afternoon of day 4 (D4) to determine if i-GONAD could be performed. At that timepoint, zygotes were found to be either at or prior to the early pronuclear stage, so in vitro manipulation on the afternoon of day 4 is not recommended. (3) Vaginal smear cytology is assessed on the morning of day 5 (D5) to determine if embryo manipulation can be performed on that day, including i-GONAD, in vitro embryo electroporation, or microinjection followed by embryo transfer into surrogate females, or embryo culture in vitro for gRNA validation. (4) Females are co-housed with males at least until day 10 (D10). (5) Females are monitored for signs of labor starting from day 26 through day 30. (6) Any pups born are genotyped upon weaning.

Briefly, if the day of pairing females with males in natural mating, or the day of PMSG administration in superovulation, is defined as day 1 (D1), females with a sperm-positive vaginal smear on day 4 (D4) will be suitable for embryo targeting on day 5 (D5). Late evening of D4 or the daytime of D5 is the embryo manipulation window for pronuclear or 2-cell staged embryos. Co-housing a female that underwent targeting with a male until at least day 10 (D10) is critical for the maintenance of a successful pregnancy. It should be noted that in the present study the i-GONAD method was used for generating KO grass rats; CRISPR-mediated knock-in (KI) via i-GONAD has not been tested in grass rats. DNA template delivery is a critical step for generating KIs. While it is possible to deliver a short DNA template via i-GONAD, the longer DNA templates required for larger KIs will likely need to be delivered by other approaches, such as microinjection or AAV-mediated DNA delivery. The present work has identified the developmental time window from pronuclear to 2-cell staged embryos in grass rats. If the G2 phase is longer in 2-cell stage embryos than in 1-cell stage embryos in grass rats, as it is in mice, 2-cell microinjection will be suitable for KI targeting in this species. The high editing efficiency of 2-cell embryos microinjected with Rai1 RNPs is encouraging in that this method could be used to generate KI Nile grass rats in our future work.

Conclusions

In the present study, fundamental steps were taken towards creating genome-edited diurnal rodent models. The newly created Rai1 KO Nile grass rat line generated using i-GONAD is a unique model for understanding the role of Rai1 in the neurodevelopmental disorder SMS. More importantly, the high targeting rate of 2-cell embryo microinjection demonstrated its potential for other forms of gene editing, including the generation of point mutations, knocking in epitope tags and larger insertions, and creating conditional alleles with the Cre-loxP system. We hope this method will help guide future development of genetically modified grass rats and other diurnal rodents, which will promote greater utility of these models in basic and translational research.
Animals

Adult male and female Nile grass rats were obtained from an in-house laboratory colony at Michigan State University [15]. The colony was maintained in a standard animal housing room under a 12:12 h light/dark (LD) cycle, with lights on at 6:00 and lights off at 18:00. A metal hut was provided in each cage for shelter and enrichment. Food (Prolab 2000 #5P06, PMI Nutrition LLC, MO, USA) and water were available ad libitum. All procedures were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publication No. 80-23) and were approved by the Institutional Animal Care and Use Committee of Michigan State University.

Superovulation and natural mating

As a species of induced ovulators, female grass rats only start the ovulatory cycle after being co-housed with a male. Animals reach reproductive maturity around 2 months of age and can reproduce up to 16 months of age, based on our observations of colony animals. Young to middle-aged adults (3- to 7-month-old) were used in this experiment based on the availability of animals in the colony. For superovulation experiments, singly housed females were first injected with PMSG (15 IU or 20 IU, BioVendor) in the morning (times of injection and dosages are listed in Table 2), followed by hCG (CHORULON®, MERCK) at various intervals ranging from 36 to 57 h, at the same dosage as PMSG. A male was introduced to each hormone-treated female to allow mating, either following hCG administration or immediately after injection of PMSG.

For natural mating, females were paired with males at a ratio of 1:1. In the initial experiments, a vaginal smear from each female under mating was checked daily until a plug or sperm was found. In later experiments, vaginal smears and sperm presence were checked from day 4 to day 5 post pairing.

Embryo collection and culture

Females that had successfully mated, as confirmed by a vaginal plug or a sperm-positive smear, were euthanized with sodium pentobarbital (i.p. 150 mg/kg). Bilateral oviducts were dissected out and placed in HEPES-buffered embryo culture medium either 27 or 56 h post hCG injection. Embryos were released either by tearing the ampulla on the day of plug, or by oviduct flushing the next day after a plug or the presence of sperm was detected. After washing in HEPES-buffered M2 medium (Sigma) or mR1ECM-BSA (CytoSpring LLC), embryos were cultured in either M2 medium (Sigma), mouse embryo culture medium SIFM (COOK Medical), or rat embryo culture media mR1ECM supplemented with PVA or BSA (CytoSpring LLC).

In vivo genome editing using i-GONAD

The i-GONAD procedure was performed as described previously [41]. Briefly, in the afternoon of the day that a plug or sperm was found, females were placed under isoflurane anesthesia and the oviducts were exposed as in a standard embryo transfer procedure. After the RNP mixture was delivered into the oviducts with a glass pipette, 4 pulses of 50 V were delivered using a pair of disk electrodes connected to the electroporator, Genome Editor (BEX CO., LTD). After the i-GONAD surgeries were completed, females were placed back into their home cage, with or without a male, and were monitored daily.
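As a practical aid to the mating and targeting timeline described above (and summarized in the scheme of Fig. 6), the short helper below lists the key checkpoints relative to day 1. It is an illustrative sketch added for this rewrite, not part of the original protocol, and the dates it prints are purely hypothetical.

```python
# Illustrative scheduling helper for the grass rat targeting timeline (D1 = pairing
# day for natural mating, or PMSG injection day for superovulation).
from datetime import date, timedelta

def targeting_schedule(day1: date) -> dict:
    return {
        "D4 vaginal smear (late afternoon)": day1 + timedelta(days=3),
        "D5 smear / embryo manipulation window": day1 + timedelta(days=4),
        "co-house with male until at least D10": day1 + timedelta(days=9),
        "monitor for labor (D26-D30)": (day1 + timedelta(days=25),
                                        day1 + timedelta(days=29)),
    }

# example with a hypothetical pairing date
for step, when in targeting_schedule(date(2024, 1, 1)).items():
    print(step, "->", when)
```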
In vitro genome editing

Genome editing was conducted in vitro via microinjection. RNPs were diluted with 10 mM Tris pH 7.5 buffer to a final concentration of 50 ng/µL per RNP. Embryo donor females were euthanized the morning after vaginal sperm presence was confirmed. Embryos were collected and placed in HEPES-buffered mR1ECM-BSA culture medium, and 2-cell nuclear microinjection was performed using CELLectro (a gift from Dr. Leyi Li, Cold Spring Harbor Lab), as previously described [73,74]. Microinjected embryos were cultured in an incubator (5% CO2; 37 °C) until later stages. Morulae or blastocysts were collected and genotyped individually using PCR to confirm CRISPR editing efficiency in vitro.

PCR genotyping

A small number of targeted embryos were genotyped to evaluate the editing efficiency of the gRNAs. Briefly, embryos cultured in vitro were harvested when they were at the blastocyst or morula stage, after 4 days in culture. Each embryo was placed individually into a PCR tube containing 10 µL of tail lysis buffer (0.1 mg/mL Proteinase K in 0.5% Triton X-100, 10 mM Tris pH 8.5). Before they could be used as genomic DNA templates in PCR reactions, the embryo lysates went through two steps: digestion at 56 °C for at least 1 h and heat treatment at 85 °C for 15 min.

Pups born from females that went through i-GONAD or embryo transfer were genotyped using standard procedures. In brief, small ear biopsies were lysed at 56 °C overnight in the tail lysis buffer described above, then heat treated at 85 °C for 15 min before PCR. Sanger sequencing (Azenta US Inc. and Quintara Biosciences) was performed on purified PCR amplicon DNA. Primers used for sequencing and genotyping are listed in Table 3.

Fig. 3 Targeting of Rai1 in the Nile grass rat. A A locus map denoting the location of gRNA cutting sites (dashed lines; PAM underlined) and primers (black arrows) relative to exons of the Nile grass rat Rai1 gene. The locations of two predicted protein-interacting domains, a nucleosome-binding domain (NBD) and an extended plant homeo-domain (ePHD), are mapped to the corresponding coding region. B PCR of the 2 target regions for blastocysts that underwent i-GONAD: no obvious difference between targeted samples and wild-type. C Long-range PCR spanning both target regions demonstrates that 1 out of 3 blastocysts carries a large deletion. D, E Alignments with the reference genome demonstrate the presence of indel mutations around the g169 and g170 target sites, not detectable by molecular weight differences in the PCRs shown in B.

Fig. 4 Generation of Rai1-edited Nile grass rat founders and G1 offspring via i-GONAD. A Gel image of long-range PCRs of 13 pups (P1-P13) from the first 4 i-GONAD litters. Founder P12 (iG1) carries a large deletion. B Gel image of long-range PCRs from a representative litter (pups P36-P42), which produced 4 large deletions out of 7 pups. C PCRs demonstrating that multiple deletions from 2 founders, iG6 and iG7, were transmitted to G1 offspring. D Sanger sequencing chromatogram of G1 animal iG6-1C from founder iG6 shows a 2190-bp deletion. E Sanger sequencing chromatogram of iG7-1B, one of the G1 offspring from founder iG7, shows transmission of an 874-bp deletion. F Summary of transmitted deletion alleles from founders iG6 and iG7. *This G1 animal was from a second litter of founder iG6 shown in Additional file 3: Fig. S3
Fig. 5 Targeting efficiency of Rai1 gRNAs by 2-cell microinjection of in vitro cultured embryos. Gel image showing that 7 out of 9 embryos carry deletions, with bands at lower molecular weights than WT.

Table 1 Egg and embryo yields from superovulated and natural mating females. PMSG/hCG were administered at 15 IU in all superovulation groups except group #6, which received 20 IU. a Fertilization rates were determined as the number of fertilized eggs divided by the total number of eggs from sperm-positive females for each group.

Table 2 Outcome of Nile grass rat targeting and litters following i-GONAD.

Table 3 Primers for Rai1 genotyping
7,998.2
2024-07-02T00:00:00.000
[ "Biology", "Environmental Science" ]
Femtosecond two-photon laser-induced fluorescence imaging of atomic hydrogen in a laminar methane-air flame assisted by nanosecond repetitively pulsed discharges

Sustainable and low-emission combustion is in need of novel schemes to enhance combustion efficiency and control, in order to meet new emission standards and to cope with the varying quality of renewable fuels. Plasma actuation is a promising candidate to achieve this goal, but few detailed experiments have been carried out that target how specific combustion- and plasma-related species are affected by the coupling of plasma and combustion chemistry. Atomic hydrogen is one such species; here it is imaged using the two-photon absorption laser-induced fluorescence (TALIF) technique as an atmospheric-pressure methane-air flame is actuated by nanosecond repetitively pulsed (NRP) discharges. Atomic hydrogen is observed both in the flame and in the discharge channel, and plasma actuation results in a substantial modification of the flame shape. A local 50% increase of fluorescence occurs at the flame front where it is crossed by the discharge. Atomic hydrogen in the discharge channel in the fresh gases is found to decay with a time constant of about 2.4 μs. These results provide new insights into the plasma-flame interaction at atmospheric pressure that can be further used for cross-validation of numerical calculations.

Introduction

In the last decades, non-equilibrium plasma produced by nanosecond repetitively pulsed (NRP) discharges has shown promising ability for combustion enhancement [1,2]. For example, significant reduction of the ignition delay time [3-5] and the lean-flammability limit [6,7], as well as the control of flame dynamics [8], can be achieved with the aid of NRP discharges. Their efficiency is attributed to a coupled thermal and chemical activation of the reacting flows [2].

For combustion applications, atomic hydrogen (H) and oxygen (O) are two of the main active species produced in NRP discharges [2,9]. They can be produced by direct electron impact reactions (CH4 + e → CH3 + H + e and O2 + e → O + O + e), or by quenching of excited nitrogen [10-12]. The presence of significant amounts of atomic O and H accelerates the chain-branching reactions and leads to ignition, or to an increase of the flame speed, facilitating and enhancing combustion. This mainly applies to discharges developing in the fresh gases, but discharges can also be applied through the flame front [13]. In a developed flame, atomic H is then involved in two of the most important high-temperature reactions in combustion (H + O2 → OH + O and CO + OH → CO2 + H) [14]. Therefore, measurements of the spatial distribution and temporal evolution of key intermediate species such as atomic H and O are vital to the understanding of the physicochemical mechanisms of the NRP discharge action on the flame, and provide data for validating numerical models.

The two-photon absorption laser induced fluorescence (TALIF) technique for atomic species concentration measurements was developed in the '80s [15] and over time several multiphoton excitation schemes have been explored for atomic H [16,17]. The most commonly used is from the ground state, i.e.
n = 1, to the n = 3 state via two-photon excitation, with subsequent de-excitation to the n = 2 state, resulting in a photon emitted at 656.3 nm, i.e. the Balmer-α line. This detection scheme is also employed in the current work. Compared to nanosecond lasers, pico- and femtosecond lasers have shown clear advantages for TALIF due to their high peak power and modest pulse energy, which allow efficient multi-photon excitation with minimal photolytic interference [18,19]. In addition, for atmospheric-pressure quantitative measurements, ps or fs pulse durations are needed to measure the fast atomic H fluorescence decay, which in a flame environment can be as short as 60 ps [20], and therefore to have access to the quenching rate. One may think that the large bandwidth of fs pulses would be detrimental to the excitation efficiency, which is the case for single-photon excitation; but for two-photon excitation, efficient excitation is still obtained since a large number of photon pairs match the two-photon resonance [21]. Most flame investigations with TALIF have been carried out at low pressures, which is favorable when applying this technique because of thicker reaction zones and a weaker quenching effect [22]. However, the experiments in the current study are conducted at atmospheric pressure.

This technique has been applied to fields where atomic hydrogen is relevant, such as combustion and plasma diagnostics. It has proved to perform well for measurements and imaging of atomic hydrogen in thermal [23] and non-thermal plasmas, such as in RF discharges [24,25] and low-pressure nanosecond discharges for different gas mixtures [26,27]. Atomic hydrogen has been studied in several sub-atmospheric [22] and atmospheric flames, mostly involving H2 and CH4. Significant effort was spent on evaluating photolytic interference effects [18,28] and on quantitative measurements [20,29,30], which in most cases were limited to point measurements. A recent work [21] studied H production in a nanosecond discharge occurring in the burnt post-flame gases of a Hencken flame. However, the authors have not found any experimental investigations of H atoms in a flame assisted by NRP discharges.

The objective of this paper is to demonstrate fs-TALIF imaging of hydrogen in a laminar methane-air plasma-assisted flame. The results we present consist of time-resolved 2D images of atomic H fluorescence, used to assess the plasma effect on the local atomic H densities. Here, a comprehensive strategy to achieve reliable fluorescence imaging, from the measurement procedures to the data post-processing, is demonstrated in detail. We observed that the discharge-induced plasma significantly modifies the spatial distribution of H-atom fluorescence in the flame and enhances its yield as well. We also investigated its temporal dynamics during a discharge cycle, which suggests that the steady-state plasma-assisted-flame system does not respond sensitively to single discharge pulse forcing under these experimental conditions. This paper is organized as follows: in section 2, we describe our experimental apparatus, including the burner designed for plasma-assisted experiments, the femtosecond laser system, and the TALIF detection setup. Since raw TALIF images need to be corrected, we dedicate section 3 to the experimental procedure and the processing methodology of the raw images. In section 4, we present and discuss the results. Finally, we conclude in section 6 with a brief summary.
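As a quick consistency check of the excitation/detection scheme described above, the Rydberg formula for hydrogen reproduces both the ~205 nm two-photon excitation wavelength (two photons sharing the n = 1 → 3 energy gap) and the ~656 nm Balmer-α detection wavelength. The snippet below is an illustrative sketch added for this rewrite, not part of the original paper.

```python
# Hydrogen term scheme behind the TALIF wavelengths used in this scheme:
# two ~205 nm photons bridge the n = 1 -> 3 gap, and the n = 3 -> 2 decay
# emits the Balmer-alpha photon detected near 656 nm.
R_H = 1.09678e7  # Rydberg constant for hydrogen, m^-1

def transition_wavelength_nm(n_low: int, n_high: int) -> float:
    """Vacuum wavelength of the n_high -> n_low transition from the Rydberg formula."""
    inv_lam = R_H * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lam

one_photon_1_to_3 = transition_wavelength_nm(1, 3)   # ~102.6 nm
two_photon = 2.0 * one_photon_1_to_3                 # ~205.1 nm excitation
balmer_alpha = transition_wavelength_nm(2, 3)        # ~656.5 nm vacuum (656.3 nm in air)

print(f"excitation ~{two_photon:.1f} nm, detection ~{balmer_alpha:.1f} nm")
```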
Experimental setup

The experimental setup is shown in figure 1. It comprises a plasma-assisted combustion (PAC) burner, a fs laser system, and a detection system, which are described in detail in the following subsections.

Plasma-assisted combustion burner

The PAC burner consists of a laminar stagnation plate burner fed with a lean methane-air pre-mixture and a nitrogen co-flow, shown schematically in figure 2. The average bulk velocity of the flammable mixture is 1.2 m s−1 and the equivalence ratio is set to 0.76. For these conditions, the thermal power of the flame is about 220 W. The flame stabilizes roughly in the middle of a 10 mm gap between the nozzle and a quartz stagnation plate. Operating conditions are carefully chosen in order to have a stable flame, which is required for phase-locked imaging of the discharges.

The burner is made out of a non-conductive material (polyether ether ketone), allowing easier integration with high-voltage components. Additional details of the burner can be found in [31]. Two pin electrodes are used: one placed in the quartz plate and the other located inside the nozzle along the symmetry axis of the flame. The cathode is made out of a 1 mm diameter pure tungsten welding electrode and features a conical tip. The anode instead consists of a thin, electrochemically etched tungsten wire. Even though this wire is only 0.1 mm in diameter, it still affects the velocity profile of the flow at the outlet, preventing the flame surface from being flat (see figure 3(a)). This electrode arrangement allows the discharge to cross the flame perpendicularly and to develop in both cold and hot gases. A high-voltage nanosecond discharge generator (FPG Series, FID GmbH) is connected to the electrodes for the creation of the discharge. A BNC 575 delay generator (not shown in figure 1) is employed to generate the desired pulse repetition frequency and to synchronize the discharges with the laser system and camera. Tuning the delay between the discharge pulse and the camera acquisition allows TALIF imaging to be performed at an arbitrary time after the discharge event. We observed an overall jitter of about ±13 ns that limited our ability to investigate the hydrogen fluorescence very close to the onset of the discharge.

NRP discharges can appear and behave quite differently in different experimental conditions, where parameters such as voltage, pulse repetition frequency, voltage rise time, electrode geometry, gap spacing, gas composition in the gap, pressure, etc, are known to affect the discharge [32]. Discharges in the glow regime [33,34] are considered in the experiments discussed in this paper. This particular non-thermal discharge regime allows low power deposition, high chemical reactivity, and low gas temperature, making it interesting for applications [33].

A high-voltage probe (Tektronix P6015A) is used to measure the voltage applied to the electrodes and a Pearson current monitor 6585 is used to measure the current. Typical results are shown in figure 4. The purpose of these electrical measurements is to provide estimations of the plasma power. The instantaneous discharge power can be computed by multiplying the voltage and current traces, and the energy is obtained by integrating the power over the duration of a discharge event.
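As an illustration of this power and energy estimate (not the authors' actual processing code), the sketch below multiplies sampled voltage and current traces and integrates the product with the trapezoidal rule; the Gaussian pulse shapes are hypothetical stand-ins for digitized oscilloscope data.

```python
# Sketch: deposited energy per pulse and average plasma power from V/I traces.
import numpy as np

t = np.linspace(0.0, 200e-9, 2001)                 # time axis, s (hypothetical sampling)
v = 8.4e3 * np.exp(-((t - 20e-9) / 8e-9) ** 2)     # stand-in voltage trace, V
i = 2.0 * np.exp(-((t - 22e-9) / 8e-9) ** 2)       # stand-in current trace, A

p = v * i                                          # instantaneous power, W
# trapezoidal integration of the power over the discharge event
energy = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))   # J per pulse

prf = 10e3                                         # pulse repetition frequency, Hz
mean_power = energy * prf                          # time-averaged plasma power, W
print(f"energy per pulse ~{energy*1e6:.0f} uJ, mean plasma power ~{mean_power:.1f} W")
```

With pulse energies of order 100 μJ at a 10 kHz repetition rate, this kind of estimate gives an average power of order 1 W, in line with the 1-2 W quoted in the text.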
In the voltage trace shown in figure 4, one can recognize the 10 ns long high-voltage pulse, and observe some ringing and a significant overshoot. Interestingly, the energy transfer seems to occur during the first 140 ns after the high-voltage pulse before settling to its final value.

In this study, the chosen applied voltage is 8.4 kV and the pulse repetition frequency is set to 10 kHz. Under these conditions, typical values of the plasma power are 1-2 W, corresponding to less than 1% of the thermal power of the flame.

The PAC burner is mounted on top of a translation stage, which moves vertically (perpendicularly to the page plane in figure 1) and is synchronized with the camera, allowing fast multi-frame acquisitions of the same object at different heights for automatic flame scanning. Alternatively, the PAC burner can be replaced by a McKenna burner with a central nozzle capable of generating a narrow cone flame, suitable for calibration measurements.

Femtosecond laser system

A Ti:sapphire chirped pulse amplification (CPA) femtosecond laser system (Coherent, Hidra-50) delivers 800 nm laser pulses with a duration of 125 fs at a 10 Hz repetition rate. The laser beam then pumps a travelling-wave optical parametric amplifier (Light Conversion, HE-TOPAS-PRIME) followed by a frequency mixing apparatus (NirUVis unit), which is finally capable of providing the required 205 nm laser pulses with a pulse energy of ∼35 μJ/pulse for exciting atomic hydrogen from n = 1 to n = 3, while detection occurs at 656 nm [15]. The 205 nm laser beam, roughly 5 mm in diameter, is then focused with a cylindrical lens to form a vertical laser sheet, with the same 5 mm height and an estimated thickness of about 200 μm, right across the flame in the PAC burner.

Detection system

An intensified charge-coupled device (ICCD) camera (Princeton Instruments, PI-MAX4 1024f), fitted with a Nikon Nikkor 135 mm f/2.8 lens, is used to capture the 656 nm H-atom fluorescence. The camera is synchronized with the laser and is set up in a 90°-side configuration, perpendicular to the laser beam propagation direction. Typical camera acquisition settings are a 3 ns gate width and 150 on-chip accumulations. Suppression of the background radiation is achieved with a narrowband band-pass interference filter with a center wavelength at 655 nm (Semrock, FF01-655/15-25).

Simultaneously with the fluorescence measurements, an energy meter (Gentec, SOLO 2) placed right after the burner is used to continuously log the laser pulse energy. A spectrometer (Princeton Instruments, Acton SP2500, spectral resolution ∼0.018 nm) is used to facilitate spectral analysis of the emission signals. Figure 5(a) shows the measured fluorescence spectrum. The spectral peak is centered at 656 nm and the peak intensity is sensitive to the detuning of the excitation wavelength, confirming that the fluorescence signal comes from H atoms. Figure 5(b) shows the excitation spectrum, from which we can see that the optimal excitation wavelength for detecting H atoms is about 204.67 nm. The rather broad excitation peak is a result of the large linewidth, about 0.5 nm, of the fs excitation laser.
Considerations of possible interferences In ideal conditions the TALIF process would result in a squared dependence of the fluorescence yield on the pump laser energy.However, the laser pulse could photolytically generate additional H [35] that is then detected, leading to a higher than quadratic energy dependence.Two main precursors, H 2 O [28] and CH 3 [36,37], are both abundant in the flame.At the same time, stimulated emission could lead to a dependence index lower than quadratic [38,39].This latter possibility can be ruled out since we did not observe any signature of stimulated emission of H atoms in the forward or backward direction.Additionally, interference from O 2 photolysis, leading to O-atom production, that can deplete the H-atom population is also ruled out, given that the laser fluence used in this experiment (6 mJ cm −2 ) is more than 60 times lower than reported fluence thresholds for similar conditions [35]. In order to estimate the possible impact of photolytic interference, we measured the laser pulse energy dependence of the H fluorescence signal, as shown in figure 6.For this characterization, the PAC burner is operated without applying any discharge.Each experimental data point accumulates the fluorescence signal over 100 laser shots, while the error bar shows the laser pulse energy standard deviation over the same laser shots.A second order power function is also shown together with experimental data in figure 6, showing a good agreement.It suggests that in the range of the pump pulse energies we employed in the experiments, photolytic interference is not a significant issue. A more robust way to evaluate effect of photolytic interference would be to analyze the spatial shape of the fluorescence signal for different laser pulse energies [35,40], since even in case of severe photolytic interference sometimes the power-law may still not significantly differ from 2 [35].This more advanced validation procedure will be performed in further investigations dedicated to quantitative measurements of atomic species densities. Methodology In this section, we describe the procedure of 2D H-TALIF image acquisition and post-processing in detail.A flow chart of the procedure is shown in figure 7. The main steps can be summarized as follows: • Laser beam profile characterization by moving a Bunsen flame along the beam direction in order to scan the full area of interest; • Collection of several images by moving the PAC flame vertically in order to scan the full area of interest; • Image processing: corrections accounting for beam profile, energy fluctuations and occlusion and finally image stacking. Details of each step are described in the following subsections. Laser beam profile characterization In a real laser sheet, the laser intensity is not uniform.It has a 2D distribution, depending on the actual beam profile and how the beam is focused into a sheet.Since the fluorescence response depends on the laser intensity squared, spatial nonuniformity of laser intensity has a direct and significant influence on the fluorescence distribution, and must be accounted for. The laser intensity profile can be characterized by measuring the TALIF signal from a known spatial distribution of atomic H. 
Unfortunately, it is not possible to fill a volume with a uniform concentration of atomic H, and other techniques must be used.For example, in [41] a homogeneous concentration of krypton was employed, since krypton features similar excitation and detection scheme.The resulting TALIF map was used to normalize the measurements, correcting for the non-uniform laser intensity distribution. In the present study, a different approach is used to obtain the correcting map.The atomic H naturally present in a ∼25 mm tall CH 4 -air conical flame is used to probe the laser sheet at different locations.This flame has been chosen because the H radial profiles at slightly different heights are very similar (this assumption is acceptable if the height of the laser sheet is small compared to the height of the cone flame [18]).The laser beam crosses the flame around its middle part, generating a TALIF signal as shown in figure 8(a).Around 130 TALIF images are collected while moving the flame along the laser beam.By properly averaging all those images, an intensity map can be obtained, as in figure 8(b), that approximates the TALIF response to an uniform atomic H field. Figure 8(b) confirms that the laser intensity in the sheet is far from uniform. In the following, this correction map will be referred as laser beam profile.The characterization just described is conducted twice, before and after any set of measurements, and the average map is used for correction.It is worth noting that the laser beam profile obtained with this method is also affected by the non-uniform pixel responsivity and intensifier gain.These effects are corrected as well when a TALIF raw image is divided by the correction map. H-TALIF image acquisition and collection Several measurements are performed at different heights (z) by moving the burner vertically in order to cover the full region of interest.In these measurements, 7 different positions are considered with a ∼0.9 mm displacement between two adjacent positions.Recall that typical camera acquisition settings for each frame at a certain position are 3 ns gate width and 150 on-chip accumulations.Given that the laser repetition frequency is 10 Hz, the time required for a complete measurement is roughly 2 min.Examples of these frames at different z can be seen in figure 9(a). In this paper we present TALIF images that are collected at different times after the discharge event.We define t = 0 as the time when we observe the maximum emission from the discharge.At around 656 nm there is a broadband plasma emission [3] whose intensity, integrated over a 3 ns gate width, appears to be a couple of order of magnitude larger than the H fluorescence signal.This fact, together with the ∼ ±13 ns jitter discussed in section 2.1, prevents acquisition of meaningful data close to t = 0.The strong plasma emission quickly decays via fast collisional quenching, which takes ∼30 ns [42].After about 50 ns the background plasma emission becomes negligible and reliable data of the H fluorescence can be acquired.Measurements are repeated twice to evaluate repeatability. 
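As a rough illustration of the averaging step described above (assumed array shapes; this is not the authors' code), the sketch below builds a normalized laser-sheet correction map from a stack of calibration-flame TALIF frames; dividing a raw frame by this map also compensates pixel responsivity and intensifier gain, as noted in the text.

```python
# Sketch: laser-sheet correction map from calibration-flame TALIF frames.
import numpy as np

def beam_profile_map(scan_frames: np.ndarray) -> np.ndarray:
    """scan_frames: stack of frames recorded while translating the conical
    calibration flame along the beam, shape (n_positions, ny, nx).
    Returns a normalized map approximating the response to a uniform H field."""
    profile = scan_frames.mean(axis=0)
    return profile / profile.max()

def apply_profile_correction(raw_frame: np.ndarray, profile: np.ndarray) -> np.ndarray:
    """Divide a raw TALIF frame by the correction map, clipping near-zero values
    at the edges of the sheet to avoid dividing by almost nothing."""
    return raw_frame / np.clip(profile, 1e-3, None)

# hypothetical 130-position scan of a 256 x 256 region of interest
profile = beam_profile_map(np.random.rand(130, 256, 256))
corrected = apply_profile_correction(np.random.rand(256, 256), profile)
```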
H-TALIF image corrections

Several corrections are applied to the raw TALIF images (see figure 9(a)), as discussed in the following:

• Background correction: the background images are captured with the laser excitation wavelength tuned 3 nm off-resonance, and are subtracted from the TALIF images.

• Beam non-uniformity and camera correction: each raw image is corrected by using the laser beam profile obtained as described in section 3.1. Thanks to that, corrections for non-uniformity in the laser intensity distribution, in pixel responsivity and in intensifier gain can be accomplished.

• Collection efficiency correction: the flame stabilizes a few mm away from a large quartz plate. For this reason, part of the fluorescence emitted close to the plate will not be able to reach the camera, and because of the occlusion by the plate a lower signal intensity will be recorded. The collection efficiency is calculated on geometrical grounds. The occlusion scheme will be different for each burner position, so each raw image frame in figure 9(a) has to be multiplied by its own corresponding collection efficiency.

• Energy correction: each image is re-scaled using the average squared pulse energy during the collection of the image itself, to correct for any laser energy fluctuation. We noticed that this procedure generally leads to a slight overcorrection; however, it still helps reduce the data spread between repeated measurements.

Frame stacking and final H-TALIF image

After corrections, individual frames are merged by means of a weighted average, using the laser intensity profile described in section 3.1 as the weight. A similar approach for image stacking can be found in [21] (page 72). Finally, the processed TALIF image, after all corrections and stacking of all frames (figure 9(a)), is achieved as shown in figure 9(b). On the fresh-gases side of the discharge channel (0 ≲ z ≲ 1.8 mm in figure 11) the H fluorescence gradually decreases with increasing time delay after the discharge. Figure 12 shows the averaged H fluorescence signal in the mentioned area as a function of time. The data points can be fitted by an exponential decay function, suggesting a time constant of about 2.5 μs.

Results

Figure 13 shows the averaged fluorescence over the regions depicted in the inset picture (flame tip) as a function of the time delay in a discharge cycle. Results obtained in the base flame (no discharge) are also shown for comparison. A 40% increase of the average fluorescence signal is obtained with the discharge compared to the base flame. A similar observation can be made by analyzing figure 11 and checking the values at the flame tip location. Actual numbers in figures 11 and 13 may be slightly different due to the different locations/sizes of the boxes within which the averages are computed. The intensity plot in figure 13 suggests a slightly declining trend during the 100 μs inter-discharge period. The flame appears not to respond to the occurrence of individual discharge events, neither in terms of the flame front location, corresponding to the steepest gradient region in figure 11, nor in terms of H fluorescence during the 100 μs forcing period (figure 13).
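The decay-constant extraction mentioned above can be illustrated with a simple least-squares fit of an exponential to the averaged channel signal; the data points below are synthetic placeholders, and the snippet is an added sketch rather than the authors' analysis code.

```python
# Sketch: fit S(t) = A*exp(-t/tau) + C to the fresh-gas channel fluorescence.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    return amplitude * np.exp(-t / tau) + offset

# hypothetical (delay, signal) samples; delays in microseconds, signal in a.u.
t_us = np.array([0.15, 0.5, 1.0, 2.0, 4.0, 8.0, 20.0, 40.0])
signal = 1.0 * np.exp(-t_us / 2.5) + 0.10          # synthetic data with tau = 2.5 us

popt, _ = curve_fit(decay, t_us, signal, p0=(1.0, 1.0, 0.0))
print(f"fitted decay time ~{popt[1]:.2f} us")      # ~2.5 us, as reported in the text
```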
From TALIF to concentration Once non-uniformity effects in the laser beam are taken into account and provided that photolytic effects are negligible, collisional quenching of excited atoms is the main source for the difference between the fluorescence map and the H concentration map.The quenching rate affects the fluorescence yield [22], and it depends on the nature and the number density of the collisional partners.According to [19] the quenching rate can be up to 7 times larger in the cold gases compared to the hot flame front region, therefore different locations having the same TALIF signal level might have quite different H number density.Following [43], the H-TALIF signal can be roughly expressed as S TALIF ∝ N 0 /(Q 2 + A 2 ) where N 0 is the concentration of H atoms in the ground state, Q 2 the quenching rate and A 2 the Einstein coefficient for spontaneous emission.The quenching rate, in principle, could be calculated from the position-dependent temperature and flame composition and the species and temperature dependent quenching cross sections [18].Besides the difficulties of mapping the 2D temperature and composition, a comprehensive set of the quenching cross sections of atomic hydrogen is also not available for the current experimental conditions.Most of the available quenching rate constants [29] are provided at room temperature and it is not clear how to extrapolate them to high temperature [18,21].Therefore, quantitative analysis of H-TALIF signal is challenging and direct point measurements of quenching rates are preferred for quantitative TALIF [21].Two-dimensional quantitative measurements is in principle also possible for TALIF of H by using fluorescence lifetime imaging [44]. Nevertheless, if local conditions (such as composition and temperature) are not very different, then the fluorescence intensity can be used to infer trends of actual atomic H concentration.This allows us to estimate the relative change in H concentration in specific points.As an example, at the flame tip, we can assume that collisional quenching does not vary dramatically when we apply discharges.This is supported by the fact that NRP glow discharges do not change the flame temperature much (see for example [45]), neither the concentration of major species (see for example [42]).Then the relative change in the TALIF signal can be assumed to represent the change in H number density. 
For a methane-air flame with an equivalence ratio of 0.76 at ambient conditions, 1D simulations of a wall-stabilized flame (using Cantera and GRI-Mech 3.0) predict a peak H number density at the flame front close to 8 × 10¹⁵ cm⁻³ in the case without plasma. Based on the results presented in figure 11, a rough estimation of the H density in the discharge channel can be carried out. Using the same procedure as described in [19], taking the quenching constants and the H natural fluorescence lifetime from [21,29] and the estimated species concentrations from the 1D simulation, we found the fluorescence yield A2/(A2 + Q2) to be about 5.6 times lower in the cold gases than in the flame front. At about 150 ns after the discharge, the average fluorescence signal in the part of the discharge occurring in the fresh gases (dark blue curve, z = 1 mm) is about 40% of the peak intensity in the flame front (black curve, z = 4 mm). From those numbers, we obtain that the NRP discharges produce about 1.8 × 10¹⁶ cm⁻³ of atomic H upstream of the flame front. This local production of atomic H, comparable with the atomic H density naturally present in the flame, could explain the strong effect of the NRP discharges on the stabilization height of the flame. Assuming that the electron impact reaction (CH4 + e → CH3 + H + e) is the main source of atomic H production, this atomic H density would correspond to about 1% dissociation of CH4. Quantitative measurements are necessary to validate this simplified analysis.

Discussion of the results

A recent study on a CH4-air Bunsen flame [18] showed that the peak of fluorescence emission was located at the flame tip. That fluorescence peak was attributed to the diffusional focusing of H radicals. In our case, even without plasma actuation, we observe a significant fluorescence response in the flame tip region (figure 10(a)). Flame curvature and fast H diffusion in the hot gases could play a role there. A local increase of atomic H fluorescence at the flame tip could also be induced by a higher local flame temperature, increasing the local fluorescence signal. Higher temperatures can lead to (i) a higher atomic H production, (ii) a decrease in the overall gas density, and (iii) a change in the quenching rates. By changing the unburned-gas temperature in the 1D simulation described in the previous section, and estimating the fluorescence response as ∝ [H]·A2/(A2 + Q2), we verified that the overall effect of a temperature increase would be an increase in both the atomic H fluorescence and the number density.

Plasma forcing causes an increase in the atomic H fluorescence, and thus an increase of the concentration as well, assuming a similar quenching environment. This can be observed in the 2D fluorescence images in figure 10 as well as in figures 11 and 13. In the flame tip region, crossed by the discharge, a local increase of the H concentration of up to 50% is observed. This production of atomic H is usually referred to during analysis of the chemical impact of non-equilibrium discharges on combustion [2,9]. However, only a few studies have presented observations of atomic H production by nanosecond discharges in a combustion environment (see for example [21]), and only for point measurements. In the present study, the 2D imaging of atomic H fluorescence allows a discussion of the local effect of NRP glow discharges in the fresh gases, in the flame front and in the burnt gases.
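The channel-density estimate in the first paragraph of this subsection follows directly from the quoted ratios; the short check below simply multiplies them out (values taken from the text, the script itself is an added illustration).

```python
# Back-of-the-envelope check of the H density in the fresh-gas discharge channel.
yield_ratio = 5.6          # fluorescence yield A2/(A2+Q2): flame front vs cold gases
signal_ratio = 0.40        # cold-gas TALIF signal relative to the flame-front peak
n_H_flame_front = 8e15     # cm^-3, simulated peak H density without plasma

# the same signal with a 5.6x lower yield implies a proportionally higher density
n_H_discharge = signal_ratio * yield_ratio * n_H_flame_front
print(f"estimated H density in the discharge channel ~{n_H_discharge:.1e} cm^-3")
# -> ~1.8e16 cm^-3, the value quoted in the text
```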
Atomic H fluorescence can be detected in the discharge channel developing in the fresh gases between the flame tip and the bottom electrode, but not on the burnt-gases side of the discharge. There might be several reasons for this. The first fact to consider is that the discharge channel in the burnt gases appears wider (see figure 3(b)) compared to the one developing in the fresh gases, because of the lower density in the hot gases and possibly because the discharge does not always occur in exactly the same location. In this situation, any production of atomic H as the result of the electron impact reaction would be spread over a wider volume, making the detection of any extra H more difficult. Another factor may be the difference in the H precursors on the two sides of the discharge. On the fresh-gases side H stems from CH4, while in the burnt gases it originates from H2O. In [21], the H concentration is reported to increase by about one order of magnitude after ns-discharges applied in the burnt gases. In that case, the discharge was probably in the spark regime, causing a significant increase in temperature; about 90% of the increase in H was attributed to thermochemistry, while the remaining 10% was attributed to plasma-enhanced kinetics.

The lifetime of H in the fresh-gases discharge region appears to be around 2.5 μs. This was estimated directly from the decline of the fluorescence signal in the discharge channel in figure 12. We estimate that in 100 μs, H molecular diffusion could account for the displacement of a mere 40% of the H atoms from the laser sheet. Since the observed decay time is on the order of a few μs, the molecular diffusion process can be regarded as not dominant and is therefore neglected in this analysis. Effects related to thermal expansion have been neglected as well.

Most of the H generated in the fresh gases does not reach the flame because it is consumed earlier, within a few μs. Atomic hydrogen in post-discharge chemistry, besides recombining, may also be consumed in the production of radicals such as HO2 and OH [46,47]. This will need to be studied further.

Conclusions

In this work, 2D fs-TALIF imaging of atomic hydrogen in a lean methane-air flame crossed by NRP glow discharges has been demonstrated. A local increase of the H fluorescence of up to about 50% has been observed due to the plasma forcing, which is particularly pronounced at the flame tip. The variation of the H concentration in the flame during a discharge forcing cycle of 100 μs has been shown to be minimal, which suggests that the flame does not respond to individual discharge events. Also, H atoms produced in the discharge channel could be detected in the unburned region, and their decay time has been estimated to be on the order of 2.5 μs. Estimated peak H number densities in the flame front without plasma are about 8 × 10¹⁵ cm⁻³, while on the fresh-gas side of the discharge channel they are about 1.8 × 10¹⁶ cm⁻³. However, further investigation will be necessary to obtain a quantitative measurement of the atomic hydrogen density and its evolution in the plasma-assisted flame.

Figure 1. Schematic illustration of the experimental setup (top view).
Figure 2. Schematic illustration of the plasma-assisted combustion (PAC) burner used in this study.
Figure 3. Photos of the experimental volume captured with a DSLR camera with an exposure time of 1/80 s. (a) Shows the base flame without discharge and (b) the flame under plasma actuation (8 kV, 10 kHz).
Figure 4.
Typical voltage and current traces during a discharge event. Estimated deposited energy is also shown.
Figure 5. (a) fs-TALIF spectrum obtained with a fully open slit (3 mm) of the spectrometer, and (b) average fluorescence signal from the central flame region as a function of the excitation wavelength, obtained with the ICCD camera. These measurements were performed on the flame with no discharge.
Figure 6. Pump pulse energy dependence of the H fluorescence. The grey line represents a curve with a power index of 2. The inset shows the region of the flame considered in the analysis.
Figure 7. Procedures used for data collection and post-processing.
Figure 8. Details on the procedure for characterizing the laser sheet. (a) Example of the H fluorescence signal from the conical premixed flame used to characterize the laser beam profile. (b) Result of the laser beam profile characterization. In this specific case the focusing region was slightly to the left of the region of interest.
Figure 9. Illustration of post-processing. (a) Raw individual frames to be stacked. (b) Composed image, after all corrections and frame stacking.
Figure 10. Figure 10(a) shows an H-TALIF image of the flame when no discharge is applied (base flame). Fluorescence signal can be seen in the flame front and in the post-flame region. The flame tip is slightly brighter than any other region in the flame. Figures 10(b)-(d) show H-TALIF images of the plasma-assisted flame for 3 different time delays in the discharge cycle, respectively: 150 ns, 4 μs and 40 μs. Compared to the case of the base flame, it is obvious that the glow discharges impact the spatial distribution of atomic hydrogen, and the following features can be noticed: (1) the distribution of H is stretched in the vertical direction as a result of the change in flame shape, corresponding to figure 3(b). (2) The H fluorescence intensity increases, particularly in the tip region of the flame. (3) H fluorescence can be detected in the discharge channel between the flame tip and the bottom electrode, as can be seen in figure 10(b). It decays within a few μs, and it can barely be seen after 4 μs in figure 10(c). (4) Flame shape and tip position barely change during the forcing cycle. (5)
Figure 11. Fluorescence along the flame center-line for selected delays after the discharge. Solid lines represent the average value of two repeated measurements, while lightly colored areas represent the spread between them.
Figure 12. Atomic fluorescence on the fresh-gases side of the discharge channel for different delays. The fitted exponential decay has a characteristic time of 2.5 μs.
Figure 13. H fluorescence in the flame tip versus time after the discharge. Error bars represent the spread between two repeated measurements, while round markers represent their average value. Values are normalized with respect to the case without discharge, which is shown on the right side of the plot (the time axis has no meaning for the base flame).
7,856.8
2020-05-12T00:00:00.000
[ "Engineering", "Physics", "Chemistry", "Environmental Science" ]
Role of hypoxia and glycolysis in the development of multi-drug resistance in human tumor cells and the establishment of an orthotopic multi-drug resistant tumor model in nude mice using hypoxic pre-conditioning Background The development of multi-drug resistant (MDR) cancer is a significant challenge in the clinical treatment of recurrent disease. Hypoxia is an environmental selection pressure that contributes to the development of MDR. Many cancer cells, including MDR cells, resort to glycolysis for energy acquisition. This study aimed to explore the relationship between hypoxia, glycolysis, and MDR in a panel of human breast and ovarian cancer cells. A second aim of this study was to develop an orthotopic animal model of MDR breast cancer. Methods Nucleic and basal protein was extracted from a panel of human breast and ovarian cancer cells; MDR cells and cells pre-exposed to either normoxic or hypoxic conditions. Western blotting was used to assess the expression of MDR markers, hypoxia inducible factors, and glycolytic proteins. Tumor xenografts were established in the mammary fat pad of nu/nu mice using human breast cancer cells that were pre-exposed to either hypoxic or normoxic conditions. Immunohistochemistry was used to assess the MDR character of excised tumors. Results Hypoxia induces MDR and glycolysis in vitro, but the cellular response is cell-line specific and duration dependent. Using hypoxic, triple-negative breast cancer cells to establish 100 mm3 tumor xenografts in nude mice is a relevant model for MDR breast cancer. Conclusion Hypoxic pre-conditiong and xenografting may be used to develop a multitude of orthotopic models for MDR cancer aiding in the study and treatment of the disease. Multi-Drug Resistance in Cancer The development of multi-drug resistant (MDR) cancer is a challenge in the treatment of non-responsive, recurrent disease [1][2][3][4][5][6]. MDR refers to a state of resilience against structurally and/or functionally unrelated drugs; MDR can be intrinsic (innate) or acquired through exposure to chemotherapeutic agents [1]. The mechanisms of MDR include decreasing drug influx into a cell, increasing drug efflux out of a cell, increased DNA repair, increased drug metabolism/ detoxification, and decreased apoptosis [7]. The most characterized mechanism of MDR is increased drug efflux through transmembrane pumps [7][8][9]. Over 13 ATP-Binding Cassette (ABC) transporters have been verified to contribute to MDR; of these, P-glycoprotein (Pgp) is the most consistently over-expressed and the most studied ABC transporter involved in the development of MDR cancer [8][9][10]. Membrane-bound Pgp effluxes a broad spectrum of substrates and active efflux requires the hydrolysis of two ATP molecules [7]. A recent study evaluating the cellular onset of MDR identified Pgp over-expression as the primary mechanism of MDR before malignant transformation [6]. Pgp over-expression is associated with poor prognosis in many types of cancer [7]. Other ABC transporters that contribute to MDR include multi-drug resistance protein 1 (MRP-1, ABCC1) and breast cancer resistance protein (BCRP, ABCG2) [9][10][11][12]. Additional proteins, such as growth factor receptors, are also used as markers of MDR; for example, over-expression of epidermal growth factor receptor (EGFR) is often associated with aggressive phenotypes and is used as a MDR marker in certain types of cancer [13][14][15]. 
Hypoxia and the Tumor Microenvironment Perhaps the most significant contributor that defines the microenvironment of a tumor is the tumor vasculature [16][17][18]. The vascular network provides a tumor with oxygen and nutrients and is an avenue for the tumor to metastasize to remote sites. The importance of tumor vasculature is exploited by the plethora of anti-vasculature and anti-angiogenic cancer therapies [19,20]. Yet this vasculature is highly disorganized and constantly changing. Angiogenesis and vascular destruction are dynamic, ongoing processes; as the tumor is established new blood vessels are formed, this process continues as the tumor grows, but as the tumor propagates and expands blood vessels may be destroyed or cut off [16][17][18]. This haphazard process of neo-and de-vascularization contributes to the evolving phenotype of a tumor. A critical consequence of this fluctuation is a corresponding fluctuation in oxygen and glucose levels which results in heterogeneous states of hypoxia, anaerobic glycolysis (the Pasteur effect), and aerobic glycolysis (the Warburg effect) [17]. States of chronic hypoxia and transient hypoxia may occur and alter within the same tumor mass [21]. Chronic hypoxia occurs when a cell is beyond the diffusion limit of oxygen from a blood vessel (70-100 μm) whereas transient hypoxia occurs due to local oxygen depletion [21]. The cascade of proteome alterations that occurs in response to hypoxia begins with the transcription factor, Hypoxia Inducible Factor (HIF). HIF consists of alpha and beta subunits [22,23]. HIF-1α and HIF-1β are the most common isoforms; expression of HIF-2α and HIF-3α is more limited to healthy (non-cancerous) tissue [23]. Synthesis of the alpha subunit is oxygen independent while degradation is oxygen dependent [22,24]. Under conditions of hypoxia, the alpha subunit of HIF is stabilized and is then able to translocate to the nucleus [22,24,25]. Once localized to the nucleus, HIF-α forms a complex with HIF-β; this activated HIF complex is then able to bind to hypoxia responsive elements (HRE) on target genes inducing transcription [22,24]. The Pasteur and Warburg Effects Traditionally, in the presence of oxygen cells obtain energy through oxidative phosphorylation (OXPHOS) and oxygen inhibits glycolysis; this inhibition is known as the Pasteur effect [39]. Some cancer cells also undergo aerobic glycolysis (the Warburg effect); this phenomenon was first discovered by Otto Warburg in 1930 [39,42-45]. As there is a constant flux in the oxygenation states of a solid tumor (between normoxic, hypoxic, and anoxic levels), increased glycolysis in both the absence and presence of oxygen are important hallmarks of cancer [34,35,[39][40][41]. The biological motivation for increased glycolysis is multifold. First, glycolysis is much safer than oxidative phosphorylation. OXPHOS produces ROS which can be very damaging to the delicate balance of ROS maintained in an abnormal cancer cell; decreasing reliance on OXPHOS is a means of limiting ROS accumulation [39,46]. Secondly, although the net energy yield from glycolysis is much lower than OXPHOS (2 ATP verses 36), the process is much faster, providing a cell with direct energy acquired in the cytoplasm [39]. Third, a decreased reliance on OXPHOS ensures that oxygen does not become a limiting factor to survival [39,47]. Fourth, the glycolytic pathway provides precursors for biomolecules necessary for proliferation [39,47]. 
Fifth, this increased glycolytic character may provide tumor protection and enhance invasion; the low pH resulting from lactic acid accumulation may sensitize normal cells to invasion while protecting the tumor from the immune system [39,47]. Synergistically, these biological motivations for increasing glycolysis provide a survival advantage for a cancer cell. As noted above, many glycolytic proteins are HIF targets, HIF is activated by many factors independent of oxygen levels, and HIF has been established to contribute to both the Pasteur and Warburg effects. Due to the fluctuating oxygenation states within a tumor it is possible that hypoxia induced glycolysis (the Pasteur effect) may pre-condition cancer cells for aerobic glycolysis (the Warburg effect). As demonstrated by the schema in Figure 1, the first aim of the current study was to explore the relationship between hypoxia, MDR, and glycolysis in a panel of breast and ovarian cancer cell lines (top half of Figure 1). This was done by exposing the panel of cells to normoxic and hypoxic conditions, extracting the nucleic and basal protein, and examining protein expression through western blotting. Proteins included markers of MDR (Pgp and EGFR), markers of hypoxia (HIF-1α), and glycolytic markers (GLUT1 and HXK2); additional protein markers were also examined. Also demonstrated by the schema in Figure 1, the second aim of this study was to compare the protein expression of MDR, hypoxic, and glycolytic markers in tumor xenografts established from cells preexposed to normoxic conditions and from xenografts established from cells pre-exposed to hypoxic conditions (bottom portion of Figure 1). This was done through immunohistochemistry of the excised tumors. Within the two xenograft groups (normoxic pre-exposure and hypoxic pre-exposure), tumors of three different sizes were examined (100 mm 3 , 250 mm 3 , and 500 mm 3 ). This study resulted in the establishment of an orthotopic model for MDR breast cancer. Cell Culture and Hypoxia For this study, a panel of ovarian and breast cancer cell lines were selected. An established MDR ovarian cancer cell line, SKOV3 TR cells, were used as a positive control for MDR protein character. Corresponding wild-type and hypoxic SKOV3 cells were also used. MDA-MB-231 breast cancer cells, were selected for their aggressive character. These cells are triple negative (negative for estrogen receptors, progesterone receptors, and HER2; human epidermal growth factor receptor 2). Triple negative breast cancer is known to be one of the most aggressive, recurrent multiforms of breast cancer. Another ovarian cancer cell line, OVCAR5 cells, were selected as a less aggressive comparison to the other cell types. MDA-MB-435 cells were selected as a negative control for EGFR expression. SKOV3 cells, MDA-MB-231 cells, and OVCAR5 cells were obtained from ATCC (Manassas, VA). The SKOV3 TR cells and the MDA-MB-435 cells were a kind gift from Dr. Duan (Massachusetts General Hospital, Sarcoma Molecular Biology Laboratory). Cells were plated at very low density and incubated at 37°C and maintained in RPMI-1640 media (Mediatech, Inc; Manassas, VA) supplemented with 10% fetal bovine serum (Gemini Bio-products; West Sacramento, CA) and 1% penicillin/ streptomycin/amphotericin B mixture (Lonza; Walkersville, MD). 
Hypoxic conditions were created using low-oxygen gas: cell culture flasks were placed in a modular incubation chamber (Billups-Rothenberg, Inc.; Del Mar, CA), flushed with a 0.5% O2, 5% CO2, nitrogen-balanced gas for five minutes, and incubated at 37°C for various time points. As per the manufacturer's recommendations, the chamber was filled at a rate of 20 liters/minute for five minutes for complete saturation.

Protein Extraction and Western Blot Analysis

Basal and nucleic protein fractions were extracted from cells grown to 90% confluency in 75 cm2 tissue culture flasks under normoxic and hypoxic conditions. Before extraction, cells were microscopically examined to confirm cell viability and the absence of excessive cell death. Basal protein was extracted using a high-salt lysis buffer at 4°C. Nucleic protein was obtained using the NE-PER Nuclear and Cytoplasmic Extraction kit (Pierce Biotechnology; Rockford, IL). Protein concentrations were quantified using the BCA Protein Assay (Pierce Biotechnology). Protein was separated on 4-20% gradient SDS-PAGE gels (PAGEgel, Inc.; San Diego, CA) and transferred onto PVDF membranes (0.45 μm pore; Millipore, Billerica, MA). Membranes were blocked for 30 minutes with StartingBlock™ buffer (Pierce Biotechnology) before an overnight incubation with the primary antibody at 4°C. Membranes were then washed with TBST for 10 minutes (three times) and subsequently incubated with a horseradish peroxidase-conjugated secondary antibody for 1 hour. Membranes were again washed with TBST, flash rinsed with deionized distilled water, incubated for 2-10 minutes in an enhanced chemiluminescence substrate (Pierce Biotechnology), and imaged using a Kodak FX Imaging Station (Rochester, NY). All blocking and washing steps were conducted at room temperature. The P-glycoprotein antibody was purchased from Calbiochem, while the EGFR antibody was purchased from Cell Signaling Technology (Danvers, MA). The remaining primary and secondary antibodies were purchased from Abcam (Cambridge, MA).

Protein Quantification

Semi-quantitative analysis was performed on the western blot data presented in Figure 2; the ImageJ procedure is described below, following the figure legends.

The second phase of the study entailed selecting one cell line, exposing the human breast cancer cells to either normoxic or hypoxic conditions for five days, and then xenografting these cells into the mammary fat pad of nude mice. After tumors grew to 100 mm³, 250 mm³, and 500 mm³, they were excised and immunohistochemistry (IHC) was used to assess the expression of MDR, hypoxic, and glycolytic markers. Hypoxic pre-conditioning of xenografted cells did result in tumors with more MDR character than tumors established from normoxic cells. EGFR, epidermal growth factor receptor; HIF, hypoxia inducible factor; HXK2, hexokinase 2; Pgp, P-glycoprotein; GLUT1, glucose transporter 1.

Figure 2 Protein Expression Analysis. Nucleic protein (A) and basal protein (B) were extracted from the panel of cell lines grown under normoxic (wild-type; WT) and hypoxic conditions (three and five days of hypoxia; 3-day Hyp and 5-day Hyp). The nuclear protein was probed for expression of the hypoxia inducible transcription factors HIF-1α and HIF-2α (TATA-binding protein was used as a nuclear loading control).
Basal protein was probed for expression of MDR markers (P-glycoprotein, Pgp; multidrug resistance protein 1, MRP1), EGFR, glycolytic proteins (GLUT-1 glucose transporter; Hexokinase 2, HXK2; glyceraldehyde-3-phosphate dehydrogenase, GAPDH; lactate dehydrogenase, LDH), and mitochondrial ATP synthase. β-actin was used as a loading control for basal protein. J software was used; the integrated density of each band was measured, this value was divided by the integrated density of the control band to determine the relative intensity. The control band for nucleic protein expression was TATA-binding protein while the control band for basal protein expression was β-actin. Animals and Orthotopic Model Development Female nu/nu mice were procured from Charles River Laboratories (Wilmington, MA) and were housed in sterile cages on a 12:12 light/dark cycle with ad libitum acess to food and water. All procedures were approved by the Northeastern University Animal Care and Use Committee. A total of 36 mice were used for this study; 18 received tumor cells pre-exposed to normoxic conditions for five days and 18 received tumor cells preexposed to hypoxic conditions for five days. Each group (normoxic and hypoxic) was further dived into three subgroups of 6 mice based on tumor size (100 mm 3 , 250 mm 3 , and 500 mm 3 ). To establish the xenografts, approximately 2 million human breast cancer cells (normal MDA-MB-231 or hypoxic MDA-MB-231 cells) suspended in a 100 μl of a 50:50 mix of matrigel and serum free medium was injected into the mammary fat pad of the mice while they were under light isoflurane anesthesia. Pre-chilled, sterile syringes with 27 gauge, ½'' needles were used to inject the tumor cells. Syringes were pre-chilled at 4°C to prevent coagulation and immediate gelling of the matrigel. The tumor size was measured every other day using Vernier calipers in two dimensions. Individual tumor volumes were calculated using the formula volume = [length × (width) 2 ]/2 where length is the longest diameter and width is the shortest diameter perpendicular to length. Tumors were allowed to grow to the allocated volumes of 100 mm 3 , 250 mm 3 , and 500 mm 3 . During this time, animals were monitored every alternate day for body weight, eating/drinking behavior, and general health. Once tumors reached the desired size, the mice were euthanized via carbon dioxide inhalation. Tumors were then excised. Immunohistochemistry of Tumors Excised tumors were embedded in section medium (Richard-Allan Neg 50*, Thermo Scientific, Waltham, MA), flash frozen in liquid nitrogen, and stored at -80°C until use. Embedded tumors were thawed to -20°C, cryo-sectioned into 7 μm thick sections, and mounted onto glass slides (SuperFrost Plus ® , Thermo Scientific, Waltham, MA). Sections were outlined with an Aqua Hold Pap Pen (Scientific Device Laboratory, Des Plaines, IL), air dried at room temperature, then stored at -20°C. Sections were then thawed to room temperature and fixed in ice-cold acetone for 10 minutes. Sections were air dried at room temperature for 1 hour, and rinsed in two changes of cold PBS (5 minutes each). Then sections were incubated with 100 μl of IHC Select ® Blocking Reagent (Chemicon, Billerica, MA) in a humidified chamber at 37°C for 30 minutes. The blocking buffer was then drained off, the slides rinsed in PBS, and then each section was incubated with 100 μl of primary antibody diluted in IHC Select ® Antibody Diluent Solution (Chemicon, Billerica, MA), overnight at 4°C. 
Slides were rinsed in two changes of PBST and each section was incubated with 100 μl of secondary antibody diluted in IHC Select ® Antibody Diluent Solution (Chemicon, Billerica, MA), at room temperature for 30 minutes. Slides were washed in two changes of PBS and incubated with a solution of Alexa Fluor ® 568 phalloidin (to stain F-actin) and Hoechst 33342 (to stain nuclei) (Invitrogen; Carlsbad, CA) for 20 minutes. Slides were rinsed in PBST and dehydrated in 95% ethanol for 2 minutes, and 100% ethanol for two exchanges (3 minutes each). Tissue sections were then immersed with Prolong Gold ® Antifade reagent (Invitrogen; Carlsbad, CA), covered with glass cover slips, and allowed to cure overnight at room temperature. All primary antibodies were from Cell Signaling Technology (Danvers, MA), except for the GLUT-1 antibody which was from Abcam (Cambridge, MA). The secondary antibodies were Alexa Fluor ® 488 goat anti-mouse IgG (H + L) and Alexa Fluor ® 488 goat anti-rabbit IgG (H + L) (Invitrogen; Carlsbad, CA). Slides were imaged using an Olympus IX51 Microscope. Protein Analysis of Hypoxic, MDR, and Glycolytic Markers The panel of cell lines (detailed in the materials and methods) were exposed to hypoxic conditions (0.5% oxygen) for three and five days. The basal protein and nucleic protein from cells exposed to hypoxic and normoxic conditions was extracted and western blotting was used to analyze the expression of hypoxic factors, MDR markers, and downstream proteins induced under hypoxic regulation. Nuclear protein analysis (Figure 2A) revealed that HIF-1α expression was apparent in all cells but was elevated in the MDR cells (lane 2), in cells exposed to hypoxic conditions for 3 days (lanes 3, 6, and 9), and in cells exposed to hypoxic conditions for 5 days (lanes 4, 7 and 10). Interestingly, in the SKOV3 cell line (lane 3) and in the OVCAR5 cell line (lane 9), 3-days of hypoxia was not substantial in inducing HIF-2α nuclear translocation ( Figure 2A). Yet, in the MDA-MB-231 breast cancer cells (lane 6), 3 days of hypoxia did induce nuclear translocation of HIF-2α. Five days of hypoxic exposure resulted in nuclear accumulation of HIF-2α in all three cell lines (lane 4, 7, and 10). Also of significance, HIF-2α nuclear translocation was not evident in the MDR cells. Markers of MDR were also examined in the three cell lines under normoxic and hypoxic conditions ( Figure 2B). The high expression of Pgp in the MDR cells is evident (lane 2). Three days of hypoxic exposure was not sufficient to induce Pgp expression in the SKOV3 cells Although over-expression of growth factors is characteristic of cancer cells in general, this over-expression is a particularly vital survival mechanism for hypoxic MDR tumor cells as these cells are often distal from a constant supply of nutrients and growth factors; hypersensitivity to these factors increases the propensity for growth and maintenance. As expected, EGFR expression was elevated in MDR cells (lane 2) relative to wild type As with other HIF-1α targets, the level of EGFR induction seems to be cell type dependent and related to a threshold of hypoxic exposure. TATA-binding protein was used as a loading control for nucleic protein (Figure 2A) and β-actin was used as a loading control for basal protein ( Figure 2B). 
To examine the relationship between hypoxia, MDR, and glycolysis the protein expression of four glycolytic proteins that are transcriptionally activated by HIF-1α were analyzed in the panel of cells; GLUT-1 glucose transporter, Hexokinase 2 (HXK2), glyceraldehyde-3phosphate dehydrogenase (GAPDH), and lactate dehydrogenase (LDH) ( Figure 2B). GLUT-1 glucose transporter is significant to MDR hypoxic cell survival in much the same way as EGFR. These cells that are located in poorly or nonvascularized regions are starved for nutrients and increase their prospects of survival by increasing the expression of nutrient importers such as GLUT-1. As demonstrated in Figure Interestingly, expression of mitochondrial ATP synthase, the ATP producing component of OXPHOS coupled to the electron transport chain, did not correlate with hypoxic exposure across the cell lines. There was no difference in expression between the MDR cells and the normoxic SKOV3 cells while five days of hypoxia decreased expression in this cell line. Five days of hypoxia also decreased expression in the OVCAR5 cell line. On the other hand, hypoxia (three and five days) increased the expression of ATP synthase in the MDA-MB-231 cell line. Semi-quantitative analysis was performed on the western blot data presented in Figure 2. The results of this analysis are presented in Figure 3, Figure 4, and Figure 5. The semi-quantitative analysis is consistent with the interpretations of the western blot data; hypoxic exposure appears to increase the expression of HIF-1α, HIF-2α, Pgp, MRP-1, EGFR, and the glycolytic proteins in the SKOV3 and MDA-MB-231 cells yet the OVCAR5 cells appear to be resistant to hypoxia induced MDR and hypoxia induced glycolysis. Development and Characterization of MDR Tumor Xenografts The MDA-MB-231 cells were selected for hypoxic preconditioning and tumor xenografting as these cells demonstrated the most notable response to hypoxia in the in vitro studies; hypoxia substantially increased the expression of MDR markers and glycolytic proteins relative to the expression in normoxic cells. A total of 36 mice were used for this study; 18 mice were injected with MDA-MB-231 cells grown under normoxic conditions while 18 were injected with MDA-MB-231 cells pre-exposed to hypoxic conditions. The cells were injected in the mammary fat pad of the mice for the development of orthotopic tumors. Tumors were grown to three sizes; 100 mm 3 , 250 mm 3 , and 500 mm 3 . The objectives of this study was to determine which group resulted in tumors with the most MDR character by assessing (1) if there was a difference between tumors developed from cells grown under normoxic conditions and tumors developed from cells pre-exposed to hypoxic conditions, (2) if there was a difference between tumors of different sizes. To determine such differences between the groups, 6 proteins were selected for tissue immunohistochemistry; Pgp, CD-31, HIF-1α, EGFR, HXK2, and GLUT-1. Tumors were sectioned into 7 uM slices, F-actin was labeled with Alexa Fluor ® 568 phalloidin (red), nuclei were stained with Hoechst 33342 (blue), and the sections were probed for the protein of interest using primary antibodies against the protein and Alexa Fluor ® 488 conjugated secondary antibodies (green). The 100 mm 3 tumors are illustrated in Figure 6, the 250 mm 3 tumors are illustrated in Figure 7, and the 500 mm 3 tumors are illustrated in Figure 8. Figures 6, 7, &8 depict IHC of tumor cores representative of n = 6 for each group. 
There were demonstrated differences in the protein expression profiles of 100 mm 3 tumors developed from cells grown under normoxic conditions and from tumors developed from cells pre-exposed to hypoxic conditions. There was minimal and localized expression of Pgp in the tumors developed from normoxic cells whereas there was profuse expression of Pgp in the tumors developed from hypoxic cells. There was no apparent difference in CD-31 expression; this marker was used to assess angiogenesis. Again, there was localized and minimal expression of HIF-1α in the tumors developed from cells grown under normoxic conditions whereas there was profuse expression in the tumors derived from hypoxic cells; this expression appeared to be localized with both the cytoplasmic and nucleic fractions of the cells. Both groups of tumors had high levels of EGFR. Expression of hexokinase 2 and GLUT-1 was minimal and localized in the tumors derived from normoxic cells and profuse in the tumors derived from hypoxic cells. These expression trends continued in the 250 mm 3 tumors (Figure 7), although there was less of a difference as the tumors developed from normoxic cells seem to express higher levels of these marker proteins in tumors of this size. Excessive expression of all marker proteins (except CD-31) was apparent in both groups of Figure 3 Semi-quantitative Analysis of HIF-1a and HIF-2a. Image J software was used to determine the relative intensity of protein expression. The integrated density of each band was measured; this value was divided by the integrated density of the TATA-binding protein band to determine the relative intensity. All wild-type, normoxic (WT) cell lines are indicated by checkered green bars, all cell lines exposed to 3days hypoxia (3HYP) are indicated by diagonal-lined pink bars, all cell lines exposed to 5-days of hypoxia (5HYP) are displayed by blue brick bars, and the established MDR cell line (TR) is displayed as the speckled purple bar (the second bar on each graph). The cell lines are SKOV3 (SK), MDA-MB-231 (231), and OVCAR5 (OV). Figure 5 Semi-quantitative Analysis of GLUT-1, HXK2, GAPDH, and LDH. Image J software was used to determine the relative intensity of protein expression. The integrated density of each band was measured; this value was divided by the integrated density of the β-actin protein band to determine the relative intensity. All wild-type, normoxic (WT) cell lines are indicated by checkered green bars, all cell lines exposed to 3days hypoxia (3HYP) are indicated by diagonal-lined pink bars, all cell lines exposed to 5- Semi-quantitative Analysis of P-gp, MRP-1, EGFR, and MITO-ATPase. Image J software was used to determine the relative intensity of protein expression. The integrated density of each band was measured; this value was divided by the integrated density of the β-actin protein band to determine the relative intensity. All wild-type, normoxic (WT) cell lines are indicated by checkered green bars, all cell lines exposed to 3-days hypoxia (3HYP) are indicated by diagonal-lined pink bars, all cell lines exposed to 5-days of hypoxia ( tumors grown to 500 mm 3 (Figure 8). The only substantial CD-31 expression was in the tumors derived from normoxic cells grown to 500 mm 3 (Figure 8). Based on these distinctions, the 100 mm 3 tumors developed from hypoxic cells appear to have the most MDR character that is clearly distinct from the tumors developed from normoxic cells. 
As the tumor size increased, the differences between the tumors derived from normoxic and hypoxic cells decreased. There was also a remarkable difference in the growth rate of tumors developed from normoxic cells and those developed from hypoxic cells ( Figure 9). As demonstrated in Figure 9, the tumors developed from hypoxic cells reached a 100 mm 3 size within 3 weeks whereas it took between 7-8 weeks for the tumors developed from normoxic cells to reach this size. After reaching 100 mm 3 , the differences in the rates of tumor growth decreased as it took 2.5-3 months for both groups of tumors to reach 500 mm 3 (not shown). To further examine the distinctions between the 100 mm 3 tumors developed from normoxic cells and those developed from hypoxic cells, IHC of the tumor perimeter was examined ( Figure 10 and Figure 11). (Figures 6, 7, and 8 represent the tumor cores). There is a consistent difference between the expression of these proteins in the core ( Figure 6) and perimeter ( Figure 10 and 11) of tumors developed from normoxic cells and in those developed from hypoxic cells. As with the tumor core, there is a low level of Pgp expression in the perimeter of tumors developed from normoxic cells. On the other hand, Pgp expression seemed higher in the perimeter of tumors developed from hypoxic cells than in the core. Hif-1α expression was also higher in the perimeter of tumors derived from hypoxic cells than in the core. Hif-1α expression was low in the core of tumors derived from normoxic cells and non-detectable in the perimeter. EGFR expression appeared to be decreased in the perimeter of tumors grown from normoxic cells relative to the core. Conversely, EGFR expression maintained high expression levels in both the perimeter and core of tumors developed from hypoxic cells. There was no apparent difference in CD-31 expression between the perimeter and core of both groups of tumors as all sections had either very low or non-detectable levels of this protein. The expression of hexokinase 2 was consistent between the tumor perimeter and tumor core for both groups of tumors; with low and localized levels in the tumor sections developed from normoxic cells and highly profuse expression in the tumor sections developed from Figure 8 Immunohistochemistry of 500 mm 3 Normoxic and Hypoxic Tumor Xenografts. Tissue sections were probed with primary antibodies against the protein of interest, then labeled with Alexa Fluor ® 488 conjugated secondary antibodies (green). F-actin was stained with Alexa Fluor ® 568 phalloidin (red) and nuclei were stained with Hoechst 33342 (blue). These images represent protein expression in the tumor core. hypoxic cells. GLUT-1 expression was high in both the tumor perimeter and core of tumor sections derived from hypoxic cells whereas the perimeter of tumors developed from normoxic cells had higher and more profuse expression than the core where expression was very localized. Collectively, the expression of Pgp and Hif-1α was elevated in the perimeter of tumors established from hypoxic cells relative to the core while the perimeter of tumors established from normoxic cells had lower expression of EGFR and higher expression of GLUT-1 relative to the tumor core. Based on the differences in Figure 9 Normoxic and Hypoxic tumor Growth; Time to Achieve 100 mm 3 . Normoxic and Hypoxic tumor xenografts were established in the mammary fat pad of female nude mice and tumor growth was monitored until 100 mm 3 tumors were achieved. 
The tumor size was measured every other day using Vernier calipers in two dimensions. Individual tumor volumes were calculated using the formula volume = [length × (width) 2 ]/2 where the length was the longest diameter and the width was the shortest diameter perpendicular to length. For each group, n = 6. Each data point represents the mean ± SD. protein expression between the groups of tumors in both the perimeter and the core, as well as the differences in the early growth rates of the tumors, it was inferred that 100 mm 3 tumors developed from hypoxic cells have more MDR character than 100 mm 3 tumors developed from normoxic cells. These tumors established from hypoxic pre-conditioned cells may be a useful model for the study of MDR breast cancer. Inclusion of MDA-MB-435 Cells Although the lack of EGFR expression is well characterized in the MDA-MB-435 cell line, the actual origin of this cell line is controversial. Recent studies suggest that this cell line has been incorrectly characterized as a breast carcinoma cell line and is more likely derived from melanocytes [48]. Regardless of the controversy, this cell line was included in these experiments as a negative control for EGFR-expression. Protein Changes in Response to Hypoxia Expression of active HIF-1α in the nucleus of cells grown under normoxic conditions is most likely a cellular response to previously mentioned oxygenindependent factors (such as EGFR or phosphatidylinositol 3-kinase), or a response to stress factors and homeostatic dis-regulation that is characteristic of cancer. For example, von Hippel-Lindau is a tumor suppressor protein that contributes to the transformation of certain types of cancer, such as renal cell carcinoma, via negative regulation of HIF-1α [22,49,50]. An interesting distinction between HIF-1α and HIF-2α expression was that HIF-1α expression was apparent in all cell lines whereas HIF-2α appeared only to be induced by hypoxia. This suggests that hypoxia induces the nuclear translocation of HIF-2α, yet oxygen-independent factors and conditions of cell stress (such as those in the MDR cells) are not sufficient to induce this translocation. Based on these results, it is likely that both HIF-1α and HIF-2α isoforms are central to the development of MDR but induction of each isoform occurs in response to different cellular and environmental factors. This could result in different MDR phenotypes as the targets of each isoform vary. Increased glycolysis in the absence and presence of oxygen (the Pasteur and Warburg effects) are important hallmarks of cancer. The Warburg Effect is associated with aggressive, MDR cancer [4,21,29,51,52]. Glycolysis is another thread that connects MDR and hypoxia; increased glycolysis is characteristic of MDR cancer and most glycolytic enzymes are HIF-1α targets that are transcriptionally activated in response to hypoxia [21,22,24,26,28,29,[32][33][34][35][36]. Activation of HIF-1α by glycolytic proteins has also been demonstrated, suggesting a feedback loop between the glycolytic and HIF-1α pathways [34]. However, as these results demonstrate, cancer cells do not have a universal response to hypoxia. The Pasteur effect appears to be duration dependent in the SKOV3 and MDA-MB-231 cells yet the OVCAR5 cells seem resistant to hypoxia induced glycolysis. The feedback loop between hypoxia and glycolysis may be more relevant in certain subsets of cancer and non-existent in other types of cancer. 
The hypoxic transformation, phenotype changes in response to hypoxia, do not appear to be consistent among the cell lines in regard to mitochondrial ATP synthase expression. The hypoxic transformation of MDA-MB-231 cells may lead to such an excessive demand for energy that the expression of both glycolytic and OXPHOS enzymes is increased as a survival response. Hypoxic transformation of SKOV3 cells may depress the cell cycle, decreasing the energetic demands of cell, leading to decreased OXPHOS. It is possible also that these differences in ATP synthase expression correlate more to mitochondrial differences between the cell lines than to a hypoxic response; a more complete profiling of mitochondrial enzymes would be necessary to determine this. Collectively, the protein analysis results indicate that each cell line has a cell-specific threshold for hypoxic transformation. The MDA-MB-231 breast cancer cells underwent the most significant hypoxic transformation with an increase in all proteins examined (HIF transcription factors, HIF targets, MDR proteins, and glycolytic proteins). Although hypoxia induced MDR character and glycolysis in the SKOV3 ovarian cancer cells, the transformation required more chronic hypoxic exposure (five days). Five days of hypoxia was not sufficient in transforming the OVCAR5 cells. These results demonstrate the diverse proteomic responses of the different cell lines. Although the OVCAR5 cells express GLUT-1, these cells appear to have less overall glycolytic character than the other cells examined (lower HXK2, GAPDH, and LDH) and appear to be resistant to hypoxia induced MDR and hypoxia induced glycolysis. These cells may require additional stimuli for hypoxic transformation or may have a genetic resistance to the hypoxia induced HIF cascade. These results also suggest that the distinction between the Pasteur and Warburg effects may not be universal as the degree of hypoxia (percent oxygen depletion) is critical in determining the cell specific response. The Pasteur effect is a suppression of glycolysis in the presence of oxygen whereas the Warburg effect is glycolysis in the presence of oxygen. The current results reveal that there are cellspecific thresholds for the Pasteur effect; additional and more detailed studies would likely reveal a critical oxygen concentration for each cell type where the Pasteur and Warburg effects are difficult to distinguish. Perhaps more significant than the isolated study of the Pasteur and Warburg effects is the unfolding relationship between HIF and glycolysis as this relationship is present in cancer cells under both normoxic and hypoxic conditions [34,35,[39][40][41]. Although unlikely, as cells were plated at a low density, it is important to note that as the oxygen consumption rate was not measured in these studies, it is possible that peri-cellular oxygen depletion in the normoxic cell population could create hypoxic conditions, which would make the expression profile of normoxic cells more similar to hypoxic cells. Orthotopic Model Development An important finding of this study was that the protein expression profile of both tumor xenografts developed from normoxic cells and xenografts developed from hypoxic cells changed as the tumor size increased. The 100 mm 3 xenografts established from hypoxic cells were clearly distinguishable from the 100 mm 3 xenografts established from normoxic cells, but these differences decreased as the tumor size increased. 
The elevated level of HIF-1α in the perimeter of 100 mm 3 tumors developed from hypoxic cells is less likely due to lower oxygen relative to the core (as it is probable that the perimeter is more vascularized) and more likely due to oxygen-independent factors, cell stressors, oncogenes, and tumor suppressors that characterize cancer and up-regulate HIF. The elevated level of Pgp in the perimeter of tumors (100 mm 3 ) developed from hypoxic cells relative to the core may be due to the increased expression of HIF, as Pgp is transcriptionally activated by HIF, or may be a spatial response to microenvironmental factors as it is these cells in the perimeter of a tumor that are most likely to have the first contact with therapeutic agents. The decreased EGFR expression in the perimeter of tumors established from normoxic cells relative to the core may be because the core of the tumor is more starved for growth factor activation, and hence increases EGFR in an attempt to be more susceptible to growth factors. Alternatively, the higher level of GLUT-1 in the perimeter of tumors derived from normoxic cells relative to the core may be due to a positive feedback loop with available glucose. Collectively, the IHC analysis of the tumor cores and perimeters (as well as the growth curves) illustrate the more pronounced MDR character of the 100 mm 3 tumors established from hypoxic cells relative to 100 mm 3 tumors developed from normoxic cells. Hypoxic pre-conditioning of the tumor cells may result in more aggressive, MDR xenografts due to the upregulation of HIF and subsequent activation of HIF targets (such as growth factor receptors, MDR efflux pumps, and glycolytic proteins); this was demonstrated by the western blot data in this study. This activation may provide these cells with an initial growth advantage during xenografting establishment, as demonstrated by the tumor growth curves. The hypoxic pre-conditioning may activate the HIF cascade and MDR character in the cells, enabling initial, rapid growth of the xenografted cells before the establishment of profuse tumor vasculature, whereas the xenografts established from normoxic cells may not exhibit aggressive phenotypes until the angiogenic process is complete. This correlates with the expression of CD-31 (angiogenic marker), as the only substantial CD-31 expression was seen in the tumor sections of 500 mm 3 xenografts established from normoxic cells. Expression of CD-31 in the 500 mm 3 xenografts established from normoxic cells coincided with an increase in expression of HIF and all down-stream targets of HIF associated with MDR, similar to the expression of HIF, Pgp, EGFR, HXK2, and GLUT-1 in the the 500 mm 3 xenografts established from hypoxic cells. Angiogenesis has long been associated with primary tumor growth and metastasis, but recent evaluations of anti-angiogenic therapy reveal that anti-angiogenic therapies can actually increase metastasis and invasion [53][54][55]. These studies suggest hypoxic tolerance as a possible mechanism for this increased invasion and metastasis [54]. Hypoxic pre-conditioning may elicit a similar cellular response as is seen with anti-angiogenic therapy, resulting in aggressive disease. Exploiting the Pasteur effect by using hypoxic preconditioning is a useful technique for creating MDR tumor xenografts. We have developed an animal model of MDR breast cancer using hypoxic MDA-MB-231 cells. 
Although this approach of using hypoxic cells for xenografts will not be effective for every type of cancer, as demonstrated with the OVCAR5 cells, it will undoubtedly be applicable to other cancer cell lines. This approach could dramatically expand the repertoire of MDR cancer models and could be used to study the Pasteur and Warburg effects in orthotopic models. Using this method of hypoxic pre-conditioning could enable the rapid in vivo study of many different types of MDR cancer, which would not otherwise be possible. Conclusions Hypoxia alone is sufficient in transforming normal cancer cells into MDR cancer. Hypoxia also induces glycolysis in some cancer cells (the Pasteur effect) further increasing their survival advantage. Increased expression of MDR and glycolytic proteins in response to hypoxia is cell-type and duration dependent. An animal model was developed for orthoptopic MDR breast cancer using hypoxic MDA-MB-231 cells in female athymic mice. Immunohistochemistry of the tumor model demonstrated that the tumors derived from hypoxic cells had higher MDR character relative to tumors derived from normoxic cells. This character was maintained in the tumor perimeter and core. The tumors established from hypoxic cells also demonstrated more aggressive growth relative to tumors established from normoxic cells. This method of hypoxic pre-conditioning could be used to develop a multitude of orthotopic animal models of MDR cancer, further expanding the study of the disease.
9,279.2
2011-02-14T00:00:00.000
[ "Biology", "Medicine" ]
MT-YOLOv5: Mobile terminal table detection model based on YOLOv5 Table detection is an important task of optical character recognition(OCR). At present, table detection for desktop applications has basically reached commercial requirements. With the advancement of informatization, personal demand for table detection has gradually increased. There is an urgent need to establish a table detection method that can be deployed on handheld devices. This paper proposes a mobile terminal table detection model based on YOLOv5. First, we used YOLOv5 as the main framework of the model. However, considering the problem of connection redundancy in the backbone of YOLOv5, on the basis of retaining the YOLOv5 multi-scale detection head, we replaced the backbone of YOLOv5 with the same excellent Mobilenetv2. In addition, considering the non-linear defects of the lightweight model, we use deformable convolution to make up for it. This paper has been evaluated on the ICDAR 2019 dataset, and the results show that compared with the baseline model, the model reduces the number of parameters by half and increases the detection speed by 47%. At the same time, the model can reach 35.25 FPS on ordinary Android phones. Introduction is to process the image and locate the regions of the table. With the rapid development of the economy, structured data information also presents explosive growth. As a very important way to organize structured data, tables have been widely used in common scenarios such as corporate bills, financial documents and government documents because of simplicity and visualization [1][2][3]. Table detection is very popular. And it can be used to facilitate subsequent OCR tasks. Now many table detection and recognition tasks have achieved excellent results. The results are similar to the manual detection results on many datasets. For example, the Decnt model proposed by Siddiqui et al. [4] is 99% accuracy on the ICDAR 2013 competition dataset [5]. And the YOLOV3 proposed by He et al. [6] is 97% accuracy on the ICDAR 2017 competition dataset [7]. But these methods need to be deployed with the help of a server. To reduce the computational complexity and the consumption of computer hardware resources, and to facilitate users, it is of certain significance to study a fast table detection method that works on the handheld mobile terminal. At present, the table processing methods working on the mobile terminal does not build a table detection model, but only recognize the table data. The process is shown in Figure 1. In these methods, users must manually clip the table, and then upload it to the server to detect the contents of the table. This makes it very inconvenient to use. Therefore, this paper proposes a lightweight table detection method on the mobile terminal to solve this problem. This paper thinks that the table detection model on the desktop side has reached state-of-the-arts. However, these state-of-the-arts models cannot be directly applied to the mobile terminal due to As far as known, YOLOv5 [8] is currently the most widely used detection model. YOLOv5 has reached a milestone in detection tasks. It is widely used in target detection. However, "Condensenet" written by Huang et al. [9] demonstrated that dense connectivity introduces redundancies in the DenseNet section, which is the backbone of YOLOv5. This problem is particularly prominent when YOLOv5 is used as a lightweight model. Replacing DenseNet with a more efficient network is a better method. 
Based on extensive literature review and experiments, we find that Mobilenetv2 [10] is a better alternative. Mobilenetv2 applies many depthwise separable convolutions to avoid invalid links in the network and ensure network performance. Besides, improving the network backbone with Mobilenetv2 can increase its adaptability on the mobile terminal. Therefore, this paper proposes a solution that combines the overall framework of YOLOv5 with the backbone network of Mobilenetv2. Otherwise, in the lightweight model, if the nonlinear fitting ability of the model is poor, the overall performance of the model will be affected. To make up for this defect and improve the nonlinear fitting ability, we introduce deformable convolution [11] into the multi-scale detection head of the model, hoping to improve the nonlinear fitting ability of the model. To sum up, this paper proposes a lightweight table detection method combining the overall framework of YOLOv5 with the backbone network of Mobilenetv2 and deformable convolution. This paper presents table detection on the mobile terminal base on YOLOv5 for the first time(MT-YOLOv5). Contributions to this article are as follows: 1. As far as we know, MT-YOLOv5 is the first table detection model for edge devices proposed by us, which has achieved a good balance between response speed and model accuracy. 2. We replaced the backbone of YOLOv5 with the same excellent Mobilenetv2 to reduce the number of parameters and optimize the invalid connections, making the model easier to deploy on the mobile terminal. 3. To improve the nonlinear fitting ability of the lightweight model, we add deformable convolution into the multi-scale detection head of the model. Related Work The traditional table detection method usually uses the features of the table itself (such as line segment, texture, separator, etc.) to analyze the position of the table. Thomas et al. [12] proposed a T-RECS system for identifying table structures in electronic or paper documents. First, a random word is selected as a seed and the word's block is recursively expanded to all vertically adjacent neighbour blocks.Some post-processing methods have also been added to handle isolated words or other errors. Cesarini et al. [1] proposed a method based on MXY trees to determine the existence of tables by searching parallel lines. Besides, located tables can be merged based on [3] proposed pdf2table to analyze text blocks in PDF documents and identify table regions according to the rules between these text blocks. But this approach is susceptible to header or footer noise and can lead to misjudgments. Hao et al. [13] obtained candidate regions by roughly screening location regions based on rules, and then determined whether these regions were tables by using CNN or CRF. Different from the above methods, Jing Fang et al. [14] used document layout analysis to detect table position. They proposed a method based on the visual separator and geometric content layout information to analyze page whitespace to obtain the number of page columns, and then detect table position through graphic lines and separator, etc. These approaches are usually rules-based and have an advantage in speed, but they all rely heavily on specific datasets and lack generalization performance. With the development of deep learning, the deep learning model based on the convolutional neural network has gradually returned to people's vision, generating many classic target detection models, such as Faster RCNN [15], YOLO [16], and so on. 
Some workers combine the characteristics of tables with these excellent target detection models and propose a table detection model based on deep learning [17][18][19]. These models have the best performance on many classic datasets. Combined with the Faster RCNN model, Gilani et al. [17] applied deep learning to the field of table detection for the first time. They proposed that the table image should be firstly linked and analyzed through pre-processing, and the processed image should be put into the model for preliminary detection, and then the results should be optimized through a fixed post-processing method. Due to the variety of table styles, the algorithm is not accurate enough. On this basis, Sun et al. [18] proposed the corner point-based Faster RCNN model. They further refined the features of the table and proposed to locate the position of the table. The four corners of the table are located through another branch to further improve the accuracy of positioning. Their method can still achieve high accuracy even when IOU is large. While the corner approach can help with positioning accuracy, it's difficult to visualize the corner in a wide variety of formats. For example, there are no corners on some open tables. Besides, Huang et al. [6] proposed a tabular detection algorithm based on Yolov3. They proposed a new Anchor optimization strategy and three post-processing methods. Using these methods, Huang et al. achieved the best performance to date on the ICDAR 2017 dataset. However, because the specific post-processing method is only effective for a certain data set, their method has a lack of performance on the ICDAR 2013 dataset. Recently, Prasad et al. [19] explored a method to extract required knowledge from a small amount of data based on transfer learning based on Mask RCNN. In addition to directly utilizing the intuitive style features of the table and the deep learning model, some workers also directly study how to utilize the advantages of the deep model itself to optimize the detection ability of various style sheets. Besides, researchers have proposed some novel algorithms. Riba et al. [20], combined with a graph neural network, proposed a model to detect repeated join information in tables. The method does not consider fixed table rows and columns, but captures visual continuity in horizontal or vertical directions and determines the position of the table. But this method can only achieve good results on some specific report data sets. The above-mentioned methods based on deep learning all need to use a huge amount of parameters, and are lacking in real-time reasoning speed. Model framework The model architecture proposed in this paper can be seen in Figure 2. The left side is the backbone network composed of multiple bottleneck structures, and the right side is the multi-scale detection head. The input of the model is a three-channel image, 'conv' represents the standard convolution operation, and 'bottleeck' represents the bottleneck structure in Mobilenetv2, 'CSP' represents the CSP structure proposed by Wang et al [21], 'upsampling' represents the up-sampling operation, 'cat' in the figure represents splicing by channel. The output bounding box is a multi-group prediction result (x, y, w, h, conf). This framework uses the Mobilebetv2 backbone network, combined with YOLOv5's multi-scale detection head. 
Specifically, we first use a convolutional layer with a convolution kernel of 3, then a bottleneck layer for different channels, and finally a detection head with a CSP structure. As shown in Figure 2, '(32,3)' means that the output is a 32-channel bottleneck layer, and the number of repetitions is 3. The number of parameters in this model is only a 3.6million. The overall workflow of our work is shown in Figure 3. Deformable convolutional The table usually presents a form with a larger width and a smaller height. Using a neural network to capture such morphological features will be conducive to the sensitivity of the model. Traditional convolutional networks usually use convolutional kernels of size 33 or 55, but they are not suitable for objects such as tables, which usually appear as rectangles. The traditional convolution operation is defined as follows: ( , ) ( , ) In the convolutional layer of the network, the convolutional kernel shape of the connection between all levels is the same. This is problematic in-depth models, where different tables can appear in arbitrary shapes and sizes. Accurate perception of these tables requires the ability to dynamically adjust the receptive fields of the convolutional layer. Therefore, we use a deformed CNN [11] to replace the traditional CNN in the CSP part of the MT-YOLOv5 model. Figure 3 shows the basic process of deformable convolution. Its mathematical representation is as follows: Figure 4, it can be seen that the original convolution kernel will be offset in different directions. In actual training, it will automatically fit according to the shape of the dataset We evaluated our method on ICDAR 2019 Competition on Table Detection and Recognition (cTDaR) [22]. ICDAR 2019 cTDaR contains 1200 training data and 439 test data. The dataset consists of handwritten documents, scanned documents, and some financial statements. The ICDAR 2019 cTDaR datasets are in two styles: Modern and Archival. 'Modern' is a modern document style, and 'Archival' is a handwritten document style. Figure 5 shows both styles of documents. After unifying the pictures in different formats into JPG, we corrected some negative label information and converted it into a VOC competition data format. We used 1200 pieces of data as training samples and tested on 439 pieces of data. Evaluation index of experimental details Our experiments were conducted in a pytorch 1.7.0 environment with GTX TitanXP training equipment. The initial learning rate is 4 e  , the learning rate was reduced to 5 e  in the 100th iteration. The model trained a total of 500 arguments. The classic evaluation index of Intersection Over Union (IOU) was adopted in all the three datasets. We define the loss function as follows: i S represents the table area predicted by the model, and j S represents the ground truth. The sample with IOU greater than a certain value is denoted as positive sample true positive (TP) sample. Samples whose IOU is less than a certain value are labeled as negative sample false positive (FP) samples. Follows many evaluation methods, we use precision rate (P), recall rate (R) and F1 value evaluate our models. In addition, FPS(Frames Per Second) represents the number of responding per second of the model. Table 1 is the evaluation result of our method on the ICDAR2019 competition dataset. The IOU value we adopted was 0.8. In the table, 'Pre-process' represents the pre-processing of input images, and 'Post-process' represents the post-processing of model output results. 
'Params' represents the number of model parameters in millions(M). As can be seen from Table 1, compared with the baseline model, the MT-YOLOv5 reduces the number of parameters by nearly half but loses a little precision. Compared with other models with the same number of parameters, MT-YOLOv5 has obvious advantages. Besides, the model proposed in this paper only uses 3.6M of parameters. This is 1/40 of the number of participants used by the ICDAR2019 competition team. The results show that the method presented in this paper is similar to, or even better than these competition methods. Experimental results and analysis In conclusion, we have a distinct advantage over lightweight models of the same level, and we have a distinct advantage over traditional methods in terms of speed. We evaluate the responding speed of our model and the baseline model on different devices. We conduct responding speed experiments on commonly used mobile devices and desktop devices. The experimental results can be seen in Table 3. 439 test images from the ICDAR2019 dataset are used as samples. We resize the images to 320 pixels and 128 pixels. The responding speed of all the images is averaged to get the final FPS. Table 3 shows that the responding speed of the MT-YOLOv5 model is faster than 24FPS in the mobile terminal. This also means that real-time detection can be achieved. The model in this paper can omit the manual clipping step. On the mobile terminal, users don't need to manually clip out the table region of the image. They can extract the table directly with the model and occasionally fine-tune the table. This will speed up detection. Figure 6 shows some table detection examples of our model. Figure 6. MT-YOLOV5 table detection example. Conclusion In this paper, we propose a mobile terminal table detection model named MT-YOLOv5 based on YOLOv5, which can detect the table position in real-time on the mobile terminal. MT-YOLOv5 replacing the backbone network of YOLOv5 with MobileNetv2. Besides, we have added a deformable convolution module to enhance its nonlinear perception ability. As far as we know, this is the first work to table detect on the mobile terminal. The experimental results show that MT-YOLOV5 can achieve almost the same performance as the original model with less than half of the number of parameters.
3,642.8
2021-07-01T00:00:00.000
[ "Computer Science" ]
SPIRou reveals unusually strong magnetic fields of slowly rotating M dwarfs In this paper, we study six slowly rotating mid-to-late M~dwarfs (rotation period $P_{\mathrm{rot}} \approx 40-190\,\mathrm{dy}$) by analysing spectropolarimetric data collected with SPIRou at the Canada-France-Hawaii Telescope as part of the SPIRou Legacy Survey from 2019 to 2022. From $\approx$100--200 Least-Squares-Deconvolved (LSD) profiles of circularly polarised spectra of each star, we confirm the stellar rotation periods of the six M~dwarfs and explore their large-scale magnetic field topology and its evolution with time using both the method based on Principal Component Analysis (PCA) proposed recently and Zeeman-Doppler Imaging. All M~dwarfs show large-scale field variations on the time-scale of their rotation periods, directly seen from the circularly polarised LSD profiles using the PCA method. We detect a magnetic polarity reversal for the fully-convective M~dwarf GJ~1151, and a possible inversion in progress for Gl~905. The four fully-convective M~dwarfs of our small sample (Gl~905, GJ~1289, GJ~1151, GJ~1286) show a larger amount of temporal variations (mainly in field strength and axisymmetry) than the two partly-convective ones (Gl~617B, Gl~408). Surprisingly, the six M~dwarfs show large-scale field strengths in the range between 20 to 200\,G similar to those of M~dwarfs rotating significantly faster. Our findings imply that the large-scale fields of very slowly rotating M~dwarfs are likely generated through dynamo processes operating in a different regime than those of the faster rotators that have been magnetically characterized so far. INTRODUCTION M dwarfs are known to host strong magnetic fields with large-and small-scale field strengths that may exceed 1 kG (Morin et al. 2010;Kochukhov 2021).Zeeman-Doppler-Imaging (ZDI, Donati & Brown 1997;Donati et al. 2006) revealed different types of large-scale field topologies for M dwarfs: the partly convective early M dwarfs usually showing more complex, relatively weaker fields with nonaxisymmetric poloidal fields and significant toroidal components (Donati et al. 2008).Mid M dwarfs often display simpler and stronger, mainly poloidal and axisymmetric large-scale fields (Morin et al. 2008) whereas the fully-convective late M dwarfs of lowest masses may end up showing large-scale fields in either configuration, (Morin et al. 2010, see also the review by Donati & Landstreet 2009). Besides, magnetic activity (diagnosed by various proxies) increases for shorter rotation periods until it saturates, i.e. no longer increases with decreasing rotation periods (see e.g.Saar 1996;Wright et al. 2011).In the unsaturated regime, both large-and small-scale fields, diagnosed by polarised and unpolarised Zeeman signatures on ★ E-mail<EMAIL_ADDRESS>profiles, increase with decreasing rotation periods (Vidotto et al. 2014;See et al. 2015;Reiners et al. 2022). The main parameter that describes magnetic fields and activity of most M dwarfs is found to be the Rossby number , equal to the rotation period divided by the convective turnover time (e.g.Noyes et al. 1984;Wright et al. 2018 and references therein), with magnetic fields and activity increasing with decreasing until saturation occurs at ∼ 0.1 and below.Whereas large-scale fields of M dwarfs featuring < 1 have been extensively studied, very little is known about those of very slow rotators with ∼ 1 or larger. 
In this paper, we explore large-scale fields of a small sample of very slowly rotating M dwarfs, whose fields and rotation periods were inaccessible to optical instruments.The six M dwarfs were observed with the SpectroPolarimetre InfraRouge (SPIRou), the nearinfrared spectropolarimeter and velocimeter recently mounted on the Canada-France-Hawaii Telescope (CFHT), in the framework of the SPIRou Legacy Survey (SLS, Donati et al. 2020).The SLS is a Large Programme carried out with SPIRou at CFHT from early 2019 to mid 2022, with a 310-night time allocation spread over this period.The two main goals of the SLS are (i) the search for habitable Earth-like planets around very-low-mass stars and (ii) the study of low-mass star and planet formation in the presence of magnetic fields.Its long timeframe (of 7 semesters) enables us to investigate the temporal variability in the time series of the monitored M dwarfs, and in particular to independently estimate rotation periods of up to a few hundred days and to study the temporal evolution of their large-scale magnetic fields, (Fouqué et al. 2023;Bellotti et al. 2023;Donati et al. 2023, hereinafter D23). In Section 2 we will present the details of the observations and targets.To analyse the magnetic field properties and to determine the stellar rotation period, we use different methods explained in Section 3, before we present our results for each M dwarf individually in Sec.4-9.We conclude and discuss our results in Section 10. SPIROU OBSERVATIONS We analyse here a total of 986 circularly polarised spectra collected with SPIRou.The spectra span a wavelength range of 0.95-2.5m in the near-infrared with a resolving power of = 70 000.Further details about SPIRou and its spectropolarimetric capabilities can be found in Donati et al. (2020).To process the data, we used the new version of Libre ESpRIT, i.e., the nominal reduction pipeline of ES-PaDOnS at CFHT optimised for spectropolarimetry and specifically adapted for SPIRou (Donati et al. 2020). We applied Least-Squares Deconvolution (LSD, Donati et al. 1997) to all reduced unpolarised (Stokes ) and circularly-polarised (Stokes ) spectra using a M3 mask constructed from outputs of the VALD-3 database (Ryabchikova et al. 2015) assuming a temperature eff = 3500 K, a logarithmic surface gravity log = 5 and a solar metallicity [M/H].Although our 6 stars do not have the exact same atmospheric properties (see Table 1), we nonetheless used a single mask, whose impact on the LSD results is only marginal, especially in the near-infrared domain where the synthetic spectra only provide a rough match to observed ones (e.g., Cristofari et al. 2022).Besides, the mask we chose corresponds to the coolest atmospheric model available by default on the VALD-3 data base.We have selected the atomic lines with a relative depth greater than 10% and resulting in 575 lines for the mask.For further details see D23, Sec. 2. The ephemeris used to calculate the phase and rotation cycle in this paper are summarised in Tab.A1 for all targets of our sample. The six M dwarfs studied in this paper are part of the 43 star sample analysed by Fouqué et al. 
(2023) and D23.These two papers aimed at determining, whenever possible, the rotation periods of the sample targets, by applying quasi-periodic Gaussain-Process-Regression (GPR) to times series of their longitudinal fields ℓ , i.e., the line-of-sight-projected component of the vector magnetic field averaged over the visible stellar hemisphere.All six stars of our sample have well identified rotation periods according to D23, whereas Fouqué et al. (2023), using data reduced with the nominal SPIRou pipeline APERO (optimised for RV precision, Cook et al. 2022) was able to derive rotation periods for four of them (consistent with those of D23). The key stellar parameters of our sample are presented in Table 1 and are mostly extracted from Cristofari et al. (2022) who studied the fundamental parameters of these stars by analysing their SPIRou spectra. MODEL DESCRIPTION To analyse the magnetic properties of our six slowly rotating M dwarfs, we use both the method based on Principal Component Analysis (PCA) recently proposed by Lehmann & Donati (2022), as well as Zeeman-Doppler Imaging (ZDI, Donati & Brown 1997;Donati et al. 2006), applied to our set of LSD Stokes profiles.Lehmann & Donati (2022) proposed a method to retrieve key information about the large-scale stellar magnetic field directly from time series of Stokes LSD profiles without the need of an elaborate model of the field topology or several stellar parameters (e.g., the projected equatorial velocity sin or the inclination of the stellar rotation axis ).The method provides information about the axisymmetry, the poloidal / toroidal fraction of the axisymmetric component, the field complexity and their evolution with time. PCA analysis of the LSD Stokes 𝑉 profiles One first determines the mean profile of the whole Stokes time series, which stores information about the axisymmetric component of the large-scale field.In contrast to Lehmann & Donati (2022), we use the weighted mean profile providing better results for time series such as ours, where all LSD profiles do not have the same SNR.The averaged Stokes LSD profile can be decomposed into its antisymmetric component (with respect to the line centre), which is related to the poloidal component of the axisymmetric large-scale field, and its symmetric component (with respect to the line centre), which is probing the toroidal component of the axisymmetric largescale field (see e.g.Fig. 1a). To evaluate the non-axisymmetric field, we subtract the weighted mean profile (taken over all seasons) from the Stokes time series removing the signal of axisymmetric field.The resulting meansubtracted Stokes profiles store now the information about the non-axisymmetric component of the large-scale field and are analysed using a weighted PCA (Delchambre 2015) returning eigenvectors and coefficients.In the weighted PCA, the Stokes profiles are weighted by the squares of their SNRs, taking into account the different noise levels.Thus, for the long time series analysed in this paper, with uneven SNR over the 7 semesters, the weighted PCA gives better results than a classical PCA where all profiles are treated equally.The PCA coefficients, and in particular their fluctuation with time, can reveal the complexity of the large-scale field and its long-term temporal evolution. 
Given the long time range of the SLS data, we further split the Stokes time series at successive observing seasons into 2-3 seasons per star.To evaluate the evolution of the axisymmetric field from season to season, we determine the weighted mean profiles per season and compare them to one another (see e.g.Fig. 1c left column).To study the evolution of the non-axisymmetric field, we compare the coefficients of the different seasons (see e.g.Fig. 1c middle and right column).We caution that the coefficients are derived from the weighted PCA of the mean-subtracted Stokes time series using the weighted mean profile computed across all seasons (e.g.Fig. 1a) and not the weighted mean profile of each individual season (e.g.Fig. 1c left column).The usage of the weighted mean Stokes profiles of each individual season would prevent a direct comparison of the different seasons.For example, it would centre the coefficients for each season, so that we will lose the information if the nonaxisymmetric field becomes more or less positive / negative from one season to another, and also the amplitudes of the coefficients are no longer comparable.Further information about the PCA method can be found in Lehmann & Donati (2022). In addition, Lehmann & Donati (2022) showed that the sensitivity of the PCA method for toroidal fields decreases for low sin .As all our stars have sin ≤ 0.5 km s −1 , we are likely to miss large-scale toroidal fields.We provide a typical 1 error bar for the axisymmetric toroidal field for each star and each observing season of our sample.Cristofari et al. (2022) including spectral type, effective temperature eff , stellar mass, stellar radius ★ , metallicity [M/H], log .The rotation period rot is copied from D23.For the Rossby number = rot /, we use rot (8th column) and determine the convective turnover time via Wright et al. (2018, Eq. 6).In the last column, we give the projected equatorial velocity sin = 2 ★ rot sin , determined from ★ (5th column), rot (8th column) and an assumed inclination of = 60 • between the stellar rotation axis and the line-of-sight. Gaussian Process modelling of the time series We analyse the temporal evolution of the M dwarf's topology with the help of the PCA determined coefficients of the mean-substracted Stokes time series.For our slowly rotating stars, most of the time only the first eigenvector and therefore only the first coefficient shows a signal.To directly compare the temporal evolution of the coefficients with the result from the longitudinal field ℓ (presented by D23) for the individual stars, we scale and translate the first coefficient, which we call 1 , using a linear model (scaling factor and offset), that minimises the distance between the first coefficients and ℓ taking into account the measurement errors on ℓ .We can re-determine the stellar rotation period rot of our six M dwarfs using a quasi-periodic (QP) GPR fit to 1 allowing us a direct comparison with the QP GPR results of ℓ presented by D23. In contrast to D23, we use the python model presented by Martioli et al. (2022) based on george (Ambikasaran et al. 
2015). Our adapted covariance function (or kernel) is given by

k(τ) = A² exp[ −τ² / (2 l²) − sin²(π τ / Prot) / (2 β²) ],

where τ = t_i − t_j is the time difference between the observations i and j, A is the amplitude of the Gaussian Process (GP), l is the decay time describing the typical time-scale on which the modulation pattern evolves, β is the smoothing factor indicating the harmonic complexity of the QP modulation (lower values indicating higher harmonic complexity) and Prot is our new estimate of the stellar rotation period. The GP model parameters are fitted by maximising the following likelihood function L using the python package scipy.optimize:

ln L = −(1/2) [ n ln(2π) + ln |K + Σ + S| + yᵀ (K + Σ + S)⁻¹ y ],

where K is the QP kernel covariance matrix, Σ the diagonal variance matrix of c1, S the diagonal matrix s²I (with s an added amount of uncorrelated white noise, Angus et al. 2018, and I the identity matrix), n the number of observations, and y corresponds to c1. The posterior distribution of the free parameters is sampled using a Bayesian Markov Chain Monte Carlo (MCMC) framework applying the package emcee (Foreman-Mackey et al. 2013). For the MCMC, we use 50 walkers, 200 burn-in samples and 1000 samples. Tab. 2 provides a summary of the results for all c1 GPR fits in this study.

For three stars, the decay time was fixed as in D23 (see Tab. 2). Information about the assumed prior distributions and the posterior distributions for each parameter and each GPR fit can be found in appendix A. Furthermore, we applied the above GP model to the Bℓ values of D23 (see appendix B). This allows a direct comparison of the GP results for the same Bℓ dataset between our GP routine and the GP routine used by D23, as well as a comparison of the GP results for c1 and Bℓ obtained with the same GP routine. In general, we find that c1 has lower RMS, often shows lower χ²r values and provides smaller errors for Prot when the topology is not highly axisymmetric.

Zeeman-Doppler Imaging

We determined the large-scale vector magnetic field at the surface of the six M dwarfs, for each season, using ZDI. ZDI iteratively builds up the large-scale magnetic field and compares the synthetic Stokes V profiles corresponding to the current magnetic map, assuming solid-body rotation, with the observed Stokes V profiles, until it converges on the requested reduced chi-square value χ²r between the observed and synthetic data. The problem being ill-posed, i.e., with an infinite number of solutions featuring the requested agreement with the data, ZDI chooses the one with maximum entropy (i.e., minimum information in our case, Skilling & Bryan 1984). The surface magnetic field is described with the spherical harmonics expansion given in Donati et al. (2006), where the α(ℓ,m) and β(ℓ,m) coefficients of the poloidal component are modified as indicated in Lehmann & Donati (2022, Eq. B1). To compute synthetic Stokes V profiles, the stellar surface is decomposed into a grid of 1000 cells. For each cell, the local Stokes I and V profiles are determined using Unno-Rachkovsky's analytical solution to the equations of polarised radiative transfer in a plane-parallel Milne-Eddington atmosphere (Landi Degl'Innocenti & Landolfi 2004). The local profiles are integrated over the visible hemisphere for each observing phase, applying a mean wavelength of 1700 nm and a Landé factor of 1.2.
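For illustration, a minimal, self-contained sketch of the QP GPR fit described in the previous subsection, using george, scipy.optimize and emcee (the packages cited above), is given below. This is not the authors' code: the input file name, starting values, flat priors and the reparametrisation of george's ExpSine2Kernel (gamma = 1/(2β²)) are assumptions made for this example.

```python
import numpy as np
import george
from george import kernels
from scipy.optimize import minimize
import emcee

def build_gp(params, t, yerr):
    """QP GP with k(tau) = A^2 exp(-tau^2/(2 l^2) - sin^2(pi tau/Prot)/(2 beta^2))."""
    log_A, log_l, log_beta, log_Prot, log_s = params
    kernel = np.exp(2 * log_A) \
        * kernels.ExpSquaredKernel(metric=np.exp(2 * log_l)) \
        * kernels.ExpSine2Kernel(gamma=0.5 * np.exp(-2 * log_beta), log_period=log_Prot)
    gp = george.GP(kernel, white_noise=2 * log_s, fit_white_noise=True)
    gp.compute(t, yerr)              # adds the diagonal variance matrix of the data
    return gp

def neg_log_like(params, t, y, yerr):
    try:
        return -build_gp(params, t, yerr).log_likelihood(y)
    except np.linalg.LinAlgError:
        return 1e25

# hypothetical input file with observation times, c1 values and their errors
t, c1, c1_err = np.loadtxt("c1_timeseries.txt", unpack=True)

# maximum-likelihood fit, then MCMC sampling (50 walkers, 200 burn-in, 1000 samples)
p0 = np.log([np.std(c1), 150.0, 0.5, 100.0, 1e-3])          # illustrative starting point
ml = minimize(neg_log_like, p0, args=(t, c1, c1_err), method="L-BFGS-B")

def log_prob(params):
    return -neg_log_like(params, t, c1, c1_err)              # flat priors for this sketch

nwalkers, ndim = 50, len(ml.x)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
state = sampler.run_mcmc(ml.x + 1e-4 * np.random.randn(nwalkers, ndim), 200)  # burn-in
sampler.reset()
sampler.run_mcmc(state, 1000)
print(np.median(np.exp(sampler.get_chain(flat=True)), axis=0))  # A, l, beta, Prot, s
```

The priors and starting values used for the actual fits are those described in appendix A, not the placeholders above.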
For the slowly rotating M dwarfs of our sample, we see no obvious variations in the Stokes I LSD profiles beyond those attributable to radial velocity variations, so that we only use the Stokes V profiles for determining the magnetic field map via ZDI. Nevertheless, we make sure that the synthetic Stokes I profiles computed with ZDI agree well with the averaged observed Stokes I profile, especially in terms of width and depth. We assume that only a fraction of each grid cell actually contributes to the Stokes V profile. This fraction is called the filling factor fV of the large-scale field and is set equal for all cells, see also Morin et al. (2010); Kochukhov (2021). For each star, we set fV = 0.1, motivated by the results of Klein et al. (2021) for the slow rotator Proxima Centauri and the results of Moutou et al. (2017) for the SPIRou sample. We confirm the choice of fV = 0.1 by finding lower χ²r values with fV = 0.1 compared to fV = 1 for each season of the different M dwarfs. The filling factor for the Stokes I profiles is set to fI = 1.0, in consistency with the literature (Morin et al. 2010; Kochukhov 2021). The v sin i of our sample is ≤ 0.5 km s⁻¹ (see Tab. 1) and prevents us from reliably determining the inclination i of the stellar rotation axis for the M dwarfs, so that we set the inclination to i = 60° for all M dwarfs. This is motivated by the steep modulation patterns seen for Bℓ and c1 for most targets, which cannot be obtained for pole-on viewed stars. Another reason is that higher values of i are intrinsically more likely than smaller ones. We restrict the spherical harmonics of the ZDI reconstructions to ℓ ≤ 7, as we see little magnetic energy stored in ℓ ∼ 5 − 7.

Table 2. Summary of the best-fitting parameters of the QP GPR fits applied to c1 for the six M dwarfs in our sample, where rms is the root-mean-square of the residuals and χ²r the reduced chi-square value of the GPR fit. Fixed parameters are shown in italics. A comparison with the results of the GPR fit to the Bℓ data, and with those of D23, is given in Table B1. Columns: star, rotation period Prot, decay time l, smoothing factor β, amplitude A, white noise s, rms and χ²r. First entry: Gl 905, Prot = 111.7 d.

GL 905

For our analysis we use 219 Stokes V LSD profiles observed with SPIRou between 2019 Apr and 2022 June, and split the data into three seasons (2019 Apr - Dec, 2020 May - 2021 Jan, 2021 June - 2022 Jan) for the per-season analysis. The 15 profiles collected in 2022 June at the beginning of a new season, only covering 9% of a rotation cycle, were left out of the per-season analysis.

PCA analysis of Gl 905

First, we investigate the large-scale field topology using PCA (Lehmann & Donati 2022). The weighted mean profile of all Stokes V profiles is antisymmetric with respect to the line centre, indicating a poloidal axisymmetric large-scale field (see Fig. 1a). The symmetric component of Gl 905's mean profile exceeds the noise level (χ² = 1.4) but is likely due to an uneven phase coverage in the season 2020/21 (see Sec. 4.2). In Fig. 1b, we show the first two eigenvectors of the mean-subtracted Stokes V profiles, allowing the analysis of the non-axisymmetric field. Only the first eigenvector shows an antisymmetric signal with respect to the line centre. All other eigenvectors display noise.

In Fig. 2 (top), we plot the temporal evolution of c1, which appears very similar to the temporal evolution of Bℓ determined with our GP model (see Fig. 2 bottom) and to D23's results (see Fig. A12, middle, in D23). When only one eigenvector is significant, as is the case here for Gl 905, the c1 curve mimics that of Bℓ (Lehmann & Donati 2022).
Our QP GPR model of c1 finds a rotation period of Prot = 111.7 +3.0/−3.2 d and a decay time of l = 133 +18/−22 d, very similar to the values derived by D23 (Prot = 114.3 ± 2.8 d and l = 129 +25/−21 d) and Fouqué et al. (2023) (Prot = 109.5 +4.9/−5.4 d and l = 149 +26/−25 d), and also consistent with our GP fit of D23's Bℓ values (Prot = 114.4 +3.5/−2.4 d and l = 130 +25/−32 d), see Tab. 2 and B1. For consistency, we use the rotation periods found by D23 to determine the rotation phase (see Tab. A1) and to model the ZDI maps for all six stars in our sample (see Tab. 1).

In Fig. 1c, we plot the mean profiles (left column) and the phase-folded coefficient curves colour-coded by rotation phase (middle and right columns) for the three seasons (one season per row). They exhibit large changes in the large-scale field topology from season to season, allowing us to draw first conclusions about the field evolution of Gl 905. We recall that the coefficients for all three seasons are computed from the mean-subtracted Stokes V profiles, using the weighted mean derived from the full data set, and not from the profiles of each season. The same applies to the five M dwarfs discussed in Secs. 5-9.

The mean profiles of the first two seasons (2019 and 2020/21) are antisymmetric with respect to the line centre and indicate a mostly poloidal axisymmetric field, although the symmetric component is larger for 2020/21 (see Fig. 1c). This may reflect an increasing toroidal field but is more likely due to the uneven phase coverage of this season (with more than 75% of the observations concentrating between phase 0.3 and 0.75).

For the first season 2019, c1 features a roughly sinusoidal behaviour indicating a mainly dipolar configuration. For 2020/21, c1 appears more complex than for 2019, implying that the field becomes more complex, too. For the last season 2021/22, the topology changes more drastically: the mean profile is close to zero, indicating a much lower axisymmetric component than before. The phase at which c1 reaches its maximum is shifted, with c1 being mostly positive now, while it was mainly negative before. Considering the sign of the mean profile and of the eigenvector, this suggests that the main polarity of the large-scale field is evolving from a predominantly negative polarity to a positive polarity. We can conclude from the PCA analysis that the large-scale field topology becomes more complex from 2019 to 2020/21, before it becomes mostly non-axisymmetric and possibly initiates a polarity reversal.

ZDI reconstructions of Gl 905

We conclude our analysis by deriving vector magnetic field maps for Gl 905 using ZDI, for each of the three main observing seasons. The maps are shown in Fig. 3 and their magnetic properties are summarised in Table 3. We were able to fit all three ZDI maps down to χ²r ≈ 1.0 with the stellar parameters given above. The ZDI maps confirm the conclusions we derived from the PCA analysis. The topology indeed gets more complex from 2019 to 2020/21 and the degree of axisymmetry decreases from around 70% to 4% for the last season 2021/22. The surface mean magnetic field decreases from 128 G to 64 G. Most prominent is the hint of an ongoing polarity reversal from negative to positive radial field taking place in the last season.

To test whether the symmetric component of the mean profile for season 2020/21 indeed results from an uneven phase coverage, we simulate 24 evenly phased Stokes V LSD profiles from the 2020/21 ZDI map (see Fig. 3, middle column) and determine the corresponding mean profile and its symmetric and antisymmetric components (see Fig. C1). The symmetric component disappears with even phase sampling, confirming that the mean LSD profile provides no observational hint for a large-scale axisymmetric toroidal field at the surface of Gl 905.
The reconstructed surface averaged toroidal field ⟨Btor⟩ is lower than 10 G for each season. To derive a 1σ error bar on the simplest possible large-scale axisymmetric toroidal field (described with spherical harmonics coefficients ℓ = 1 and m = 0) at the stellar surface, we proceed in the following way: (1) artificially add an axisymmetric toroidal field of strength ⟨Btor⟩ to the reconstructed ZDI map, (2) simulate the corresponding Stokes V LSD profiles with the phase coverage and SNR of the actual observations and (3) compute the new χ² with respect to the observed LSD profiles and raise ⟨Btor⟩ until χ² is increased by 1 with respect to the optimal ZDI fit. We find 1σ error bars ranging from 180 G in 2019 down to 55 G in 2020/21.

GJ 1289

The fully convective M dwarf GJ 1289 (M★ = 0.21 ± 0.02 M⊙, Cristofari et al. 2022) is the next star in our sample. SPIRou observed GJ 1289 from 2019 Sept until 2022 June, providing a time series of 204 LSD profiles split into three seasons (2019 June - Dec, 2020 May - 2021 Jan, 2021 June - 2022 Jan) for the per-season analysis. As for Gl 905, the 14 profiles collected in 2022 June at the beginning of a new season, only covering 16% of a rotation cycle, were left out of the per-season analysis.

PCA analysis of GJ 1289

The mean profile is perfectly antisymmetric with respect to the line centre, indicating a dominant axisymmetric poloidal component (see Fig. 4a). Both the first and second eigenvectors are found to be antisymmetric with respect to the line centre, which is a strong hint of a non-axisymmetric poloidal component (see Fig. 4b). All further eigenvectors trace noise.

The results of the QP GPR fit applied to c1 are listed in Tab. 2. In Fig. 4c, we show the mean profile and the phase-folded coefficients split by season. Comparing the mean profiles of the three seasons, the axisymmetric component grows in amplitude and always stays poloidal. As the amplitude of the coefficients increases as well, the magnetic field becomes stronger overall.

The phase-folded coefficient curves indicate a rapidly evolving and complex large-scale field, as we see variations from one rotation cycle to the next (see season 2020/21) and trends that are more complex than sine waves (e.g. for 2019 and 2020/21). Furthermore, season 2020/21 stands out, with the second eigenvector contributing significantly to the Stokes V signal before disappearing again for the last season 2021/22, for which c1 shows a simpler trend.

Table 3. Magnetic properties of Gl 905 extracted from the ZDI maps per season: the start and end month of the observations used for the ZDI maps, the surface averaged unsigned magnetic field ⟨B⟩ [G], the surface averaged unsigned dipole magnetic field ⟨Bdip⟩ [G], the typical 1σ error bar on the ZDI reconstructed surface averaged toroidal field ⟨Btor⟩, the fractional energy of the poloidal field, the fractional energy of the axisymmetric field (only m = 0 modes), the fractional energy of the dipole component, the tilt angle of the dipole (ℓ = 1) with respect to the negative pole, the phase at which the dipole field faces the observer, the reduced χ² values for the Stokes V profiles (χ²V,B=0, corresponding to a B = 0 G fit), for the ZDI fit of the Stokes V profiles (χ²V,ZDI) and for the Null profiles (χ²N), and the number of observations per season (nb. obs).
ZDI reconstructions of GJ 1289

We were able to fit the Stokes V profiles for all seasons down to χ²r ≈ 1.0, assuming Prot = 73.66 d, v sin i = 0.14 km s⁻¹, i = 60° and fV = 0.1. In the first season, the data set only includes about half of the observations of the two other seasons and the achieved χ²V,B=0 = 1.46 is several times lower than the χ²V,B=0 of the following seasons (see Tab. 4). In the first season, ZDI reveals a weak, marginally complex field topology. In 2020/21, the surface averaged field ⟨B⟩ becomes twice as large due to a growing axisymmetric poloidal dipole, and ZDI reconstructs a more complex azimuthal field, featuring a quadrupolar non-axisymmetric azimuthal structure (see Fig. 6). In the last season 2021/22, the dipole tilts more strongly, to 39°, and dominates the field topology (see Tab. 4). The toroidal field of the ZDI maps varies between 7 − 25 G, which is again lower than the typical 1σ error bar that we derive, ranging from 40 to 100 G.

PCA analysis of GJ 1151

The mean profile is close to zero, indicating a strongly non-axisymmetric topology (see Fig. 7a). Only the first eigenvector of the mean-subtracted Stokes V profiles significantly differs from the noise and features an antisymmetric signal with respect to the line centre (see Fig. 7b).

The QP GPR model fits c1 down to a χ²r = 0.99 (see Fig. 8 top). We fix the decay time to 300 d, similar to D23, and find the rotation period listed in Tab. 2, in agreement with D23. The mean profile of the first two seasons is antisymmetric with respect to the line centre (axisymmetric poloidal field) and is relatively weak (see Fig. 7c). The coefficient c1 shows no obvious trend with phase for the first season 2019/20 and just starts to display a weak variation with phase for 2020/21. The low amplitude of the mean profiles and coefficients indicates that the magnetic field must be very weak during the first two seasons. For the last season 2021/22, the amplitude of the mean profile is twice as high as before and c1 also shows a higher amplitude (χ² = 7.9), indicating that the magnetic field increases significantly for 2021/22. We also notice that the sign of the mean profile (and therefore the projected main polarity of the large-scale magnetic field) changed from negative to positive for the last season, hence why the mean profile over the whole time series is close to zero (see Fig. 7a).

ZDI reconstructions of GJ 1151

For the first two seasons, the Stokes V profiles are weak (see Fig. C6) and so is the reconstructed field, with a dominant negative polarity in the upper hemisphere that is consistent with the corresponding mean profiles. We see a small increase of ⟨B⟩ in the second season 2020/21, explaining the higher amplitude of c1 seen in the PCA analysis (see Fig. 7c). For the last season 2021/22, the ZDI map shows a strongly tilted dipole (tilt angle of 55°) that flipped polarity, and the surface averaged field is more than twice as high as before (⟨B⟩ = 63 G). To the best of our knowledge, this is the first polarity reversal seen in the vector magnetic field map of an M dwarf.

The reconstructed toroidal field ⟨Btor⟩ is lower than 8 G for all seasons of GJ 1151 and we find that the 1σ error bar on the toroidal field ranges between 370 and 450 G.

PCA analysis of GJ 1286

The mean profile of GJ 1286 is antisymmetric with respect to the line centre and appears noisier than usual, but clearly indicates a purely axisymmetric poloidal field (see Fig. 10a). The first eigenvector has an antisymmetric shape too, and is the only one that emerges from the noise (see Fig. 10b).
The best-fitting model of the QP GPR for c1 finds a Prot = 186.8 +9/−13 d. The mean profile of season 2020 is nearly twice as high as for 2021 (see Fig. 10c). Also, c1 shows a lower amplitude for 2021. We can therefore conclude that the surface averaged field decreases for 2021. The field topology becomes simpler, as the c1 curve gets less complex with phase for 2021.

ZDI reconstructions of GJ 1286

As concluded from the PCA analysis, the ZDI maps confirm that the topology becomes simpler and weaker: the fractional energy of the dipole component fdip increases from 0.73 to 0.79, while ⟨B⟩ decreases by almost half, from 113 G to 71 G (see Fig. 12 and Tab. 6). The reconstructed toroidal fields ⟨Btor⟩ of the two seasons are 9 G and 6 G, respectively, while the typical 1σ error bars on the axisymmetric toroidal field are 325 G and 300 G, respectively.

GL 617B

Gl 617B (EW Dra, HIP 79762, LHS 3176) is a partly convective M dwarf with M★ = 0.45 ± 0.02 M⊙ (Cristofari et al. 2022) and was observed between 2019 Sept and 2022 June with SPIRou. Our following analysis is based on 144 LSD Stokes V spectra, which we split into three seasons (2020 Feb - Oct, 2021 Jan - July, 2022 Mar - June) for the per-season analysis. As for the other stars, the first 15 spectra collected in 2019 were left out of the per-season analysis.

PCA analysis of Gl 617B

We find that the mean profile is much larger than the mean-subtracted Stokes V profiles, indicating that the axisymmetric component of the magnetic field is dominant (see Fig. 13a). The mean profile is again antisymmetric with respect to the line centre, indicating an axisymmetric poloidal field. The first eigenvector is already very noisy and is the only one that shows a clear signal, confirming that the field is indeed dominantly axisymmetric (see Fig. 13b).

The QP GPR model fits c1 down to a χ²r = 0.66, finding a rotation period of 37.8 +8.5/−2.6 d, in agreement with the results of D23 (Prot = 40.4 ± 3.0 d) and our GP fit of Bℓ (Prot = 40.6 +2.1/−4.4 d, see Fig. 14). However, the decay time for the GP fit of c1, with l = 35 +8/−4 d, is shorter than the results determined from the Bℓ curves (l = 69 +35/−23 d for D23 and l = 82 +45/−30 d from our own fit of Bℓ). Fouqué et al. (2023) found no clear periodic variation using the APERO pipeline reduced spectra of Gl 617B.

The mean profile is antisymmetric with respect to the line centre, and therefore poloidal dominated, for all three seasons, but varies in amplitude (see Fig. 13c, left column). Nonetheless, c1 traces a varying non-axisymmetric component. Season 2020 shows the highest range in amplitude of c1, indicating the largest dipole tilt angle of all three seasons, although it will still be small (< 20°) due to the predominantly axisymmetric topology of Gl 617B.

ZDI reconstructions of Gl 617B

We could fit Gl 617B down to χ²r ≈ 1.0, assuming Prot = 40.4 d, fV = 0.1, v sin i = 0.50 km s⁻¹ and i = 60°. The ZDI maps confirm a very axisymmetric, poloidal configuration. The axisymmetry is always equal to or greater than 97%, so variations of the non-axisymmetric field are difficult to see, but appear largest in 2021 (see Tab. 7). The data set in season 2020 shows the largest tilt angle (7°), as predicted by the PCA analysis. The reconstructed toroidal field ⟨Btor⟩ reaches 9 G for 2020, 6 G for 2021 and 2 G for 2022, while the estimated 1σ error bar on the axisymmetric toroidal field is about 7 G in 2020, 13 G in 2021 and 6 G in 2022, a lower uncertainty than for the other M dwarfs thanks to the higher v sin i = 0.5 km s⁻¹ of Gl 617B. We see ⟨B⟩ changing by approximately ±25 G over the three seasons; otherwise the main properties of the maps are similar (see Tab. 7).

For highly axisymmetric topologies, it is difficult to infer the inclination i. It may be that i is actually lower for Gl 617B. We provide the ZDI maps for an inclination i = 30° and v sin i = 0.29 km s⁻¹, while otherwise using the same parameters (see Fig.
C2). The χ²r values of these ZDI fits are slightly higher for the lower inclination i = 30° than for i = 60°.

GL 408

The Stokes V time series of Gl 408 was split into three seasons for the per-season analysis. As for the other stars, the first 17 spectra collected in early 2019 were left out of the per-season analysis.

PCA analysis of Gl 408

Gl 408 has the strongest mean profile compared to the mean-subtracted profiles in our sample (see Fig. 16a and C9). The first eigenvector of Gl 408 (the only one showing a signal) is already noisy, a strong indication of a very axisymmetric field topology (see Fig. 16b). Fig. 17 (top) presents the QP GPR fit of c1, which gives a Prot = 175 +12/−14 d, similar to D23 who determine Prot = 171.0 ± 8.4 d. Fitting Bℓ with our GP routines, we derive a Prot = 170.7 +7.1/−9.8 d (see Fig. 17 bottom). The decay time was fixed at 200 d for both variables, following D23. However, we find a decay time of l ≈ 200 ± 70 d but higher χ²r for GPR fits without fixing the decay time. The APERO reduced spectra of Gl 408 did not allow Fouqué et al. (2023) to determine a rotation period.

In Fig. 16c, we see that c1 is mostly flat for all three seasons, again indicating a highly axisymmetric topology. All mean profiles are antisymmetric with respect to the line centre and show an axisymmetric poloidal large-scale field.

ZDI reconstructions of Gl 408

All three seasons could be fitted down to χ²r ≈ 1.0, assuming Prot = 171.0 d, fV = 0.1, v sin i = 0.10 km s⁻¹ and i = 60°. The topology changes little over the three seasons and is characterised by a strong, axisymmetric, poloidal dipole of negative polarity (see Fig. 18). It is the most stable topology in our sample and only ⟨B⟩ varies marginally, between 106 − 130 G (see Tab. 8). We find a 1σ error bar on the axisymmetric toroidal field of 55 − 81 G for Gl 408, whereas the reconstructed ⟨Btor⟩ ranges between 4 − 13 G. Similar to Gl 617B, we also determine the ZDI maps for an inclination of i = 30° and v sin i = 0.06 km s⁻¹ (see Fig. C3). The χ²r values of the ZDI fits are again slightly higher for the lower inclination i = 30° than for i = 60°.

SUMMARY, DISCUSSION & CONCLUSIONS

In this paper, we study the large-scale magnetic field of six slowly rotating mid to late M dwarfs observed with SPIRou at the CFHT as part of the SLS from 2019 to 2022. The 3.5-yr time series, including ≈ 100 − 200 polarimetric spectra for each of our six M dwarfs, allowed us to confirm their rotation periods and to investigate their magnetic field topology using both our PCA analysis and ZDI. We use the reduced observations from D23 but different analysis tools to re-determine the rotation period. Our estimate of the rotation periods using c1, i.e., the scaled and translated first coefficient of the PCA analysis (see Sec. 3.2), agrees with the results of D23 and Fouqué et al. (2023). We confirm that both Gl 617B and Gl 408, for which Fouqué et al. (2023) did not recover a rotation period, host very axisymmetric topologies with faxi ≥ 0.97 between 2019 and 2022. The higher the axisymmetry of the large-scale field, the smaller are the variations of Bℓ or c1 with time, and the harder it is to determine a rotation period. For the highly axisymmetric topologies, we find that the χ²r of the GPR fits increases, reflecting that in such cases the Bℓ curves are more sensitive to intrinsic variability and less to rotational modulation, thereby reducing the ability to measure rotation periods (see e.g. Fig. A5 or A6 and Tab. B1).
Using the PCA analysis, we derive information about the axisymmetry and complexity directly from the LSD Stokes V time series, in agreement with the results obtained from the ZDI maps for all six M dwarfs, while PCA does not rely on any assumptions about stellar parameters such as v sin i, inclination, etc. We find evidence for a polarity reversal of the large-scale field (via sign changes of Bℓ, c1 or of the mean profiles) taking place on GJ 1151 and possibly also on Gl 905, for which the axisymmetric component collapsed during the last season (to be confirmed with new, ongoing, observations). For most stars, PCA traces the time-evolving field topologies using only the first eigenvector. For GJ 1289, we even detect two evolving field components directly from the Stokes V time series. This highlights that we are able to reliably detect topological complexity in the magnetic fields of slowly rotating M dwarfs directly from the observed LSD Stokes V profiles. The lower the v sin i, the higher the 1σ error bar on the toroidal field. The typical 1σ error bars on the toroidal field range from 6 to 450 G, depending on SNR and v sin i.

We determined the ZDI maps for each season of our targets, obtaining a total of 17 vector magnetic field maps. The ZDI maps of GJ 1151 and Gl 905 confirm the polarity switches that were diagnosed with PCA, and further show that GJ 1151 may have been in a magnetically quiescent state until it became more magnetic in 2022, switching polarity at the same time.

The slowly rotating M dwarfs of our sample show large-scale field strengths in the range ⟨B⟩ ≈ 20−200 G. They show ⟨B⟩ similar to that of faster rotating M dwarfs in the saturated regime. We add our sample to the ⟨B⟩ versus Rossby number diagram of Vidotto et al. (2014) (see Fig. 19); our stars host large-scale fields that are unusually strong for rotation periods of up to (and beyond) 100 days. Besides, this implies a harsher interplanetary environment for potential close-in planets (e.g., Kavanagh et al. 2021). We stress that our paper focused on the most magnetic M dwarfs of the SLS sample (e.g., D23), whereas the other (less magnetic) stars of this sample will presumably be more in line with (and fill the gap between) the higher-Ro stars of the Vidotto et al. (2014) sample. This will be the subject of forthcoming studies. Besides, our results suggest that the large-scale fields of the very slowly rotating M dwarfs of our sample are likely generated through dynamo processes operating in a different regime than those of the faster rotators that have been magnetically characterized so far. Fig. 20 summarises the properties of the large-scale magnetic field topology for our six M dwarfs, displaying all seasons on top of each other. It can be seen that the two partly convective M dwarfs (Gl 617B and Gl 408) show a smaller range of variations compared to the fully convective stars. The fully convective M dwarfs host large-scale fields that evolve on timescales comparable to their rotation periods. Our small sample suggests that fully convective, slowly rotating M dwarfs tend to have large-scale fields that are less axisymmetric than their more massive counterparts.
In conclusion, we have analysed six slowly rotating M dwarfs observed by the SLS over 3.5 years. We find that the large-scale magnetic field of these M dwarfs is unusually strong despite their slow rotation (40-190 d) and suggest that the efficiency of the dynamo for mid and late M dwarfs depends on the Rossby number in a different way than that reported in the literature for faster rotators. Furthermore, we find that the large-scale magnetic field topologies of the fully convective M dwarfs exhibit a larger range of variations than those of the two partly convective targets of our sample. Given this, it may be useful in the future to apply time-dependent ZDI (Finociety & Donati 2022), which has only been tested for faster rotating stars up to now. We detected a polarity reversal on one (GJ 1151) and possibly two (Gl 905) of the 4 fully convective stars of our sample, suggesting that magnetic cycles may indeed be occurring in such stars, as initially suggested by Route (2016) from radio observations. Further long-term observations of the same type are needed to document in a more systematic fashion the long-term evolution of the large-scale magnetic fields of M dwarfs, and whether these field topologies vary cyclically like for the Sun or in a more random fashion.

Figure 1. The PCA analysis for Gl 905. a. The mean profile (red) for all observations and its decomposition into the antisymmetric (blue dashed) and symmetric (yellow dotted) components (with respect to the line centre), related to the poloidal and toroidal axisymmetric field, respectively. This mean profile is used to determine the mean-subtracted Stokes V profiles to which we apply PCA, yielding the eigenvectors and coefficients shown in panels b and c. b. The first two eigenvectors of the mean-subtracted Stokes V profiles. c. The mean profile (left column), c1 (the scaled and translated first PCA coefficient introduced in Sec. 3.2, middle column) and the coefficients of the second eigenvector (right column) for each season (one season per row). The mean profiles of the individual seasons are plotted in the same format as above. The coefficients are colour-coded by rotation cycle.

Figure 2. Temporal variations of c1 (top) and of the longitudinal field Bℓ (bottom) for Gl 905. We show the QP GPR fit and its 1σ area as a green shaded region in the top panel and the residuals in the bottom panel for both variables. The plot symbols are colour-coded by rotation cycle. The vertical grey lines separate the analysed seasons. The grey shaded region indicates a season for which not enough data were available for a reliable PCA and ZDI analysis.
Figure 3. The magnetic field maps of Gl 905 shown in a flattened polar view for the radial (top row), meridional (middle row) and azimuthal (bottom row) components. In each plot, the visible north pole is in the centre, the thick line depicts the equator and the dashed lines the latitudes in 30° steps. The ticks outside the plot illustrate the observing phases. The different seasons are shown next to each other (one season per column). The colour bar below the third row is used for all maps and indicates the magnetic field strength in G. The bottom panel summarises the main characteristics of the large-scale field of Gl 905 and its evolution with time. For each season, the symbol size indicates the magnetic energy, the symbol shape the fractional energy in the axisymmetric component and the symbol colour the fractional energy stored in the poloidal component of the field (see legend to the right).

Figure 9. Same as Fig. 3 for GJ 1151. Note that the magnetic field switches polarity.

Figure 19. The averaged unsigned magnetic field strength ⟨B⟩ versus Rossby number Ro as shown by Vidotto et al. (2014) (in grey scales), including our sample of slowly rotating M dwarfs (red circles). Note that for this figure only, we determined τ using Wright et al. (2011) for consistency with Vidotto et al. (2014). For further details and the coloured version of the original symbols and annotations, see Fig. 4a of Vidotto et al. (2014).

Figure A1. The posterior density resulting from the MCMC sampling of the best-fitting QP GPR model for c1 (top) and Bℓ (bottom) for Gl 905. The concentric circles within each panel indicate the 1, 2 and 3σ contours of the distribution. The blue lines mark the mode (maximum) of the posterior distribution and the black dashed lines the median and the 16% and 84% percentiles of the posterior probability density function (PDF).

Figure C1. The mean profile and its decomposition obtained from 24 uniformly phased synthetic Stokes V LSD profiles of the 2020/21 ZDI map of Gl 905. The same format as in Fig. 1a is used.

Figure C2. The magnetic field maps of Gl 617B using an inclination i = 30°, presented in the same format as in Fig. 3.

Figure C3. The magnetic field maps of Gl 408 using an inclination i = 30°, presented in the same format as in Fig. 3.

Figure C4. The SPIRou observed LSD Stokes V profiles (black) and the ZDI fit (red) for Gl 905. The rotation phase is indicated to the right of each profile and the error bar to the left. Each panel corresponds to one season.

Table 1. The stellar characteristics of our sample are from Cristofari et al. (2022).

Table 4. Same as Table 3 for GJ 1289. The tilt angle now refers to the positive pole.

Table 6. Same as Table 4 for GJ 1286.
An Investigation of Surface Corrosion Behavior of Inconel 718 after Robotic Belt Grinding

Surface corrosion resistance of nickel-based superalloys after grinding is an important consideration to ensure the service performance. In this work, robotic belt grinding is adopted because it offers controllable material processing by dynamically controlling process parameters and the tool-workpiece contact state. The surface corrosion behavior of Inconel 718 after robotic belt grinding was investigated by electrochemical testing in 3.5 wt % NaCl solution at room temperature. Specimens were systematically characterized in terms of morphology, surface roughness and residual stress. The potentiodynamic polarization curves and electrochemical impedance spectroscopy (EIS) analysis indicate that the corrosion resistance of the specimen surface improves remarkably with the decrease of abrasive particle size. This can be attributed to the change of surface roughness and residual stress. The energy dispersive X-ray spectroscopy (EDS) results indicate that niobium (Nb) is preferentially attacked in the corrosion process. A plausible electrochemical dissolution behavior for Inconel 718 processed by robotic belt grinding is proposed. This study is of significance for achieving the desired corrosion properties of a work surface by optimizing grinding process parameters.

Introduction

Inconel 718 superalloy has been widely utilized in gas turbine engineering, submarines, nuclear reactors, and oil and gas production parts because of its good service performance [1][2][3]. For ship gas turbines and submarines, which mainly work in the marine environment, the corrosion behavior of Inconel 718 must be fully understood to adopt effective countermeasures ensuring the safety of the alloy system. In addition, a good corrosion resistance is beneficial to the service life of the parts because most cracks commence at the surface of the material. Corrosion performance is mainly affected by material composition [4] and surface states such as surface roughness and surface stress [5]. The high nickel content of Inconel 718 makes the alloy relatively resistant to chloride stress cracking corrosion. At the same time, due to the presence of chromium, its corrosion resistance is even better than that of pure nickel in oxidizing environments. Wang et al. [6] investigated the corrosion behavior of Inconel 718 in electrochemical machining, suggesting that the generated niobium carbide and niobium oxide have an important influence on the corrosion process of the nickel-based superalloy. Jebaraj et al. [7] reported on hydrogen permeation of Inconel 718 in different states and found that the hydrogen trapping in cold rolled and precipitation hardened Inconel 718 is irreversible. These studies objectively analyze the chemical elements influencing the corrosion performance of the alloy. Additional effective measures need to be devised to improve corrosion performance. In order to enhance the corrosion resistance of nickel-based superalloys, various processing technologies have been employed globally. Karthik et al. [8] reported enhanced corrosion performance in Inconel 600 through laser shock peening without coating. They found that larger and deeper compressive residual stresses and smaller surface roughness are the main reasons for the increased corrosion performance. However, this study was conducted on Inconel 600, and was not supported by an actual corrosion experiment, such as immersion tests. Huang et al.
[9] investigated the electrochemical corrosion behavior of Inconel 718 sheets treated by electron beam welding in 3.5 wt % NaCl solution. To reduce the adverse impact on corrosion performance, they adopted a method of subsequent heat treatment. Khan et al. [10] studied the corrosion performance of Inconel 718 in simulated body fluid. The results show that a laser surface-modification process can improve the corrosion resistance of the material effectively. Akyol et al. [11] deposited Ni-P-W-CNF (carbon nanofibers) composites on workpieces by an electroless method, and obtained good wear and corrosion resistance. However, this improvement approach causes pollution. Narayanan et al. [12] analyzed the change of impingement angle to compare the corrosion resistance of a laser-surface-treated nickel-based superalloy. The corrosion performance was increased by about 1.5 times due to the minimized energy transfer of the laser. It is a feasible process to improve erosion resistance under the premise of ensuring efficiency. Arrizubieta et al. [13] combined laser material deposition and laser surface processes for the complete manufacturing of Inconel 718 components. The surface quality was improved and the roughness was reduced. Nevertheless, the process is relatively complicated. Karthik et al. [5] reviewed laser peening without protective coating as a mechanical surface modification method. The method can significantly improve the corrosion resistance of metallic materials due to factors including surface roughness and compressive residual stress. These studies analyze the corrosion properties and influencing factors of nickel-based alloys under different conditions. However, the methods for improving corrosion resistance require additional processing, such as material deposition, laser peening, etc. These measures not only consume time and resources, but also inevitably increase production costs. It is desired that a robotic belt grinding process could improve the corrosion performance by optimizing its process parameters, without additional processing. Robotic belt grinding offers controllable material processing by dynamically controlling process parameters and the tool-workpiece contact state, which is otherwise difficult to achieve by manual grinding [14]. While a conventional CNC grinding machine offers position control, a robotic grinding system can readily incorporate both position and force control. As a result, the desired profile finishing accuracy and surface properties can be obtained by online tool condition monitoring and optimal process control [15,16]. As such, we have adopted robotic belt grinding in this study. Pradhan et al. [17] analyzed the influence of surface roughness on the corrosion behavior of Inconel 718 in a simulated marine environment at high temperature for a relatively long time. The results show that a higher roughness increases the surface area for the corrosion behavior and reduces the corrosion resistance of the workpiece. However, roughness is only one of the contributing factors affecting corrosion resistance. Based on our previous study [18], we found that the finished surface obtains a considerable residual stress, which has a significant impact on surface corrosion performance. This phenomenon, which is neither fully understood nor discussed by existing works, is investigated in detail in this paper.
By controlling appropriate parameters, the robotic belt grinding process can achieve the ideal compressive residual stress in the sub-surface and a lower surface roughness, which improves surface corrosion performance. To the best of our knowledge, the mechanism by which residual stress affects the corrosion of Ni-based superalloys has not been reported elsewhere. Tressia et al. [19] investigated the influence of abrasive particle size and different aqueous solutions on the abrasive wear mechanisms in the grinding process. Turnbull et al. [20] reported the sensitivity of stress corrosion cracking of stainless steel to the surface machining and grinding procedure. The above research works, mostly based on manual grinding, suffer from its uncontrollability and inconsistency. Robotic belt grinding is a relatively new precision machining technique, and has attracted great attention for its advantages over the conventional manual grinding procedure [21,22]. Although great progress on the corrosion behavior of nickel-based superalloys has been made, the related research on the surface corrosion behavior and influencing factors of nickel-based superalloys after robotic belt grinding has barely been reported. In this work, electrochemical corrosion of Inconel 718 was investigated to evaluate the corrosion behavior of the alloy after robotic belt grinding. The experiment was carried out in 3.5 wt % NaCl solution, which is a commonly used corrosive solution in electrochemical corrosion experiments [23]. The potentiodynamic polarization curves and electrochemical impedance spectra (EIS) were measured. Furthermore, the corroded specimens were examined by scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS). The influencing factors, including surface roughness and residual stress, are discussed in detail.

Materials

The nickel-based superalloy Inconel 718 used in this study was provided by Baoshan Iron & Steel Co., Ltd. (Shanghai, China). The Inconel 718 was in the hot-forged state, annealed at 980 °C for 10 min and then air cooled. The materials were then acid pickled. The acid used is a mixed solution with a concentration of 10 vol % HNO3 + 7 vol % HF. The purpose of acid pickling is to remove the oxide formed on the surface of the material during heat treatment. The chemical composition was investigated by inductively coupled plasma-optical emission spectroscopy (ICP-OES) and the results are listed in Table 1. The as-received material was cut into square bars with dimensions of 15 mm × 15 mm × 500 mm using wire electric discharge machining. The annealing process improves the ductility and formability, which is meaningful for the subsequent processing.

Experiment Set-Up

The robotic belt grinding system used in this study mainly consists of four components: a control cabinet, an industrial PC, a FANUC (Fuji Automatic Numerical Control) robot with a force control sensor, and a belt machine, as shown in Figure 1. The abrasive belt was supplied by 3M China Limited, and its abrasive particles are Al2O3 grains bonded to an elastic, fibre-reinforced paper backing.
The prepared specimens were ground with the optimum processing parameters of a grinding pressure of 178 kPa and a belt speed of 21 m/s. Three kinds of belts with different particle sizes were selected, namely 36, 80 and 120 M, corresponding to grain sizes of about 500, 178 and 125 µm, respectively. The mesh number is inversely proportional to the particle size: the larger the mesh number, the smaller the abrasive particle. Therefore, there are three sets of parameters in this experiment: 36 M with 178 kPa and 21 m/s, 80 M with 178 kPa and 21 m/s, and 120 M with 178 kPa and 21 m/s.

Test Methods

After the grinding process, a series of tests was carried out. The surface roughness of the ground specimens was tested with a roughness tester (SJ-410, Mitutoyo, Kawasaki, Japan), with the measuring direction perpendicular to the grinding direction. The residual stress states on the ground surface were measured by an X-ray stress analyzer (LXRD, Proto, Sacramento, CA, USA) with Mn-Kα radiation and a Cr filter, using the standard sin²ψ method at 18 kV (voltage) and 4 mA (current). The (311) plane with a 2θ of 165.32° was chosen to track the shifts of the diffraction peaks. To evaluate the corrosion resistance of the alloy, electrochemical testing was performed at room temperature in 3.5 wt % NaCl solution open to air.
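As an aside, the sin²ψ evaluation mentioned above essentially reduces to a linear regression of the measured lattice spacing against sin²ψ. The sketch below (Python/numpy) illustrates the idea with made-up d-spacing values and assumed elastic constants for Inconel 718; it is not the analyser's actual output or calibration.

```python
import numpy as np

# Hypothetical measurements: (311) lattice spacing d at several psi tilts.
psi_deg = np.array([0.0, 15.0, 25.0, 35.0, 45.0])              # tilt angles
d_psi   = np.array([1.0803, 1.0801, 1.0799, 1.0796, 1.0793])   # d-spacing in Angstrom (made up)

# Assumed X-ray elastic constants for Inconel 718 (take real values from calibration/literature).
E  = 200e3   # Young's modulus in MPa (assumption)
nu = 0.29    # Poisson's ratio (assumption)

# For a biaxial surface stress state, d(psi) is linear in sin^2(psi):
#   d_psi = d_n + d_n * (1 + nu)/E * sigma * sin^2(psi)
# so sigma = slope * E / ((1 + nu) * d_n), with d_n taken as the intercept at sin^2(psi) = 0.
x = np.sin(np.radians(psi_deg)) ** 2
slope, intercept = np.polyfit(x, d_psi, 1)
sigma = slope * E / ((1.0 + nu) * intercept)    # residual stress in MPa

print(f"slope = {slope:.3e} A per unit sin^2(psi), sigma ~ {sigma:.0f} MPa")
# A negative slope (d decreasing with sin^2 psi) indicates a compressive residual stress.
```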
The samples were held in the corrosive solution for 65 min, including 20 min of immersion time and 45 min of corrosion time. The measurements were conducted using a CHI 660E electrochemical system with a saturated calomel electrode (SCE), a Pt wire and the ground specimen as reference, auxiliary and working electrodes, respectively. The dimension of the test specimen was 10 mm × 10 mm, giving an exposed area of 100 mm². Potentiodynamic polarization testing was performed over an applied potential ranging from −1.2 V to 1.5 V at a sweep rate of 1 mV/s. Each potentiodynamic polarization measurement was repeated three times to obtain a reliable result. The Tafel extrapolation method was used to calculate the corrosion potential (Ecorr) and corrosion current (Icorr). EIS testing was conducted with the same device under the same conditions. The perturbation voltage was 10 mV and the frequency ranged from 100 kHz to 10 mHz. The EIS test was carried out at the open circuit potential (Ecorr) and was also repeated three times.

Immersion tests were also conducted after the grinding process. The ground specimens of 15 mm × 15 mm were rinsed carefully with acetone, ethanol and distilled water in turn. The immersion tests were carried out in 3.5 wt % NaCl solution at room temperature. Corrosion products were removed by chemical cleaning with 10 vol % HNO3 + 7 vol % HF. The mass loss of the specimens was measured by an analytical balance (FA124, Sunny Hengping, Shanghai, China) with a precision of 10 µg. Each mass loss was measured three times to obtain a reliable result. Morphology characterization was accomplished using a JSM-7600F field-emission scanning electron microscope (SEM, JEOL, Akishima, Japan) at an accelerating voltage of 5 kV and a probe current of 2 × 10⁻¹⁰ A. The SEM was equipped with an energy dispersive X-ray spectroscopy (EDS) system to identify the composition of the specimen surface. The accelerating voltage was 20 kV and the working distance 15 mm.

Electrochemical Analysis

Figure 2 shows the potentiodynamic polarization curves of the ground surfaces with different abrasive particle sizes. The potentiodynamic polarization curves exhibit similar shapes for the three specimens and there are obvious passivation regions in accord with the current response [24]. The current density has an increasing trend as the applied anodic voltage increases. At the working electrodes, there exists a cathodic reduction process, which is a hydrogen evolution reaction [25]. In the anodic regions, there is a typical active-passive-transpassive behavior, displaying a limiting current density with the increase of the corrosion potential. This is owing to the existence of a compact oxidation film on the ground surface [26]. The passivation region begins near the corrosion potential and there is a rapid increase of current density when the applied potential increases to a certain value around 1.25 V, due to the dissolution of the oxidation film. In addition, pitting corrosion occurs, as indicated by the steep increase of the current. Simultaneously, the oxidation reaction of Inconel 718 proceeds, leading to a passivating oxidation film on the specimen surface. To compare the corrosion performance of the ground specimens clearly, the corrosion current density (Icorr) and corrosion potential (Ecorr) are calculated from the potentiodynamic polarization curves [27], as presented in Table 2. The corresponding errors are also listed and show good repeatability.
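For illustration, the Tafel extrapolation used to obtain Ecorr and Icorr from a polarization curve can be sketched as follows (Python/numpy). The ±(50-250) mV fitting windows and the choice of linear branches are assumptions made for the example, not the fitting settings used in this work.

```python
import numpy as np

def tafel_extrapolation(E, i, E_corr_guess, window=(0.05, 0.25)):
    """Estimate Ecorr and Icorr by intersecting the linear Tafel branches.

    E : potentials (V), i : measured current densities (A/cm^2),
    window : distance range from the estimated Ecorr (V) used for the fits.
    """
    logi = np.log10(np.abs(i) + 1e-12)

    # anodic branch above Ecorr, cathodic branch symmetrically below
    an = (E > E_corr_guess + window[0]) & (E < E_corr_guess + window[1])
    ca = (E < E_corr_guess - window[0]) & (E > E_corr_guess - window[1])

    # straight lines log|i| = a*E + b fitted on each branch
    a_an, b_an = np.polyfit(E[an], logi[an], 1)
    a_ca, b_ca = np.polyfit(E[ca], logi[ca], 1)

    # intersection of the two Tafel lines gives (Ecorr, log10 Icorr)
    E_corr = (b_ca - b_an) / (a_an - a_ca)
    i_corr = 10 ** (a_an * E_corr + b_an)
    return E_corr, i_corr
```

In practice, the fitting windows are chosen on the visibly linear parts of the log|i| versus E curve, away from the passivation region discussed above.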
It can be found that the corrosion potential (Ecorr) shifts in the positive direction and the corrosion current density (Icorr) decreases with the decrease of the particle size. It is well known that the more positive the corrosion potential, the higher the corrosion resistance [28]. In addition, the corrosion property is also influenced by the corrosion rate, which can be represented by the corrosion current density (Icorr) according to Faraday's law [29]. The specimen ground with a particle size of 120 M shows the highest corrosion potential and the lowest corrosion current density, indicating the best corrosion property. From the analysis above, it can be concluded that the corrosion property of the specimen surface increases remarkably with the decrease of particle size.

Table 2. Tafel polarization parameters of the ground specimens with different abrasive particle sizes.

Particle size (M)   Ecorr (mV vs. SCE)   Icorr (µA/cm²)
36                  −800 (± 13)          35.74 (± 2.5)
80                  −680 (± 9)           10.64 (± 1.2)
120                 −627 (± 11)          5.28 (± 0.6)

In order to further understand the electrochemical corrosion behavior of Inconel 718 treated by robotic belt grinding, EIS testing was carried out. Typical Nyquist plots of the specimens are shown in Figure 3. There is an obvious capacitance loop, because the corrosion process is dominated by the charge transfer step. The capacitance loop is not a standard semicircle, due to the heterogeneous corrosion surface [30]. The diameter of the capacitance loop increases with the decrease of abrasive particle size.
The value of the diameter represents the corrosion resistance of the test specimen [31]. Samples ground with larger mesh numbers (i.e., smaller abrasive particles) have better corrosion performance [32]. It can be concluded that a smaller abrasive particle size is beneficial to the corrosion resistance of the specimen surface. The results of EIS testing are thus consistent with the potentiodynamic polarization tests.

To further explain the corrosion behavior, an equivalent circuit diagram was used, as shown in the insert of Figure 3. Rs represents the solution resistance and Rct represents the charge transfer resistance at the interface. Moreover, pure capacitors were substituted by constant phase elements (CPE), and the impedance of the CPE is defined by the equation Z_CPE = [Q(jω)^n]^(-1), where Q is the frequency-independent constant, n is the exponential coefficient with a value between 0 and 1, j is the imaginary unit and ω is the angular frequency [33]. The fitted resistance values Rct are listed in Figure 4. It can be found that Rct increases as the abrasive particle size decreases, indicating that the corrosion resistance increases with the decrease of abrasive particle size in NaCl solution.
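To connect the equivalent circuit above to the measured Nyquist plots, the sketch below (Python/numpy) evaluates the impedance of the Rs + (CPE parallel with Rct) circuit over a frequency sweep; the parameter values are illustrative placeholders, not the fitted values of this study.

```python
import numpy as np

def circuit_impedance(freq_hz, Rs, Rct, Q, n):
    """Impedance of Rs in series with (CPE || Rct), the equivalent circuit above.

    Z_CPE = 1 / (Q * (j*omega)**n); Rs and Rct in ohm*cm^2, Q in S*s^n/cm^2.
    """
    omega = 2.0 * np.pi * freq_hz
    z_cpe = 1.0 / (Q * (1j * omega) ** n)
    z_parallel = 1.0 / (1.0 / Rct + 1.0 / z_cpe)
    return Rs + z_parallel

# Illustrative sweep from 100 kHz down to 10 mHz (the EIS measurement range quoted above)
f = np.logspace(5, -2, 200)
Z = circuit_impedance(f, Rs=20.0, Rct=5.0e4, Q=2.0e-5, n=0.9)   # placeholder values

# Nyquist convention: plot Re(Z) against -Im(Z). The low-frequency limit approaches
# Rs + Rct, so a larger loop diameter corresponds to a larger charge-transfer resistance.
print(Z.real[:3], (-Z.imag)[:3])
```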
Corrosion Morphology Analysis

Ideally, electrochemical corrosion of metals should be uniform and homogeneous. Nevertheless, Inconel 718 contains many other elements added to improve its mechanical properties, resulting in non-uniformity and inhomogeneity. To observe the corrosion morphology, the corroded surfaces of the ground specimens were investigated by SEM equipped with EDS to determine the change of alloying elements after the measurement of the potentiodynamic polarization curves. Figure 5 shows SEM images of the corroded surface under different abrasive particle sizes. There are many corrosion pits and corrosion products with a size of about 50 µm, which indicates serious corrosion of the specimen surfaces. To compare the degree of corrosion clearly, the average number of corrosion pits per unit area was counted over twenty different regions. The results are presented in Figure 6. It can be seen that the number of corrosion pits shows an obvious decreasing trend as the abrasive particle size decreases. It can be concluded that the reduction of the abrasive particle size facilitates the improvement of corrosion performance.

To further investigate the corrosion behavior of Inconel 718 under the robotic belt grinding system, the change of alloying elements on the corroded surface was determined by EDS. Figure 7 shows the EDS results for different regions (uncorroded region, corrosion products, and corrosion pits) on the corroded surface. Compared with the uncorroded region and the corrosion pits, the content of Nb and O is the highest in the corrosion products, which is similar to previous results [6]. There might exist Nb-rich regions in the corrosion products, the boundaries of which were eroded first. The content of Nb and O in the corrosion pits decreases with the shedding of the corrosion products.
Mass Loss Analysis

Figure 8 shows the average mass loss of the specimens ground by different abrasive particles after different immersion times. The average mass loss of the specimens increases with the extension of immersion time. It is also found that the average mass loss decreases with the decrease of the abrasive particle size. The results indicate that the corrosion resistance of the ground surface is improved with the decrease of abrasive particle size.
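Since Figure 8 reports average mass loss, converting it into an average corrosion rate follows the usual immersion-test relation. The sketch below applies that relation with made-up mass-loss numbers and an assumed specimen area and density, purely to show the arithmetic; none of the inputs are values read from the figure.

```python
# Convert immersion-test mass loss into an average corrosion rate using the
# standard relation CR = K * W / (A * T * D). All numerical inputs below are
# assumed placeholders, not values taken from Figure 8.

K = 8.76e4        # constant giving the rate in mm/year
DENSITY = 8.19    # g/cm^3, assumed density of Inconel 718
AREA = 32.0       # cm^2, assumed exposed specimen area

# (immersion time in hours, mass loss in grams) -- placeholder data
measurements = [(24, 0.010), (72, 0.026), (168, 0.055)]

for hours, mass_loss_g in measurements:
    rate = K * mass_loss_g / (AREA * hours * DENSITY)
    print(f"{hours:4d} h immersion: ~{rate:.3f} mm/year average corrosion rate")
```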
Surface Roughness Influence

In order to further elucidate the factors influencing the corrosion resistance of nickel-based superalloys under robotic belt grinding, the surface roughness was tested. The average roughness (Ra) was measured with a 2D stylus profiler over a length of 6 mm. The surface roughness of the treated specimens as a function of abrasive particle size is presented in Figure 9. There is an obvious monotonous decrease of roughness with the decrease of abrasive particle size. This is due to the different surface ablation of the material during the grinding process: the larger the abrasive particle size, the greater the surface ablation and the higher the surface roughness. Corrosion is accelerated by the presence of large peaks and valleys, which yield a larger ground surface area [34]. A larger surface area derived from higher roughness facilitates the diffusion of the electrolyte and promotes the corrosion rate [17], which is consistent with the improvement of corrosion performance as the abrasive particle size decreases. Moreover, a rough surface is harmful to the formation of the passive film [35], which is a protective film for the parts.

Figure 9. Surface roughness of treated specimens as a function of different abrasive particle sizes.

Residual Stress Analysis

Residual stress is also a significant factor affecting the corrosion property in the grinding process [36]. During processing, metals often develop part deformation and non-uniform stress.
In the grinding process, residual stress forms on the specimen surface mainly due to the plastic deformation and temperature change during grinding [37]. Normally, corrosion occurs easily in states of high tensile stress and high energy [38], and considerable residual stress occurs in ground metallic structures [39]. Based on the importance of stress, the surface residual stress was tested in two directions (the grinding direction is defined as X and the material flowing direction as Y) under different abrasive particle sizes, as depicted in Figure 10. The residual stresses in both directions exhibit a similar trend. As the abrasive particle size decreases, the residual stress in the X direction changes from tensile to compressive. The residual stress in the Y direction is compressive, and the compressive stress gradually increases with the decrease of abrasive particle size. The effect of residual stress on corrosion performance is similar to that of stress corrosion, in which the metal cracks in a brittle manner below its yield strength under the combined action of a specific corrosive medium and tensile stress (including applied and residual stress) [40]. The open circuit potential is always lower in the tensile stress concentration zone, which acts as the anode of a corrosion cell. In contrast, compressive stress is beneficial to the formation of the passive film and enhances the corrosion resistance of the specimen surface [41,42]. The specimen surface ground with an abrasive particle size of 120 M, having the largest compressive residual stress in both directions, shows the best corrosion performance. This further consolidates our research result.

Electrochemical Dissolution Behavior

Based on the results and discussion above, a plausible electrochemical dissolution behavior for Inconel 718 ground by the robotic belt grinding system can be schematically illustrated, as shown in Figure 11. The oxide film has a porous structure which provides many natural active sites for corrosion [43], and the dissolution process predominantly begins from the defective pores of the oxide film, as shown in Figure 11a. It is reasonable that the open circuit potential of Nb is more negative compared with the other elements of Inconel 718. Nb-rich regions occur at the defective pores of the oxide film during the corrosion process, and some corrosion products are generated in these regions. As the corrosion process continues, dissolution occurs on the boundary between the Nb-rich region and the nickel matrix, consistent with the steep increase of current in Figure 2. Grooves are generated at the boundary, which is in a high energy state and easily corroded. This is similar to the traditional pitting process [44]. With further extension of the corrosion time, the grooves deepen due to the potential difference and the high stress concentration, as presented in Figure 11b. As the corrosion process evolves, more products accumulate on the specimen surface and then shed into the electrolyte, leaving a large number of corrosion pits. During the shedding process, undissolved components are peeled off together with the removed corrosion products [6]. Finally, the corrosion products shed into the electrolyte, leaving the corrosion pits, as displayed in Figure 11c. In this experiment, the specimen surface ground with an abrasive particle size of 120 M has the smallest roughness and the largest compressive residual stress.
The smallest roughness generates a small corrosion surface area, and the largest compressive residual stress protects the corrosion products from shedding into the electrolyte. Moreover, a small roughness can reduce the stress concentration. The combination of these factors ultimately leads to the best corrosion performance. The results further validate the proposed electrochemical dissolution behavior of Inconel 718.

Conclusions

This paper has focused on the electrochemical corrosion behavior of Inconel 718 after robotic belt grinding. From the analysis above, the following conclusions can be made:

1. The corrosion resistance of the specimen surface improves remarkably with the decrease of surface roughness and residual stress, which result from the abrasive particle size.

2. Corrosion of Inconel 718 ground by the robotic belt grinding system proceeds from oxide film defect occurrence and Nb-rich region formation to corrosion product generation. The corrosion products then shed into the electrolyte due to the dissolution of the boundary, leaving a large number of corrosion pits.

3. A small roughness reduces the corrosion surface area and the oxide film defects, increasing the resistance to corrosion product formation. In addition, the compressive residual stress can impede the exfoliation of corrosion products. This reasonably explains why a small abrasive particle size improves the corrosion performance: it produces a small roughness and a compressive residual stress.

This work provides significant insights into the corrosion behavior of Inconel 718 treated by the robotic belt grinding system and into the influencing factors, including surface roughness and residual stress. Further research in precision robotic grinding of nickel-based superalloys will revolve around modelling and optimal control of the process parameters to achieve the desired properties.

Author Contributions: J.W. and X.C. proposed the concept of the research. J.W. and X.Z. designed and performed the experiments. X.R. and X.S. contributed the analysis tools. J.W. and J.X. wrote the paper.

Funding: This research received no external funding.
8,619.8
2018-12-01T00:00:00.000
[ "Materials Science" ]
Selective Electroless Copper Plating of Ink-Jet Printed Textiles Using a Copper-Silver Nanoparticle Catalyst The electroless copper plating of textiles, which have been previously printed with a catalyst, is a promising method to selectively metallise them to produce high-reliability e-textiles, sensors and wearable electronics with wide-ranging applications in high-value sectors such as healthcare, sport, and the military. In this study, polyester textiles were ink-jet printed using differing numbers of printing cycles and printing directions with a functionalised copper–silver nanoparticle catalyst, followed by electroless copper plating. The catalyst was characterised using Transmission Electron Microscopy (TEM) and Ultraviolet/Visible (UV/Vis) spectroscopy. The electroless copper coatings were characterised by copper mass gain, visual appearance and electrical resistance in addition to their morphology and the plating coverage of the fibres using Scanning Electron Microscopy (SEM). Stiffness, laundering durability and colour fastness of the textiles were also analysed using a stiffness tester and Launder Ometer, respectively. The results indicated that in order to provide a metallised pattern with the desired conductivity, stiffness and laundering durability for e-textiles, the printing design, printing direction and the number of printing cycles of the catalyst should be carefully optimised considering the textile’s structure. Achieving a highly conductive complete copper coating, together with an almost identical and sufficiently low stiffness on both sides of the textile can be considered as useful indicators to judge the suitability of the process. Introduction In the last few decades, interest in electronic textiles (e-textiles) and wearable electronics has significantly increased, which has stimulated development in these areas. Electronic components such as sensors, batteries or lights can be embedded into fabrics to add functionalities or decorative effects. This added value resulting from the integration of fabrics with electronics has been (and continues to be) beneficial to a wide range of applications, including, but not limited to, medical and healthcare, fashion, military, and workwear [1]. Textile-based memory devices [2,3], displays [4,5], solar cells [6,7] and energy storage devices [8,9] are all recent advancements in this field. To be able to interconnect multiple components of an e-textile, certain areas of the textile need to be electrically conductive. Generally, there are two main approaches to the creation of conductive textiles [10]. The first one is to integrate the conductive elements such as conductive filaments [1] or metal wires [11] into textiles by weaving, knitting, embroidering, etc. Alternatively, in this approach, the original yarn fibres can be covered by a metal [12] or conductive polymer [13] and then incorporated into the fabric. Although high conductivities can be achieved using these methods, the rigidity and inflexibility of the integrated conductive elements may have a negative impact on the wear comfort of the fabric [10]. Moreover, the produced e-textiles can easily snap or be damaged due to the different mechanical properties of those filaments and wires from the original fabric [11]. The second approach to make textiles conductive is coating of the textiles with a metal or conductive polymer after they are manufactured. 
Conductive coatings can be deposited using various techniques such as screen printing [14], ink-jet printing [15][16][17], sputtering [14] and electroless plating [18][19][20]. A common advantage of these techniques is that they can be used on finished textiles and are cheaper to process. Furthermore, due to the relatively thin layers of deposited coatings, the flexibility of the textiles is maintained. However, potential drawbacks of this approach are the adhesion of the coatings and the possibility for them to oxidise [21]. Among the methods to coat textiles with a conductive layer, electroless plating is a promising one due to its industrial feasibility, low cost, deposit uniformity and high conductivity [21]. In this technique, deposition of metal is achieved by immersing the substrate into a water-based solution. The main reagents in an electroless plating electrolyte are the salt of the metal, which is being deposited, a complexing agent which prevents spontaneous metal deposition and a reducing agent. The solution also contains a stabiliser and is formulated in a way that deposition should only occur on the activated substrate's surface. Furthermore, a notable advantage of this process is its capability to metallise non-conductive materials, such as textiles. Electroless copper plating has been widely used in Printed Circuit Board (PCB) manufacturing to deposit a copper layer onto dielectric materials (such as epoxy-based polymers) [22]. In the case of non-conductive materials, for the electroless deposition reaction to initiate, the substrate needs to be activated [23]. The most commonly used activator is palladium, the role of which is to catalyse oxidation of the reducing agent. Once oxidised, the reducing agent donates electrons to the metal ion and the reduced metal deposits on the surface of the catalyst. Subsequently, the deposited metal acts as a catalyst and continues to catalyse the oxidation reaction, and therefore the deposit will continue to grow. Due to the increasing cost of palladium and its low abundance, researchers are now actively searching for alternative catalysts. For the electroless copper plating process, silver [24][25][26][27] and copper [20,28] nanoparticles are well studied as potential alternative catalysts. Both metals show catalytic activity towards the commonly used reducing agent-formaldehyde [29][30][31][32]. However, silver is still an expensive option, while copper is easily oxidised in air, hindering its catalytic activity. Previous research suggested that copper nanoparticles (Cu NPs) can be protected from oxidation by coating with silver metal. Moreover, some studies [33][34][35] showed that copper-silver composite nanoparticles have higher electrocatalytic activity compared to pure Cu or Ag NPs and could therefore be a promising catalyst for electroless plating. Synthesis of copper-silver core-shell nanoparticles (Cu core -Ag shell NPs) can be performed in two steps: the first step is the synthesis of core Cu NPs, and the second step is the reduction of silver in the presence of Cu NPs. The reduction of silver from its salt is achieved by a displacement reaction where Cu 0 is oxidised and donates an electron to Ag + , reducing it from its salt. As a result, the surface atoms of Cu NPs are replaced by silver [36][37][38]. To produce conductive tracks for e-textiles using electroless copper plating, the copper coating must be deposited on a certain area of the textile and not cover the whole surface. 
To achieve this, the catalyst should be deposited only on the required areas of the substrate. This selective deposition of the catalyst can be obtained using ink-jet printing [39,40], screen printing [41,42], microcontact printing [43,44] or a gradient magnetic field [45]. Among the mentioned methods, ink-jet printing is a non-contact technique of catalyst deposition onto a substrate. The catalyst dispersion is the ink, which is stored in a cartridge. Once the printing process begins, the ink is ejected from the printer head and drops onto the substrate surface under the force of gravity. The ink is deposited in a specific pattern, which is defined digitally by a CAD file. The main advantages of this approach can be summarized as (i) less contamination due to the lack of contact between the substrate and the equipment, (ii) lower consumption of ink, and (iii) the fact that the process does not require physical shape-transfer panels [46]. In the literature, there are studies focusing on the formulation of Cu core-Ag shell NP-based inks for ink-jet printing [38,47,48]. In these inks, while Ag protects Cu from oxidation, using a Cu core significantly reduces the cost of the inks [49]. In our previous work [20], we investigated the efficacy of different functionalised Cu NP catalysts to initiate the electroless copper plating on polyester textiles. The conclusion from this work was that among different functionalising molecules, polyacrylic acid (PAA) was the most efficient one, showing efficacy equivalent to or better than a palladium catalyst. In this study, in order to further increase the stability of Cu NP-PAA particles to oxidation, the catalyst was modified to contain Cu core-Ag shell NPs. The developed catalyst was analysed using Ultraviolet/Visible (UV/Vis) spectroscopy and Transmission Electron Microscopy (TEM). The catalyst was then deposited selectively by ink-jet printing onto a polyester textile under different printing conditions. These variable conditions included the number of printing cycles (to change the catalyst NP loading) and the printing direction. The effect of the catalyst printing conditions on the subsequent electroless copper coatings was investigated regarding their visual appearance, copper mass gain, morphology, coverage and electrical resistance, besides the stiffness, laundering durability and colour fastness of the respective metallised textiles. To realise this aim, Scanning Electron Microscopy (SEM), a multimeter, a stiffness tester and a Launder Ometer were used.

Equipment

For printing the catalyst, a small format thermal ink-jet BREVA printer with a printing stage of 15 cm × 15 cm was used. Standard empty HP45 cartridges were filled with the Cu-Ag NP catalyst and loaded into the printer. Ink cartridge settings were as follows: voltage of 13.3 V and pulse width of 2.7 µs.

Cu-Ag NP colloids were synthesised in the following steps: (i) synthesis of Cu NPs; (ii) modification of the Cu NPs with polyacrylic acid (PAA); and (iii) formation of an Ag shell on the stabilised Cu NPs. The first two steps were carried out following the same procedure described in our previous work [20] to prepare the Cu NP-PAA colloid. Step (i) included using an aqueous solution of copper-ammonia complex and copper (II) hydroxide as the copper species in addition to the aqueous hydrazine solution. In step (ii), PAA was added to the Cu NPs to functionalise them, and an equal amount of ethanol was added to the reaction mixture afterwards.
After stirring the mixture for 1 h, the particles were separated from the solution, washed with water and precipitated by ethanol. The Cu NPs were then re-dispersed in water using an ultrasonic bath for 1 h. The concentration ratio of the stabilised Cu NPs and PAA in the final colloid was estimated gravimetrically as Cu NP:polyacrylate ~1:1.3-1:1.5. To prepare the Ag shell in step (iii), a freshly prepared solution of 0.1 M AgNO3 in 2.5 M NH4OH was slowly injected drop by drop into the stirred Cu NP-PAA colloid. Following the addition of the first drop of the Ag solution, the Ag complex was directly reduced by the Cu atoms and Ag was deposited on the surface of the Cu NPs. This led to a colour change from wine red (typical of Cu NPs) to brown red and deep black (characteristic of Cu-Ag NPs). Cu-Ag NP colloids were initially synthesised at different Ag concentrations of 0.25-5 at.%, and following their analysis, the one with 5 at.% concentration was selected as the catalyst for this study.

Printing of Cu-Ag NP Catalyst and Electroless Plating

The initial printing image designed for the characterisation of the deposited electroless copper coatings is depicted in Figure 1. As observed, the total image size was 15 cm × 15 cm, and it included two identical rectangle patterns of 8 cm × 4 cm. This was selected in order to be able to centre the image (and therefore the samples) under the printer and simultaneously print two textile pieces in different directions. To achieve this aim, for each set of printing cycles, the polyester textile was cut into two pieces of 8 cm × 4 cm in the warp and weft ways. In the warp way, one piece was cut such that the length of the sample was parallel to the selvage (so printed in the warp direction). In the weft way, one piece was cut so that the length of the sample was perpendicular to the selvage (so printed in the weft direction) (Figure 1). Textile samples were immersed in a Conditioner solution at 50 °C for 5 min. The solution was made up using Circuposit Conditioner 3323A according to the supplier's guidelines. They were then rinsed under running water for 5 min followed by drying in the oven at 45 °C, typically overnight. Afterwards, samples were placed on the printer stage over the pre-marked 8 cm × 4 cm rectangles. Printing was completed using the as-synthesised Cu-Ag NP catalyst at different numbers of cycles (2, 3, 4 and 5), and each set was repeated 4 times. As a result, for each set of printing cycles, printing happened in the warp direction (for the sample placed on the top) and in the weft direction (for the sample placed on the bottom), simultaneously. Subsequently, each pair of simultaneously printed samples was placed into an electroless copper electrolyte at 46 °C for 30 min. The electrolyte was made up using Circuposit 3350-1 according to the supplier's guideline. The textile samples were then rinsed under running water for 5 min followed by drying in the oven overnight. In order to find the appropriate Ag concentration to protect the Cu NPs, optical spectra were collected using a Hitachi U-2001 UV/Vis spectrometer from the diluted Cu-Ag NP colloids at different Ag concentrations (the optical length was 10 mm).
To characterise the structure, size and distribution of the NPs in the Cu-Ag NP catalyst, a TEM technique was employed. For TEM analysis, a few drops of the dispersed Cu-Ag NP catalyst were pipetted onto a holey carbon film on a 200-mesh copper grid placed on top of a filter paper. This was followed by air-drying overnight, and then the grid was placed in the FEI Talos F200X TEM instrument (Thermo Scientific, Waltham, MA, USA) equipped with a super X EDS. The operating voltage was 200 kV and Velox software (Thermo Scientific, Waltham, MA, USA) was used to obtain the data.

Characterisation of Electroless Copper Coatings

To measure the copper mass gain of the textiles, the mass of each piece of textile was measured once after cutting (into 8 cm × 4 cm) and then after electroless copper plating and drying in the oven overnight. The morphology of the copper coatings as well as the degree of the fibres' coverage were characterised using a ZEISS GEMINI 500 VP SEM (Jena, Germany). To gain better insight into the conductivity of the metallised patterns from an application point of view, and in order to compare their performance in a more practical way, the electrical resistance of the patterns was measured using a multimeter. The resistance was measured between the furthest corners of each dumbbell shape (red crosses in Figure 1), and the average resistance of all three shapes was reported as the resistance of each pattern.

Textile Characterisations

Stiffness of the metallised textiles was tested using a stiffness tester according to the ASTM-D1388-96 standard [50] and measured using the cantilever principle. The sample size required for the stiffness tests was 15 × 2.5 cm. Therefore, the print image was designed by replacing the 8 cm × 4 cm rectangle areas in Figure 1 with fully black 15 cm × 2.5 cm horizontal strips. Two pre-treated textile samples were printed in the warp and weft directions simultaneously using the Cu-Ag NP catalyst at different numbers of printing cycles (2, 3, 4 and 5) followed by electroless copper plating for 30 min. The sample printed in the warp direction was used to measure the warp way stiffness and the one printed in the weft direction was used to measure the weft way stiffness. A Shirley Stiffness Tester apparatus supplied by SDL Atlas Ltd. (Rock Hill, SC, USA) was used to measure the bending lengths of the metallised textiles. For each set of printing cycles, two readings on each side of the textile printed in the warp direction (face and back) and two readings on each side of the textile printed in the weft direction were taken as the warp way and weft way readings. The bending lengths and flexural rigidity of the samples were calculated using Equations (1)-(3) [51], and the results were compared with those for an untreated control sample:

Bending length c = L/2 (cm) (1)

where L is the mean overhanging length of the specimen in cm;

G = W × c^3 (mg·cm) (2)

where G is the flexural rigidity and W is the fabric mass per unit area in mg/cm²;

G0 = (G_warp × G_weft)^(1/2) (mg·cm) (3)

where G0 is the overall flexural rigidity, G_warp is the flexural rigidity in the warp way and G_weft is the flexural rigidity in the weft way.
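As a worked illustration of Equations (1)-(3), the sketch below computes bending length, flexural rigidity and overall flexural rigidity from a cantilever test. It assumes the standard cantilever-test relations G = W·c³ and G0 = √(G_warp·G_weft); the overhang lengths and fabric mass per unit area are placeholder numbers, not measured values.

```python
from math import sqrt

# Worked example of the cantilever stiffness calculation (Equations (1)-(3)).
# Overhang lengths and fabric mass per unit area are placeholders.

FABRIC_MASS = 15.0  # mg/cm^2, assumed mass per unit area of the plated textile

def bending_length(overhang_cm: float) -> float:
    """Equation (1): bending length c = L / 2."""
    return overhang_cm / 2.0

def flexural_rigidity(overhang_cm: float, mass_mg_cm2: float) -> float:
    """Equation (2): G = W * c^3, in mg*cm."""
    return mass_mg_cm2 * bending_length(overhang_cm) ** 3

# Mean overhanging lengths (cm) in the warp and weft ways -- placeholder data
L_warp, L_weft = 2.8, 2.2

G_warp = flexural_rigidity(L_warp, FABRIC_MASS)
G_weft = flexural_rigidity(L_weft, FABRIC_MASS)
G_overall = sqrt(G_warp * G_weft)   # Equation (3): overall flexural rigidity

print(f"c_warp = {bending_length(L_warp):.2f} cm, G_warp = {G_warp:.1f} mg*cm")
print(f"c_weft = {bending_length(L_weft):.2f} cm, G_weft = {G_weft:.1f} mg*cm")
print(f"Overall flexural rigidity G0 = {G_overall:.1f} mg*cm")
```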
The metallised textiles were washed according to the ISO 105-C06 (A1S):2010 standard [52] in a Launder Ometer from SDL Atlas Ltd. (Rock Hill, SC, USA). The washing tests were performed at 40 °C for 30 min using a standard detergent (ECE soap for ISO test) and a material (textile) weight to liquor ratio of 1:50. For each set of printing cycles, three identical metallised samples printed in the warp direction and three identical metallised samples printed in the weft direction were tested in up to five washing cycles. Samples were rinsed under running water after each washing cycle and then dried using a hot air dryer, and their resistance was measured using a multimeter. During the washing tests, the standard multifibre textiles were sewn to the samples as adjacent textiles, and the colour fastness was visually assessed using Grey Scales.

Characterisation of Cu-Ag NP Catalyst

During the synthesis of Cu-Ag NP colloids using the method applied in this study, Ag was deposited on the Cu NPs as a result of a galvanic replacement happening spontaneously after Ag ions came into contact with the Cu NPs in the solution. Subsequently, while Cu atoms were oxidised into ions and dissolved in the solution, Ag ions were reduced into atoms and deposited on the surface of the Cu NPs. The sufficiently slow injection rate of the Ag precursor into the Cu NP colloid produced a low Ag ion concentration around the Cu NPs to ensure collision and nucleation of the Ag atoms on the Cu NPs. Therefore, our method of synthesis was expected to produce Cu core-Ag shell NPs as the main product.

UV/Vis Spectroscopy Characterization

In order to determine the appropriate Ag concentration to protect the Cu NPs against oxidation and to understand the architecture of the synthesised Cu-Ag NPs, UV/Vis spectroscopy was carried out on the diluted Cu-Ag NP colloids. Figure 2a shows the optical spectra of the synthesised Cu-Ag NP colloids at different Ag concentrations. The surface plasmon spectrum of the pure Cu NPs (black line) was characterised by the 565 nm copper plasmon peak only. After addition of Ag to the Cu NP colloid (red line), the absorption corresponding to silver appeared (at 443 nm). The observed absorption peak showed a red shift compared to the pure Ag NP peak (at ~405 nm [53]). As the Ag concentration in the colloid increased, the absorption increased, and the absorption peak shifted further to the red region. According to Byeon et al. [53], the red shift and broadening of the surface plasmon resonance (SPR) band represent the presence of Cu-Ag bimetallic NPs as the dominant products. On the other hand, at higher concentrations of Ag, the 565 nm copper plasmon peak disappeared. UV/Vis spectra of binary metal or alloy NPs normally show two distinct surface plasmons. However, in the case of core-shell NPs, due to the formation of the shell metal on the surface of the core metal, only a single peak at a wavelength close to the SPR peak of the shell metal might be observed. Therefore, it is likely that the synthesised Cu-Ag NP colloids with an adequate Ag concentration mostly contained Cu core-Ag shell NPs. Similar results have been reported in other studies [36,37,53-55]. Furthermore, the presence of symmetric peaks and the sufficiently narrow surface plasmons indicated that most particles had a spherical shape [55].

One of the aims of this work was the formation of the Ag shell on the Cu NPs to protect them from air oxidation. Therefore, the oxidation behaviour of the synthesised Cu-Ag NPs was investigated for up to 96 h after their synthesis and compared with the behaviour of pure Cu NPs. Figure 2b shows the optical spectra of the synthesised pure Cu NPs and Cu-Ag NPs (with ~5 at.% Ag) over different time periods. It was observed that pure Cu NPs were oxidised and dissolved within 48 h after their synthesis (red line). However, the Cu-Ag NPs were more stable since, after 96 h, the surface plasmon peak corresponding to the Ag shell was still present in the optical spectrum (light blue line). The observed blue shift of the silver plasmon peak with time and the appearance of a shoulder in the copper plasmon region (Figure 2b) could be related to the segregation of Cu and Ag in the particles and oxidation of the Cu core [56]. Consequently, the Cu-Ag NP colloid with ~5 at.% Ag was used for the later characterisations and then as a catalyst for the electroless copper plating of textiles.
TEM Characterization

The synthesised Cu-Ag NP catalyst was analysed using TEM to determine its structure in addition to the size and distribution of the Cu and Ag NPs (Figure 3). As observed in Figure 3a,b, in the Cu-Ag NP colloid, particles were present in different sizes, including small NPs of 10-20 nm and larger NPs or particle aggregates of 60-70 nm. The mapping image of Cu distribution (Figure 3c) shows that large particles were clearly visible while smaller particles had weak contrast due to either lower mass and/or a partially shielded signal by Ag. However, the Ag distribution mapping image (Figure 3d) was more complicated. The size and shape of the Ag were generally similar to those of the Cu NPs, showing high contrast on smaller particles whilst it was hardly detectable on large ones. This observation can be related to the higher surface to volume ratio of the smaller Cu NPs, which facilitates the collision of Ag ions and the nucleation of the Ag atoms to form the shell. This leads to a thinner Ag shell on large Cu NPs, which might not be easily detectable in TEM, although UV/Vis spectroscopy analysis confirmed its presence. Figure 3e shows a Cu NP with an approximate size of 70 nm having an Ag shell with an inhomogeneous thickness of ~1-7 nm. Thus, even at an equivalent thickness of the Ag shell, the Ag/Cu ratio drastically decreased with increasing Cu NP size. Figure 3f clearly shows the higher contrast of Ag on the smaller particles and the higher contrast of Cu on the larger particles.
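A quick geometric check, assuming ideal spherical core-shell particles, illustrates why the Ag/Cu ratio falls so sharply with core size at a fixed shell thickness; the radii and shell thickness below are representative values only, not measurements from Figure 3.

```python
# Ag/Cu volume ratio for an ideal spherical core-shell particle:
# shell volume / core volume = ((r + t)^3 - r^3) / r^3.
# Radii and shell thickness are representative values, not measured ones.

def ag_cu_volume_ratio(core_radius_nm: float, shell_nm: float) -> float:
    r, t = core_radius_nm, shell_nm
    return ((r + t) ** 3 - r ** 3) / r ** 3

shell = 2.0  # nm, assumed constant Ag shell thickness
for core_radius in (7.5, 35.0):   # roughly 15 nm and 70 nm particles
    ratio = ag_cu_volume_ratio(core_radius, shell)
    print(f"core radius {core_radius:5.1f} nm: Ag/Cu volume ratio ~ {ratio:.2f}")
```

With these assumed numbers, the small particle carries roughly five times more Ag per unit of Cu than the large one, consistent with the weaker Ag contrast seen on the large particles in Figure 3d.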
The location of the plasmon absorption bands of metal NPs depends on their size and shape, whereas for core-shell NPs, it also depends on the thickness of the shell [57]. The observed Ag plasmon peaks lying in the range of 440-470 nm would correspond to pure Ag NPs with a size of 60-80 nm according to [58], which are much larger than the ones in Figure 3d. Therefore, the observed Ag plasmon peaks were probably related to the core-shell structure rather than being due to the presence of pure Ag NPs.

Visual Inspection and Copper Mass Gain Measurements

Copper mass gain and appearance of the textiles after electroless plating are important indicators to compare the ability of catalysts (here, different printing conditions of a catalyst) to facilitate electroless plating. In the case of our study, the aim of these analyses was to determine the effect of catalyst NP loading (different numbers of printing cycles) and printing direction on the deposited electroless copper coatings. Figure 4 shows the electroless copper mass gain of the samples, which were ink-jet printed with the Cu-Ag NP catalyst at different numbers of printing cycles and directions. As was expected, the copper mass gain increased as the number of catalyst printing cycles increased, although at higher printing cycles the rate began to plateau. Each printing cycle introduced additional catalytic Cu and Ag NPs on the surface, which further facilitated the electroless copper plating, resulting in a higher copper mass gain. Figure 5 shows the appearance of the textiles (which had been ink-jet printed with the Cu-Ag NP catalyst at different numbers of cycles and directions) after electroless copper plating. The appearance of the samples was in agreement with the copper mass gain results, verifying the higher amount of electroless copper deposited over the patterns with increasing number of printing cycles of the catalyst.

In ink-jet printing of a catalyst on textiles, metal NP loading is one of the most important parameters influencing the properties of the final metallised patterns. In the case of underloading, as the catalyst particles do not sit close enough to each other, a continuous electroless copper coating will not grow (Figure 5a,b). In addition, underloading of the catalyst may result in its shallower absorption by the fibres. As a result, the subsequent electroless copper coating will only cover the surface rather than covering the fibres within the fabric.
At the other extreme, overloading of a catalyst can lead to low adhesion of the deposited electroless copper coating on the one hand [45] and an excessive level of ink spreading (bleeding) on the other, resulting in poor pattern resolution. Regarding the effect of printing direction, the copper mass gain of the samples printed with the catalyst at the same number of cycles in different directions was comparable after electroless plating (Figure 4). This result strongly suggests that the printing direction of the catalyst has very little effect on the electroless copper mass gain. However, a very interesting finding of the visual inspection (Figure 5) concerned the spreading behaviour of the electroless copper coatings and its dependence on the printing direction. In both printing directions, as the number of printing cycles increased, spreading became more noticeable. On the other hand, at the same number of printing cycles, spreading was generally more significant on the textile printed in the weft direction. The metallised patterns printed with the catalyst in the warp direction had mainly sharper edges, and the dimensions of the patterns were closer to the original sizes. Similar results have been observed by Hajipour et al. [59], who investigated the effect of the weave structure of polyester fabric on the quality of ink-jet printing using a water-based ink. For the patterns printed in the warp direction, ink diffused in the weft direction, and the density of the weft yarns floating over the warp yarns determined the ink spreading. On the other hand, when the patterns were printed in the weft direction, diffusion of the ink happened in the warp direction. They therefore concluded that it was the density of the warp yarns floating over the weft yarns which controlled the ink spreading. Consequently, this type of ink spreading, which is perpendicular to the printing direction, occurs in both cases. However, in the case of the polyester textile used in this study, the density of the warp yarns was more than twice the density of the weft yarns (69 vs. 31/cm). Therefore, the ink spread more intensely perpendicular to the printing direction for the patterns printed in the weft direction. For the patterns printed in the warp direction, a grey shadow of ink spreading was observed in the weft direction (black arrows in Figure 5i), compared to the electroless copper plated perpendicular spreading (in the warp direction) on patterns printed in the weft direction (orange arrows in Figure 5j). The reason was probably the lower amount of catalytic ink spreading in the weft direction for the patterns printed in the warp direction; this amount of catalyst was not adequate to initiate the electroless copper plating over the spread area.
It is worth mentioning that not all the spreading happened perpendicular to the printing direction, and some ink spread in the printing direction. This type of spreading was more obvious on patterns printed in the warp direction as a result of the higher density of the warp yarns. This resulted in copper-plated spreading in the printing direction for the patterns printed in the warp direction (orange arrows in Figure 5i), compared to the grey spreading observed in the printing direction of the samples printed in the weft direction (black arrows in Figure 5j). Since our designed patterns were longitudinal, the area where the ink spread perpendicular to the printing direction was larger than the area where the ink spreading happened in the printing direction (36 cm vs. 6 cm of edge length). Therefore, in our case, it was the perpendicular spreading which controlled the total ink spreading, resulting in a higher degree of spreading of the electroless copper coating outwards of the patterns printed in the weft direction. In summary, ink spreading can be minimised by a careful selection of the textile's structure and the printing conditions, including printing design, printing direction and the number of printing cycles. This is important particularly when printing circuitry, to avoid short circuits. As can be seen in Figure 5c, the ink spreading was very low for the textile sample printed with the catalyst at two cycles in the warp direction. According to this figure, the width of the narrowest lines was almost identical to the original size on the printing image (2 mm). Therefore, lines as thin as 2 mm can definitely be printed and plated using this technique. Widths below 2 mm might also be possible, although this was not tested in this study.

Electrical Resistance Measurements

Electrical resistance of the electroless copper coatings on the polyester textiles printed with the Cu-Ag NP catalyst at different numbers of cycles and directions was measured using a multimeter. The results are depicted in Figure 6. The conductivity of the metallised patterns depends on the thickness, degree of coverage and depth of the electroless copper coatings. As can be observed in Figure 6, in both printing directions, the resistance decreased as the number of printing cycles increased. This result was in agreement with the higher electroless copper mass gain at the increased number of printing cycles of the catalyst in both printing directions. On the other hand, at the same number of printing cycles, the resistance of the metallised pattern printed with the catalyst in the weft direction was higher than that of the pattern printed in the warp direction, even though both samples had a similar electroless copper mass gain (Figure 4). This observation was probably attributable to the higher overall degree of spreading for the metallised patterns printed with the catalyst in the weft direction (Figure 5), resulting in a lower chance of fibre coverage by the electroless copper coatings (i.e., the same amount of deposit over a larger area). This result showed the important role of catalytic ink spreading on the conductivity of the metallised patterns. According to Figure 6, the difference between the resistances of the electroless copper coatings printed with the catalyst at the same number of cycles and different directions decreased as the number of printing cycles increased.
The metallised patterns which were printed with the catalyst at five cycles had almost identical resistance when printed in the warp and weft directions. At higher numbers of printing cycles, the catalytic ink had already saturated most of the fibres. Thus, spreading did not affect the fibre coverage by the electroless copper coating, and the subsequent conductivity was therefore similar.

SEM Characterization

The metallised textiles printed with the Cu-Ag NP catalyst at different numbers of printing cycles and directions were characterised using SEM to study the coatings' morphology and the level of the fibres' coverage (Figure 7). Figure 7 shows that at the lower numbers of printing cycles, the electroless copper coatings had a nodular morphology (Figure 7a-d). However, with increasing number of printing cycles, the level of nodularity of the coatings decreased, and they started to become smoother (Figure 7g,h). This was the result of an increased number of catalytic NPs on the surface at higher numbers of printing cycles, leading to more nucleation sites. Consequently, the deposited electroless copper coatings were more continuous and had a more uniform thickness. As observed in Figure 7, it was very difficult to discern any difference between the SEM images of the electroless copper coatings printed with the catalyst at the same number of cycles and different directions.
Since the SEM micrographs were taken locally and at high magnification, it was nearly impossible to see any effect of the coatings' spreading and to distinguish the resultant difference in the level of fibre coverage by the electroless copper coatings.

Textile Characterisations

As the first step to further characterise the structure of the textile used in this paper, the untreated white polyester textile (the control sample) was examined under a magnifying counting glass (Figure 8a). The textile's design was confirmed as a 1/1 plain weave having finer warp yarns with 69 EPC and coarser weft yarns with 30 PPC. Figure 8b shows the optical microscopy image of the metallised textile indicating the warp and weft yarns. Figure 8c depicts the optical microscopy image of the border between the metallised section and the non-metallised section, which gives some indication of the resolution which can be achieved for the electroless copper plating after selective catalysation of the textiles.

Fabric Stiffness Test

Selectively metallised textiles should be multifunctional and comfortable to wear to be able to serve as wearable electronics for different purposes. Stiffness is one of the most important characteristics of textiles, influencing their flexibility, drape and handle properties. The bending resistance of a textile is an indicator of its stiffness. To determine this, the bending lengths of the metallised textiles printed with the Cu-Ag NP catalyst at different numbers of printing cycles and directions were measured using a stiffness tester. The results were compared with those for a control sample and are depicted in Figure 9. As can be seen, the control sample had the same bending lengths on the face and back side of the fabric and a lower bending length for the weft way sample compared to the warp way one (Figure 9). As mentioned before, there was a difference between the warp way (finer yarns with 69 EPC) and the weft way (coarser yarns with 30 PPC) of the textile, causing a difference in cover factor. Cover factor is one of the main parameters affecting the bending characteristics of textiles; the stiffness of a textile is mainly affected by the warp, weft and overall fabric cover factor. In Figure 8a,b, the bigger gaps between the weft yarns compared to the smaller gaps between the warp yarns can clearly be seen, showing that the weft cover factor was less than the warp cover factor. This was the reason for the lower bending lengths on the face and back side of all the weft way metallised samples compared to the warp way ones, independent of the number of printing cycles of the catalyst.
This was the reason for the lower bending lengths on the face and back side of all the weft way metallised samples compared to the warp way ones, independent of the number of printing cycles of the catalyst. Figure 9 also shows that the bending lengths of the metallised samples printed with the catalyst in the warp direction (warp way bending lengths) were higher at 2-3 printing cycles, with a big difference between the face and back sides. However, the bending lengths dropped at 4-5 printing cycles and became very similar on the face and back sides. It seems that at 2-3 printing cycles, most of the printed ink sat on the face side of the textiles (due to the higher cover factor), resulting in a higher electroless copper deposit and higher bending lengths on the face side and a significant difference between the face and back sides. However, at the higher printing cycles (4-5), the larger amount of printed ink was enough to allow deeper absorption from the face side to the back side of the textiles. This led to a higher saturation of the internal fibres by the catalytic ink, followed by electroless copper plating on the back side of the textiles, leading to similar bending lengths on both sides. Although polyester is hydrophobic in nature, the catalytic ink could penetrate to the back side of the fabric by capillary forces. Furthermore, for most numbers of printing cycles, there was no difference in the bending lengths on the face and back side of the metallised samples printed with the catalyst in the weft direction (weft way bending lengths). This observation may be due to the lower weft cover factor leading to an easier migration of the ink from the face side to the back side of the textiles and a similar level of electroless copper deposit on both sides.
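The bending-length results above, and the flexural rigidity values calculated next from Equations (2) and (3), follow standard textile-mechanics relations. Those equations are not reproduced in this excerpt, so the sketch below is only an illustration using the conventional forms (fractional cover ≈ threads per cm × yarn diameter; flexural rigidity per unit width G = m·c³; overall rigidity G0 = √(G_warp·G_weft)); the yarn diameters and areal mass it uses are hypothetical placeholders, not values measured in this study, and only the thread densities (69 EPC, 30 PPC) come from the text.

```python
# Illustrative sketch (not the paper's exact Equations (2)-(3)): conventional
# cantilever stiffness-test relations and a simple geometric cover estimate.
import math

def fractional_cover(threads_per_cm: float, yarn_diameter_cm: float) -> float:
    """Fraction of area covered by yarns in one direction (capped at 1)."""
    return min(threads_per_cm * yarn_diameter_cm, 1.0)

def flexural_rigidity(areal_mass_g_m2: float, bending_length_cm: float) -> float:
    """Flexural rigidity per unit width, G = m * c^3, in mg*cm."""
    mass_mg_cm2 = areal_mass_g_m2 * 0.1          # g/m^2 -> mg/cm^2
    return mass_mg_cm2 * bending_length_cm ** 3

def overall_rigidity(g_warp: float, g_weft: float) -> float:
    """Overall flexural rigidity, G0 = sqrt(G_warp * G_weft)."""
    return math.sqrt(g_warp * g_weft)

# 69 ends/cm and 30 picks/cm are from the paper; the yarn diameters (cm),
# areal mass (g/m^2) and bending lengths (cm) below are placeholders.
warp_cover = fractional_cover(69, 0.012)
weft_cover = fractional_cover(30, 0.020)
g_warp = flexural_rigidity(areal_mass_g_m2=120, bending_length_cm=2.1)
g_weft = flexural_rigidity(areal_mass_g_m2=120, bending_length_cm=1.6)
print(f"warp cover {warp_cover:.2f}, weft cover {weft_cover:.2f}")
print(f"G_warp {g_warp:.1f} mg*cm, G_weft {g_weft:.1f} mg*cm, "
      f"G0 {overall_rigidity(g_warp, g_weft):.1f} mg*cm")
```

With plausible inputs the weft direction shows the lower cover and lower rigidity, consistent with the trend described above.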
The values of flexural rigidity and the overall flexural rigidity of the metallised textiles printed with the catalyst at different numbers of cycles were calculated using Equations (2) and (3) in Section 2.4.3 and are plotted in Figures 10 and 11, respectively. Figure 11 indicates that the overall flexural rigidity (stiffness) of the textiles was higher on the face side compared to the back side for each number of printing cycles and correspondingly increased with increasing the number of printing cycles on both face and back side. As was expected from the bending lengths results, the difference between the overall flexural rigidity of the face and back side of the metallised samples reduced at the higher number of printing cycles of the catalyst. For example, while the difference in the overall flexural rigidity of the face and back side of the metallised samples printed with catalyst at two cycles was 4 mg·cm, this difference was only 0.48 mg·cm for the metallised samples printed with catalyst at five cycles. This results in the electroless copper coating the printed face side of the textile and some of the internal fibres creating a coated conductive back side as well. For clarity, the images of the back side of the metallised textiles printed with the catalyst at different numbers of cycles and directions are shown in Figure 12. It can be observed that the increased number of printing cycles of the catalyst resulted in a more complete copper coating on the back side of the samples in both printing directions. The visible electroless copper coatings with high conductivities on the back side of the textiles together with an almost identical overall flexural rigidity of the face and back side of the metallised textiles (but low enough for comfort) are useful indicators to identify the successful printing conditions required to achieve conductive e-textiles with high performance. For the Cu-Ag NP catalyst used in this paper, from Figure 11, five printing cycles seemed to be the optimum to obtain the mentioned characteristics on the polyester textile used in this study. Laundering Durability and Fastness Test (Launder Ometer Test) The metallised textiles printed with the Cu-Ag NP catalyst at different conditions were washed at different numbers of cycles. After each washing cycle, the electrical resistance on the face and back side of the metallised samples were measured using a multimeter. The aim was to observe the changes in their electrical resistance as well as the colour fastness to washing following each washing cycle. The results are shown in Figure 13 and Figure 14 for the metallised samples printed with the Cu-Ag NP catalyst in the warp and weft directions, respectively. Figure 13 indicates that the resistance values of the metallised patterns printed with the catalyst in the warp direction increased significantly with increasing the number of washing cycles, at different numbers of printing cycles. The same trend was also observed for the metallised samples printed with the catalyst in the weft direction ( Figure 14). As Figure 13 shows, the resistance of the metallised textile printed with the catalyst at two cycles in the warp direction increased significantly at the face side of the fabric from 1.3 Ω to 8.3 Ω after one washing cycle. After that, the sample became non-conductive. The metallised sample printed at three cycles became non-conductive after three washing cycles, while the one printed at four cycles became non-conductive after four washing cycles. 
After the initial washing cycles, the resistance values increased moderately due to the removal of the unfixed copper particles from the textiles' surface [60]. After that, the resistance values increased more significantly as a result of the repeated impact of the increased mechanical stress, water stress, temperature, and the influence of detergents [61]. As expected, in both printing directions, the resistance values were slightly higher on the back side of the metallised textiles when compared to their face side, independent of the number of printing and washing cycles (Figures 13 and 14). For the same number of printing and washing cycles, the metallised samples printed with the catalyst in the weft direction showed higher resistance values compared to the ones printed in the warp direction. This was due to the higher initial resistance of the metallised samples printed with the catalyst in the weft direction compared to the ones printed in the warp direction, for all numbers of printing cycles, before any washing test, as described in Section 3.2.2. The laundering durability results confirmed the necessity of protecting the conductive tracks (the metallised surface of the textiles) in practical applications against corrosion and wear damage resulting from the water stress, temperature, and mechanical stress of washing cycles. There are a variety of ways of protecting the electroless copper deposit, including coating it with a polymer layer. Acrylate resin, polyurethane resin and polydimethylsiloxane have been shown to protect copper-plated tracks on polyester textiles against wear without affecting the tracks' conductivity [62]. The ISO standard Grey Scales were used to visually assess the colour change on the washed samples and the colour staining on the white multifibre fabrics from the metallised samples after each washing cycle. The results are listed in Table 1.
It was observed that, independent of the number of cycles and direction of the catalyst printing or the number of washing cycles, there was no significant difference in the colour staining of the metallised samples. The colour staining grade was 4/5, which is considered Good. This indicated that there was only slight staining on the white multifibre fabrics, which was acceptable. This staining was probably due to the presence of metallic copper particles on the metallised samples; however, these particles did not adhere to the surface of the white multifibre fabrics. On the other hand, the lowest grade of the colour change (shade change) on the metallised samples at the increased numbers of washing cycles was three. This may be due to the influence of the detergent, water, and mechanical forces on the copper particles during the washing cycles. Conclusions In this study, a functionalised Cu-Ag NP catalyst was successfully synthesised and used as an ink-jet printing ink for the selective catalysation of polyester textiles. The presence of a core-shell structure in the synthesised catalyst was proven to protect the copper from oxidation. Different printing conditions were applied during ink-jet printing of the catalytic ink, including varying the number of printing cycles and the printing direction, to see their effect on the final properties of the metallised textiles. Increasing the number of printing cycles enhanced the electroless copper mass gain in both printing directions. Although the textiles printed with the catalyst at the same number of printing cycles had similar electroless copper mass gain, the conductivities were lower on the ones printed in the weft direction, especially at lower numbers of printing cycles. This was probably due to the textile's structure leading to higher total ink spreading on textiles printed with the catalyst in the weft direction. It was found that ink spreading can be minimised by careful selection of the textile's structure and the printing conditions, including printing design, printing direction and number of printing cycles. Due to the lower weft cover factor compared to the warp one, the stiffness of the metallised samples was lower in the weft way at the same number of printing cycles. As a lower weft cover factor leads to deeper absorption of the catalytic ink by the fibres, the weft way stiffness was similar on the face and back side of the metallised samples regardless of the number of printing cycles. On the other hand, for the warp way metallised samples, stiffness was very different between the face and back side at the lower numbers of printing cycles; however, it became similar at the higher printing cycles. Independent of the printing direction, metallised samples printed with the catalyst at five cycles had highly conductive, complete electroless copper coatings on their back side. For all numbers of printing cycles and directions, the conductivity of the metallised samples decreased after each washing cycle. While the decrease was moderate for the initial washing cycles, it became more significant for the higher numbers of washing cycles. The colour staining grade was considered Good (grade 4/5) for all the metallised samples, while the lowest colour change grade (i.e., three) only occurred at the increased numbers of washing cycles.
For the first time, the results of this study indicate the importance of understanding the textile's structure, including the yarns' thickness and density and the fabric's cover factor in the warp and weft directions, and how these influence the properties of the metallised textiles, such as conductivity, stiffness and laundering durability. Therefore, for each specific textile, the printing conditions should be optimised to deposit the catalyst in such a way as to provide a metallised pattern with the properties required for use as e-textiles.
12,625.2
2022-08-25T00:00:00.000
[ "Materials Science", "Engineering" ]
Whirling orbits around twirling black holes from conformal symmetry Dynamics in the throat of rapidly rotating Kerr black holes is governed by an emergent near-horizon conformal symmetry. The throat contains unstable circular orbits at radii extending from the ISCO down to the light ring. We show that they are related by conformal transformations to physical plunges and osculating trajectories. These orbits have angular momentum arbitrarily higher than that of ISCO. Using the conformal symmetry we compute analytically the radiation produced by the physical orbits. We also present a simple formula for the full self-force on such trajectories in terms of the self-force on circular orbits. JHEP03(2017)014 paper we consider orbits with angular momentum arbitrarily higher than that of ISCO. 1 We first solve for a 1-parameter family of unstable equatorial circular orbits, labeled by their angular momenta, whose radii extend from the ISCO down to the light ring. We then find a 1-parameter family of conformal mappings that transform them, altogether, to a 2-parameter family of physical orbits, labeled by their energy and angular momentum. These are either plunges or grazing "zoom-whirl" orbits. They include plunges that naively (i.e. neglecting backreaction) overspin the BH beyond extremality [21][22][23]. For all physical orbits we solve for the corresponding field profile including the observed waveform at future null infinity. Finally, we discuss an application of our results to the study of the self-force, arguing that the self-force on any of the 2-parameter family of physical orbits may also be obtained via the above-mentioned coordinate transformation from the much simpler case of the circular orbit. The rest of the paper is organized as follows. In section 2 we introduce a family of transformations that map unstable circular orbits in the near-horizon geometry of nearextreme Kerr to generic 2-parameter orbits with angular momentum higher than that of ISCO. In section 3 we use these mappings to solve for the field emitted by such orbits, including the observed radiation at future null infinity. In section 4 we argue that the full self-force for the generic orbits -not only the radiative part -is given via our mappings from the simpler circular orbit case by a compact analytic formula. The appendix contains details of the derivation of the conformal mappings employed in this paper. Trajectories & mapping In Boyer-Lindquist coordinates, the Kerr metric describing a BH with mass M and angular momentum J = aM is given by (G = c = 1) where ∆ =r 2 − 2Mr + a 2 ,ρ 2 =r 2 + a 2 cos 2 θ . (2. 2) The horizons are at r ± = M ± √ M 2 − a 2 . The angular momentum of Kerr BHs is bounded from above by the extremal value a = M . Close to extremality the near-horizon dynamics is greatly simplified by the presence of conformal symmetry [6,7]. The limit is best explored in Bardeen-Horowitz coordinates Parameterizing the deviation from extremality by JHEP03(2017)014 the near-horizon metric at R ∼ κ reads This metric, which describes the near-horizon geometry of near-extreme Kerr, is referred to as near-NHEK [24] and it solves the Einstein equation on its own. The near-horizon geometry of extreme Kerr, referred to as NHEK, is given by [6] This metric solves the Einstein equation on its own as well. Note that NHEK also describes the near-horizon geometry at κ ≪ R ≪ 1 in the throat of near-extreme Kerr (see e.g. appendix A in [11]). 
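For reference, the near-NHEK and NHEK metrics referred to above are usually written in the Kerr/CFT literature in the following form (conventions and the labelling of the angular functions may differ slightly from the equations (2.5)-(2.7) of the paper itself):

\[
ds^2 = 2M^2\,\Gamma(\theta)\left[-R(R+2\kappa)\,dT^2+\frac{dR^2}{R(R+2\kappa)}+d\theta^2+\Lambda(\theta)^2\bigl(d\Phi+(R+\kappa)\,dT\bigr)^2\right],
\]
\[
\Gamma(\theta)=\frac{1+\cos^2\theta}{2},\qquad \Lambda(\theta)=\frac{2\sin\theta}{1+\cos^2\theta},
\]
and setting \(\kappa=0\) gives the (Poincaré) NHEK metric
\[
ds^2 = 2M^2\,\Gamma(\theta)\left[-R^2\,dT^2+\frac{dR^2}{R^2}+d\theta^2+\Lambda(\theta)^2\bigl(d\Phi+R\,dT\bigr)^2\right].
\]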
NHEK and near-NHEK are different patches in the global-NHEK space-time [6] (2.8) The isometry group is SL(2, R) × U(1) with time translations being part of the SL(2, R). In the near-NHEK geometry (2.5) there exist equatorial circular orbits at any radius above the light ring Their trajectories are given by and they carry near-NHEK energy and angular momentum (2.12) These circular orbits are all unstable (see e.g. appendix B in [9]). Therefore, naively, one might think that they are irrelevant for realistic physical situations. This is not the case. Consider the following conformal transformations JHEP03(2017)014 where χ is a constant. As explained in the appendix, these transformations may be thought of as aT →T − χ translation followed by a τ → τ − π/2 translation. The transformations (2.13) leave the near-NHEK metric invariant, but map the trajectory (2.10) to These are a 2-parameter family of near-NHEK trajectories which carry near-NHEK energy and angular momentum 2 For χ > −1 the trajectories (2.15) are plunges which start from the near-NHEK boundary at t = 0 and fall into the future horizon. For χ < −1 they are osculating orbits which penetrate the throat at the near-NHEK boundary at t = 0 and then exit it at some later finite time. This is illustrated in figure 1. Radiation field Consider a point-like scalar-charged object on a geodesic x * (τ ) which couples to a massless scalar field Ψ. The system is described by the action where λ is a coupling constant and In each case the asymptotically flat region of Kerr is to be attached to the shaded wedge. The wedge bounded by R = 0 and R = ∞ is a near-NHEK patch in coordinates (2.5). The wedge bounded by r = 0 and r = ∞ is a near-NHEK patch in coordinates (2.14). The line at R = R 0 is a circular orbit in (2.5) which in (2.14) is seen as a plunging orbit in (a) and (b), or as an osculating orbit in (c). We are interested in solving for the radiation field produced by an object on the physical orbits (2.15). We may do so analytically by solving first the much simpler case corresponding to the circular orbits (2.10) and then transforming the solution according to (2.13). For the circular orbits (2.10) in the near-NHEK patch (2.5), decomposing the scalar field as with and substituting into the equations of motion for (3.1) one finds 2M S ℓ (π/2). The solutions S(θ) of the (extremal) spheroidal wave equation (3.5) and their associated eigenvalues K are well known (e.g. they are built in Mathematica as SpheroidalPS and SpheroidalEigenvalue). The solution to (3.4) is a Green's JHEP03(2017)014 function constructed from homogeneous solutions properly matched at R = R 0 . A particularly useful basis of homogeneous solutions for near-NHEK physics is [11] R in obeys ingoing boundary conditions at the horizon, R in (R → 0) = R − im 2 (Ω/κ+1) , and R N obeys Neumann boundary conditions at the near-NHEK boundary, R N (R ≫ κ) = R −h . The solution to equation (3.4) with these boundary conditions is given by where R < and R > are the lesser and greater of R 0 and R, respectively, and Physical orbits The solution for the physical orbits (2.15) in the near-NHEK patch (2.14) is obtained analytically by applying the transformation (2.13) on the above solution for the circular orbit. This yields the near-NHEK solution with causal boundary conditions at the horizon r = 0 and Neumann at r ≫ κ. 
Indeed, for r ≫ κ and fixed t, φ the transformation (2.13) reduces to so that plugging into (3.8) and (3.3) we obtain, for t > 0, where In order to find the radiation field at asymptotically flat infinity one needs to match this near-NHEK solution to a solution in the far Kerr geometry. This may be done using the method of matched asymptotic expansions (MAE). For a detailed presentation of the JHEP03(2017)014 method applied to a different but similar solution in near-NHEK we refer the reader to section 4.4 in [9]. Here we will only review the general idea and necessary definitions and give the final result for the orbits (2.15) considered in this paper. Consider a scalar field on the full near-extreme Kerr geometry in Boyer-Lindquist coordinates and expand in modes withŜ ℓ (θ) the standard Kerr spheroidal harmonics. Identify the near-NHEK geometry which contains the orbits studied in this paper as the region given by r = (r − r + )/r + ≪ 1. Let the dimensionless Hawking temperature and rescaled near-superradiant frequency be with Ω H = a/(2M r + ) the horizon angular velocity. 3 The solution for the field in the far Kerr region, r ≫ max(τ H , nτ H ), with no incoming radiation from past null infinity is given by equation (4.14) in [9]. Identifying ω = (n − m)κ the near-NHEK solution may be matched to the far Kerr solution in the overlap region max(τ H , nτ H ) ≪ r ≪ 1. It should be noted that matching to the purely outgoing solution at null infinity requires modifying the Neumann boundary condition on the near-NHEK solution to so called "leaky" boundary conditions [8]. These are such that at the boundary of near-NHEK, which is in the matching region, we have the appropriate ratio of Dirichlet to Neumann modes that ensures the correct amount of radiation leaks through the near-NHEK boundary and reaches future null infinity. In the end, the result for the waveform at future null infinity, for m > 0, is given by: Self-force In this section we propose another application of the near-extreme conformal symmetry: computation of the self-force (SF) on generic equatorial orbits in the near-horizon region. We deal with the scalar case, as in throughout this paper, but a generalization to gravity should be possible along lines similar to those taken in [9]. In the gravitational case, reconstruction of the metric is required to find the full SF (see [25] for recent progress). JHEP03(2017)014 The scalar SF is given by where Ψ R is the regular piece of the field as defined by Detweiler and Whiting [26]. The transformation (2.13) is a diffeomorphism: locally, it is just a change of coordinates. Since the singular-regular decomposition is defined locally and is insensitive to changes in the global spacetime structure, the same decomposition will hold after performing (2.13). Now, the quantity F (SF ) a transforms to the plunge coordinates as a proper vector: where X A stands for the coordinates in (2.5) and x a stands for the coordinates in (2.14). X A (x a ) is given by (2.13). This gives a remarkably simple formula for the SF on generic near-horizon orbits from the circular SF. It is important to note that, to leading order in the deviation from extremality, the SF computation is insensitive to the boundary conditions in the asymptotically flat region. 4 It is possible, therefore, to use the solution (3.8) for SF computations without needing to worry about matching as in (3.15). It will be interesting to numerically test our formula (4.2) for the self-force. 
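For reference, since the self-force carries a lower index, the transformation referred to in section 4 is presumably the usual covector transformation law under the coordinate change \(X^A(x^a)\) of (2.13):

\[
F^{(\mathrm{SF})}_{a}(x)\;=\;\frac{\partial X^{A}}{\partial x^{a}}\;F^{(\mathrm{SF})}_{A}\bigl(X(x)\bigr),
\]

where \(F^{(\mathrm{SF})}_{A}\) is the self-force evaluated on the circular orbit in the coordinates (2.5) and \(F^{(\mathrm{SF})}_{a}\) is the induced self-force on the corresponding plunging or osculating orbit in the coordinates (2.14).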
A Conformal mappings of near-NHEK In this appendix we give some details in relation to the conformal transformation (2.13) employed in this paper in order to map unstable circular orbits in near-NHEK to physical plunges and osculating orbits. In particular, we show how (2.13) may be obtained by composing a NHEK time translation T → T − χ with a global time translation τ → τ − π/2. R = r + χe^{κt} r(r + 2κ), … (Footnote 4: Apart from certain fine-tuned boundary conditions that eliminate the Neumann mode at the boundary.) Note that the transformation of the circular orbit (2.10) in (2.5) gives rise to orbits in (A.1) which are qualitatively very similar to the ones studied in the paper. The only difference is that these orbits enter the throat in the infinite near-NHEK past (t = −∞). In the terminology of [10] these are "slow" plunging or osculating orbits. Note that the transformation (A.3) corresponds to the χ = 0 case of (2.13). Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
2,769.2
2017-03-01T00:00:00.000
[ "Physics" ]
Large-scale nanoporous metal-coated silica aerogels for high SERS effect improvement We investigate the optical properties and surface-enhanced Raman scattering (SERS) characteristics of metal-coated silica aerogels. Silica aerogels were fabricated by easily scalable sol-gel and supercritical drying processes. Metallic nanogaps were formed on the top surface of the nanoporous silica network by controlling the thickness of the metal layer. The optimized metallic nanogap structure enabled strong confinement of light inside the gaps, which is a suitable property for SERS effect. We experimentally evaluated the SERS enhancement factor with the use of benzenethiol as a probe molecule. The enhancement factor reached 7.9 × 107 when molecules were adsorbed on the surface of the 30 nm silver-coated aerogel. We also theoretically investigated the electric field distribution dependence on the structural geometry and substrate indices. On the basis of FDTD simulations, we concluded that the electric field was highly amplified in the vicinity of the target analyte owing to a combination of the aerogel’s ultralow refractive index and the high-density metallic nanogaps. The aerogel substrate with metallic nanogaps shows great potential for use as an inexpensive, highly sensitive SERS platform to detect environmental and biological target molecules. Recently, low-cost, large-area plasmonic sensing platforms have been intensively studied for high-throughput chemical and biological analysis. Although the electric field distribution is considerably changed by the substrate index, the importance of substrate refractive indices has been overlooked in previous studies. For example, the light-matter interaction volume in nanoplasmonic sensing systems is substantially changed by substrate indices. In the refractive index sensing of the surrounding medium and detection of the surface absorbed biological molecules, sensitivities are improved on low index substrates (Teflon, n ~ 1.3) as a consequence of greater pushing of the electric field towards the sensing region compared with that of higher index substrates (TiO 2 , n ~ 2.4) 1,2 . Surface-enhanced Raman scattering (SERS) sensing, based on cubic metal nanoparticles, also benefits from lower index substrates such as glass (n ~ 1.5) 3,4 and flower-like alumina-coated etched aluminum foil (n ~ 1.4) 5 . Plasmonic nanostructures used in optical sensing support quite different electromagnetic mode profiles. Plasmonic index sensing platforms generate an anti-symmetric mode across a thin metal film 1 , whereas nanocube-based SERS platforms support symmetric modes to enhance the far-field signal 3,4 . One approach is to use a low-refractive index material as a substrate to increase the electric field intensity at the sensing area. Numerous studies have been performed to develop low refractive index materials (n = 1.1-1.3) including porous silica, sponge-like block copolymers, glancing angle deposited nanowires 6 . Among low refractive index solid-phase materials, silica aerogels have the lowest refractive index (less than 1.01) and high optical transparency over 90% 7 . Silica aerogels, a porous bulk material made up of nanometer-scale silica chain networks, are also uniquely lightweight and superb thermal insulators. The porosity of these materials exceeds 98.5% and their thermal conductivity is as low as 0.25 Wm −1 K −18-10 . 
A number of industrial and aerospace applications have been demonstrated, including supercapacitors, fuel storage, catalysts, acoustic impedance matching transducers, cosmic dust collectors, and thermal insulating materials used on the space shuttle 11 . The optical properties of silica aerogels have been extensively studied by absorption spectroscopy 12,13 and photoluminescence in a powdered form 14 . The behavior of silica aerogels as quantum yield enhancers 15 has also received attention together with their Raman scattering phenomena [16][17][18][19][20][21][22] . However, the use of silica aerogels in optical sensing platforms has not yet been considered despite the unique optical properties of these materials. Here, we apply a metal-coated silica aerogel as a SERS template, to obtain high SERS effect improvement. Because the averaged electromagnetic energy density in a material is proportional to the material's dielectric permittivity and its dispersion 23,24 , the electric field inside the silica aerogel could be smaller than that of other higher refractive index materials. When a silica aerogel is attached to a semi-infinite metal, most of the electric field concentrates at the interface between the aerogel and metal with a long plasmon fringing field depth toward the silica aerogel. If the metal layer is sufficiently thin, for example, thickness λ  t (wavelength), the electric field is strongly confined at the metal layer with symmetric or anti-symmetric modes of the surface plasmons and a fringing electric field depth stretched both toward the air and silica aerogel. Considering the law of energy conservation, the extruded electromagnetic energy of the silica aerogel transferred to the other side, i.e., the air/metal interface where analyte molecules are located, as described in Fig. 1a. Therefore, if arranged correctly, it is possible to achieve a localized and enhanced electric field at the location of the target molecule. In this study, we fabricated a high-density metal nanogap structure on ultra-low index substrates by a low-cost, simple, and scalable manufacturing process, for potential use as a SERS template. We theoretically and experimentally studied how the ultralow index and nanoscale roughness of the silica aerogels affected the electric field distribution and improved SERS sensitivity. We found that the SERS enhancement factor for benzenethiol exceeded . × 7 9 10 7 owing to optimization of the density and size of the metallic nanogaps. Experimental Aerogel and sample fabrication. Figure 1a shows a schematic cross-section of the metal-coated silica aerogel with adsorbed benzenethiol molecules. A schematic diagram of the fabrication process for the silica aerogel and SERS template is shown in Fig. 1b. We placed 0.5 g of urea, 4.76 g of methyltrimethoxysilane (MTMS), 1.1 g of nonionic surfactant poly(ethylene oxide)-blockpoly(propylene oxide)-block-poly(ethylene oxide) triblock copolymer (Pluronic F127) in 7.0 g of 10 mM acetic acid solution 7 . The mixture was stirred at room temperature for 30 min, then slowly poured into a Petri dish. The Petri dish was covered and placed in an oven at 60 °C for 2 days to form a solid gel. The gel was immersed in distilled water for 1 day to eliminate residual chemicals held within the silica chain network. To exchange the solvent with isopropyl alcohol (IPA), the gel was soaked again in isopropyl alcohol (IPA) at 60 °C for 2 days. 
Subsequently, the gel was maintained in a chamber with liquid carbon dioxide at 80 °C, and 135 bar as the supercritical drying conditions. The average pore size, porosity, and specific surface area of the fabricated silica aerogels were 59.5 nm, 83.9%, 575 m 2 /g, respectively 7 . To prepare SERS-active substrates, 30 or 60 nm of silver (or gold) was deposited onto the aerogel surface by an electron beam evaporator. The two different thicknesses of the metal film were deposited to control the surface structure of the metallic layer on the aerogel substrate. Structural and optical characterization and SERS. Structures of bare and metal-coated aerogels were characterized with a scanning electron microscope and focused ion beam (SEM/FIB, Nova 200 NanoLab, FEI Company). We used UV/VIS -NIR Spectrophotometer (UV3600, Shimadzu Scientific Instruments) with an integrating sphere (MPC-3100) to obtain the total reflectance and transmittance spectra. We selected benzenethiols (BZTs) as Raman probe molecules, which were adsorbed onto bare aerogel, metal-coated aerogels, and glass. The BZT molecules formed well-ordered self-assembled monolayers (SAMs) on the gold (silver) surfaces through strong S-Au (S-Ag) bonds both in the vapor phase and in liquid environments [25][26][27][28][29] . Moreover, these molecules can be modified by a various specific functional groups such as 4-MBA or 4-MPBA to promote the binding of target analytes for pH detection or glucose sensing 30,31 . A drop of BZT on a petri dish and the SERS substrates were left in a desiccator. A mechanical pump was used to evacuate the desiccator and produce a BZT-vapor saturated environment. The SERS substrate was left in the desiccator overnight for the BZT to adsorb to the surface of the substrate. Benzenethiol formed a monolayer on the surfaces with a surface density of 0.45 nmol/cm 2 32 . Raman spectra were collected with a Lab Ram ARAMIS Raman spectrometer (Horiba Jobin Yvon). A He-Ne laser (633 nm) was used as a light source for excitation. Result and Discussion Ultralow refractive index of aerogel. We experimentally measured the refractive index of bulk aerogels with the use of Snell's law. A schematic diagram is shown in Fig. 2a. A CCD camera was aligned with a He-Ne laser beam (633 nm) propagating in a free-space. We then placed the aerogel sample into the beam path of the laser. The lateral shift (y) of the laser beam, after passing through the aerogel block, with uniform thickness (d), was measured by the CCD camera (see Fig. 2b). We used Snell's law and trigonometric relationships to determine the aerogel's refractive index to be 1.08 33 . To confirm the measured index value by another experimental method, the laser beam was coupled from air into the facet of a 1.8 mm-thick aerogel block with a different angle of incidence (see Fig. 2c). When the angle of incidence was greater than 67.1°, total internal reflection occurred at the aerogel/air interface, indicating that the refractive index of the aerogel was 1.08. Geometrical structure of aerogel surface. Figure 3a shows a digital camera image of two large silica aerogels. The as-prepared aerogel had a highly mesoporous structure owing to the cross-linked silica chain Fig. 3c,d,g,h). These metallic nanogaps could strongly concentrate optical fields and thus behaved as Raman-active hot spots, as illustrated in Fig. 1a. The surface morphology of the metallic thin film varied depending on the thickness of the deposited metal. The 30 nm metal-coated aerogels (Fig. 
3e,i) exhibited a higher gap density than that of the 60 nm metal-coated aerogels (Fig. 3f,j). Also, 30 nm thick metal-coated aerogels showed higher surface roughness than 60 nm metal-coated aerogels. AFM images with root mean squared roughness (R q ) are shown in Supplementary Fig. S3. In contrast, the metal layers deposited on glass substrates are planar surface without nanogaps ( Supplementary Fig. S1). And we obtained the cross-sectional SEM images by FIB-SEM system, as shown in Fig. 3e,f,i,j. The images clearly show the thin metal film (false-colored with yellow), nanogaps, and silica chain networks. To avoid the damage of thin metal film morphology on the aerogel during the FIB/SEM measurement, we carried out both E-beam and Ion beam assisted Pt deposition in sequence. The E-beam deposited Pt atoms penetrate into the nanopores of aerogel below the metal film, resulting in the disappearance of porosity of the aerogel substrate. However, from the Ion beam assisted Pt deposition image without E-beam assisted method (see Supplementary Fig. S2), we can clearly see the nanopores of aerogel below the thin metal film. FDTD simulation. We used 3D finite-difference time-domain (FDTD) methods to simulate the optical properties of the metal-coated aerogel structures. To investigate the effects of the substrate index on Raman enhancement, we calculated the electric field distributions depending on the substrate index without changing the metallic nanogap structure: aerogel (n = 1.08), glass (n = 1.52), and silicon (n = 3.88) 1,4,34 . In reality, the metal layer deposited on glass or silicon substrate is planar thin film without nanogaps, which are formed only on nanoporous aerogel substrate. For SERS effect, to investigate the effect of lower refractive index substrate, while excluding the different metal layer's morphology effect, we assumed the virtual cases that the exactly same morphology of metallic nanogap structure is deposited on the substrates with varying refractive index, such as aerogel, glass, silicon. The calculation was performed with commercial FDTD software (Lumerical Inc., Canada). We used the top-view SEM images of the metal-coated aerogel to create a model structure (Fig. 4a,e). The images were converted into binary images and then imported into the FDTD software 35 . Figure 4b-d,f,g,h) represent the calculated electric field profile of the 30 nm (60 nm) Ag-coated aerogels, glass, and silicon, respectively, when the 633 nm plane wave was illuminated at normal incidence. For the light excitations by 532 nm and 785 nm plane waves, the calculated electric field profiles are shown in Figs S4 respectively. These results show that, for the field enhancement, the metallic nanogap geometry formed by aerogel substrate is the main factor; however, the ultralow index substrate also makes a part of contribution. The SERS enhancement can be approximated by: 2 2 where = E E E E / ( loc inc Loc : local electric field amplitude at the molecule position, E inc : incident field amplitude), ω is the incident lights frequency, and ω′ is the Stokes-shifted frequency. With the assumption that ω ω ≅ ′ E( ) E( ) , the enhancement factor follows the so-called, | | EF E 4 law [36][37][38][39] . Recently developed molecular cavity optomechanics provides an understanding of the strongly coupled vibrational modes of nuclei (phonon) and (cavity) surface plasmon, predicting similar | | E 4 -law-like Raman enhancement under certain conditions 40 Optical properties. 
We characterized the optical properties of the bare and metal-coated aerogel in the wavelength range between 300 and 1000 nm. Because the incident light was scattered when it passed through the aerogel, we used a UV-VIS-NIR spectrophotometer fitted with an integrating sphere to measure the total (diffuse + specular) reflectance and transmittance. Because of the ultralow refractive index of the aerogel, which was close to that of air, the Fresnel reflection at the air-aerogel interface was considerably suppressed. As shown in Fig. 5a,b, the bare aerogel exhibited very low reflectance and high transmittance in the visible region. Conversely, the silver-coated aerogels showed increased reflectance and reduced transmittance as the coating thickness increased. Figure 5c,f show the absorption spectra of the aerogel and silver-coated aerogels obtained from 100-T-R (%). The absorption of the 30 nm silver-coated aerogel was enhanced over a broad wavelength range by the randomly distributed metallic nanogaps 42,43 . The nanogaps generated randomly shaped hot spots with a high density where the electromagnetic field was strongly confined by localized surface plasmon resonance (LSPR) 44 . Interestingly, Figure 5. Measured optical (a) reflectance, and (b) transmittance spectra of the bare aerogel (black), 30 nm silver-coated aerogel (red), and 60 nm silver-coated aerogel (blue). (c) Absorption spectra of the bare (black), 30 nm (blue) and 60 nm (red) silver-coated aerogel calculated from 1-R-T. Measured optical (d) reflectance, and (e) transmittance spectra of the bare glass (black), 30 nm silver-coated glass (red), and 60 nm silver-coated glass (blue). (f) Absorption spectra of the bare (black), 30 nm (blue) and 60 nm (red) silver-coated glass calculated from 1-R-T. Black dotted line indicates excitation wavelength (633 nm). the aerogel with the 30 nm thick silver coating showed stronger absorption than that of the aerogels with the 60 nm thick silver coating. We attribute this strong absorption to the formation of more plasmonic nanogaps on the surface of the 30 nm silver-coated aerogel compared with the 60 nm silver-coated substrate, as shown in the top-view SEM images (Fig. 3c,d). However, the 60 nm thick Ag-coated aerogel showed similar absorption with bare aerogel in visible wavelength because of the lack of metal nanogaps. The absorption at λ < 400 nm was mainly induced from the interband transition of Ag, not from the LSPR 45 . For reference, we also measured the optical properties of the 30-and 60 nm-thick silver coatings on planar glass, as shown in Fig. 5d-f. In these cases, no absorption enhancement was observed from hot spots at nanogaps. Similar to silver-coated aerogels, the absorption of the 30 nm gold-coated aerogel was enhanced by LSPR over a broad wavelength range at λ > 500 nm. Besides, the enhanced absorption at λ < 500 nm is mainly caused by the interband absorption of gold (See Supplementary Fig. S6) 45 . SERS measurements and enhancement factor. When the light was incident on the metal surface under specific conditions, the wave may excite LSPR on the surface. This effect leads to strong amplification of the light in the near field of the surface, resulting in a large enhancement of the Raman scattering 46 . To investigate the effect of the LSPR on the SERS signal, Raman spectra of BZT were obtained with a Raman spectrometer operating with He-Ne (633 nm) laser excitation. 
The excitation wavelength of 633 nm was arbitrarily chosen among the commonly used wavelengths (such as 532 nm, 633 nm and 785 nm) for Raman spectroscopy because the metal-coated aerogel was expected to show SERS enhancement over a broadband wavelength regime due to the broadband LSPR excitation. As described in Fig. 6a-d, the BZT on the metal-coated aerogel exhibited strongly amplified Raman features. The enhanced Raman signals come from the strong electric field localized by the metal nanogaps and from the electric field pushed towards the sensing area by the ultralow index of the substrate. The BZT on the planar metal-coated glass showed no distinguishable Raman features under the same measurement conditions. However, when the excitation laser power was increased fourfold and the data integration time tenfold, the Raman signals from the benzenethiol monolayer on the silver-coated glass were observable, as shown in Supplementary Fig. S7. Figure 6e represents the Raman signal of benzenethiol on the bare aerogel and bare glass without the metal coating; however, the peaks were not visible. As a reference, the Raman signal from a pure (≥99%) liquid benzenethiol is also presented in Fig. 6a. To evaluate the enhancement factors of the metal-coated aerogel substrates, we performed a normalization procedure, as shown in Fig. 6f. We assumed that Raman scattering occurred only in the focal volume of the laser beam. The total number of Raman active molecules in the liquid benzenethiol could be determined from the density (1.077 g/mL) and molar mass (110.17 g/mol) of the liquid benzenethiol in the cuvette and the volume of the focal region, approximated as a cylindrical shape, with its geometrical dimensions extracted from the Gaussian beam parameters 47 . The height of the cylinder-shaped focal region was taken as twice the Rayleigh range (2Z_R) and the radius as the beam width at Z_R, which is √2 times the beam waist (√2 w_0). Therefore, the number of excited molecules could be calculated as the product of the laser excitation volume, the Avogadro number, and the density of the liquid benzenethiol divided by the molar mass of benzenethiol. The number of excited benzenethiol molecules in the monolayer on the SERS substrate could be calculated in a similar way. By multiplying the area of the laser spot, the surface density of the benzenethiol monolayer and the surface factor given by the geometry of the SERS substrate, we calculated the number of excited molecules on the SERS substrate. Through the aforementioned normalization, we calculated the enhancement factor for peaks at 999, 1,024, 1,091, and 1,580 cm −1 , as shown in Table 1 48,49 . The enhancement factors (EF) were calculated by the following formula 38 : EF = (I_SERS/N_SERS)/(I_lBZT/N_lBZT), where I_SERS and I_lBZT are the Raman intensities of the SERS substrate and the liquid benzenethiol, respectively, and N_SERS and N_lBZT are the number of excited molecules on the SERS substrate and in the liquid benzenethiol, respectively. For the samples on the aerogel substrates coated with 30 and 60 nm Ag films, the EFs were experimentally observed to be as high as 7.9 × 10^7 and 3.6 × 10^7, respectively. The larger EF and stronger SERS signal of the 30 nm Ag-coated aerogel were attributed to the higher density of hotspots per unit area compared to the 60 nm Ag-coated aerogel. The 30 nm gold-coated aerogel provided an experimentally measured SERS enhancement factor of 1.5 × 10^7, which is lower than that of the silver-coated aerogels.
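As a rough illustration of the normalization just described (not code from the original work), the sketch below counts the probed molecules in the liquid and in the monolayer and forms the EF ratio. The benzenethiol density, molar mass and monolayer surface density are the values quoted above; the beam waist, the surface roughness factor and the example intensities are hypothetical placeholders, since they are not given in this excerpt.

```python
# Sketch of the SERS EF normalization: molecules in the liquid focal volume
# versus molecules in the adsorbed monolayer. Placeholder beam parameters.
import math

N_A = 6.022e23          # Avogadro's number, 1/mol
RHO = 1.077             # liquid benzenethiol density, g/mL (from the text)
M_W = 110.17            # benzenethiol molar mass, g/mol (from the text)
SIGMA = 0.45e-9         # monolayer surface density, mol/cm^2 (from the text)
LAMBDA = 633e-7         # excitation wavelength, cm
W0 = 1.0e-4             # assumed beam waist (1 um), cm  <-- placeholder

z_R = math.pi * W0**2 / LAMBDA           # Rayleigh range, cm
height = 2 * z_R                         # focal-cylinder height
radius = math.sqrt(2) * W0               # beam width at z_R
V_focal = math.pi * radius**2 * height   # excitation volume in the liquid, cm^3

N_liquid = V_focal * RHO / M_W * N_A     # molecules probed in neat benzenethiol
A_spot = math.pi * W0**2                 # illuminated area on the substrate, cm^2
roughness = 1.0                          # geometric surface factor (assumed flat)
N_surface = A_spot * roughness * SIGMA * N_A

def enhancement_factor(I_sers: float, I_liquid: float) -> float:
    """EF = (I_SERS / N_SERS) / (I_liquid / N_liquid)."""
    return (I_sers / N_surface) / (I_liquid / N_liquid)

# Example with made-up intensities, only to show the normalization at work:
print(f"N_liquid ~ {N_liquid:.2e}, N_surface ~ {N_surface:.2e}")
print(f"EF ~ {enhancement_factor(I_sers=5.0e4, I_liquid=1.0e3):.2e}")
```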
Generally, silver nanostructures show stronger near-field enhancement than gold nanostructures because gold has larger LSPR damping due to the interband transition in the visible wavelength region 50,51 . For the 30 nm Ag-coated aerogels, the experimental EF and the theoretical maximum EF values from FDTD showed the same order of magnitude of 10^7.
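As a rough sketch of how such a theoretical maximum EF can be read off from an FDTD field monitor using the |E/E0|^4 approximation discussed earlier (assuming a negligible Stokes shift), one can post-process the exported field map as below; the array here is synthetic stand-in data, not output from the simulations reported in this work.

```python
# Minimal sketch: estimate the theoretical SERS EF from a (here synthetic)
# near-field map using the |E/E0|^4 approximation, valid for small Stokes shifts.
import numpy as np

rng = np.random.default_rng(0)
E0 = 1.0                                             # incident field amplitude
E_map = E0 * (1 + 20 * rng.random((200, 200))**4)    # stand-in for an FDTD |E| monitor

enhancement = (E_map / E0) ** 4
print(f"max EF  ~ {enhancement.max():.2e}")          # hottest hot spot
print(f"mean EF ~ {enhancement.mean():.2e}")         # spatially averaged value
```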
4,650.8
2018-10-11T00:00:00.000
[ "Materials Science", "Chemistry", "Physics" ]
An empirical depreciation model for agricultural tractors in Spain This work analyses the market value of second hand agricultural tractors in Spain for the period 1999-2002, with the aims of obtaining the most appropriate valuation models (through the use of ordinary least squares regression) and proposing an empirical model that estimates the true depreciation of these vehicles. Differences in tractor depreciation were studied in terms of the three horsepower groups normally employed (< 60, 60-90, > 90 hp), as well as in terms of a new power classification (< 80, 80-133 and > 133 hp) that appears to better reflect the influence of horsepower on the change in market value. The results show tractor depreciation to be exponential, with larger, more powerful tractors depreciating more quickly than smaller machines. Additional key words: agricultural machinery, Box-Cox models, power groups, remaining value. Introduction The mechanization of agriculture has led to remarkable advances in the competitiveness of agricultural products, reducing the costs of their production and increasing the profits enjoyed by farmers.Manual labour costs have been gradually (although greatly) reduced, but machinery costs have increased, particularly those associated with fuel and lubricants, insurance, housing/ storage, maintenance, repairs and the depreciation rate.These last three components are often confused by managers, who frequently, yet erroneously, understand them to be synonymous.Maintenance and repair cost are relatively easy to obtain and a number of studies in this area have been published [e.g., see Frank (2003), who studied these costs in combine harvesters in Argentina].However, the depreciation in the value of farm machinery (the consequence of its use and the passage of time) is without doubt more difficult to understand.It is now common for business managers to use theoretical models to try to estimate this.However, rather than rely on such theory-based models, it would be better to take into account the change in the real market price of these vehicles. Theoretical models of depreciation can be classified into three main groups depending on the weight assigned to each year of usage: linear, increasing or decreasing.In Spain, only certain methods of determining the theoretical depreciation are officially accepted with a view to fiscal effects, but these models cannot guarantee a true reflection of the depreciation suffered. In the USA, several authors have studied the depreciation of the value of farm machinery using the «present value method» (Audsley and Wheeler, 1978;Musser et al., 1986).Other American authors (see below) Palabras clave adicionales: grupos de potencias, maquinaria agrícola, modelos Box-Cox, valor residual.have sought to estimate the value of used machinery and thus determine its true depreciation.This has involved the use of economic regression methods, which are well developed in the USA due to the large amount of information available.For example, Peacock and Brake (1970), McNeill (1979), Leatham and Baker (1981), Reid and Bradford (1983), Perry et al. (1990), Hansen and Lee (1991), Cross and Perry (1995), and Unterschultz and Mumey (1996) used information from manufactures' catalogues and concessionaires. All of these authors used the regression method to estimate the remaining value of machinery, taking into account variables such as age and technical characteristics. 
portant studies.The first concentrated mainly on tractors and involved the use of linear models, but with time they evolved and expanded to other types of machinery and the use of more sophisticated, nonlinear models.Cooper (1994) conducted similar studies in England, using econometric models.In Spain, such studies have only been performed by Arias (2001) and Guadalajara (2002), and both were based on information obtained from a second hand marketing publication «Marketing Ocasión de Maquinaria Agrícola (MOMA)».The first of these studies dealt with the depreciation in the value of tractors using data corresponding to the last six months of 1997.The main conclusions were that depreciation was most intense during the first year, and worse for four-wheel rather than two-wheel drive tractors.The second study estimated depreciation in tractors in Spain and Italy during the years 2000 and 2001.In both countries the power, traction, and age of the machines were the most influential variables.It was also shown that the life of a tractor in Spain is longer -in fact almost double that of a tractor in Italy. Two promotions/legislations dating from 2005 have lent support towards making use of the real depreciation in the value of agricultural machinery in Spain: the Plan Renove promoted by the Spanish Ministry of Agriculture, Fisheries and Food (MAPA), and the introduction of the International Accounting Standards (IAS).The Plan Renove provides a series of subsidies for the renovation of Spain's tractors; this is managed by the Autonomous Regions of Spain and supported by the Asociación Nacional del Sector de Maquinaria Agrícola y Tractores (ANSEMAT).The IAS system was introduced to better reflect the true market value of farm equipment etc.The value set is meant to be the most probable market price obtainable on a theoretical day of sale.It is recommended that this value be determined by an independent expert. Information regarding the situation of agricultural machinery in Spain is provided by two official sources: the MAPA [for example in the publication «Análisis del parque de tractores agrícolas» (1996), and the Registro Oficial de Maquinaria Agrícola en España (ROMA)], and a private source, the ANSEMAT.Both sources recognize the importance of agricultural tractors in the farm machinery family, a consequence of their major presence in the sector and their rising retail price. In 2002 there were 946,053 tractors in Spain (73.57% of all agricultural machines in the country), while in 1994 there were only 789,747.The second hand tractor market in Spain is very important.Figure 1 shows the number of title changes in 2003 by machine age, and draws attention to the fact that tractors over 20 years old account for the largest number of transactions.Spain's agricultural tractor population is therefore largely obsolete.The average age of a working tractor is 16 years, and nearly one third (31.7%) are over 20 years old. The main aim of the present study was to determine the behaviour of the market value of second hand agricultural tractors in Spain, and to obtain models that estimate their value over their lifespan with respect to their horsepower.This study shows that traditional horsepower grouping appears to have no influence on second hand value; a more suitable horsepower classification is therefore proposed. 
Data sources The source of information used for obtaining the market price of used tractors was the MOMA catalogue.The MOMA acts as an intermediary, buying and selling tractors, and publishes lists of prices each semester.In the present study, seven catalogues were used, dating from December 1999 to December 2002. A matrix was then created with 12,570 observations and with the 42 variables shown in Table 2.The first two variables, the price the MOMA paid for a tractor, and the MOMA sale price (values homogenised to 2001 figures to avoid the effect of inflation), are the variables the proposed model hopes to explain.Two models were constructed, one to explain the MOMA purchase price, and one to explain the sale price.However, these models were very similar, and only the latter is therefore presented.The f irst four explanatory variables are of a temporal nature: the homologation year (which is supposed to coincide with the year the tractor is sold new), the appraisal year (or the year when the catalogue was published), the publication semester, and finally the age of the tractor (estimated as the difference between the appraisal and homologation years).A second group of variables refers to the mechanical characteristics of the tractors: power (hp), number of cylinders, and whether the engine has turbo capability. In a third group, the locomotive characteristics of the machine are taken into account: whether the tractor is two-wheel or four-wheel drive, and whether it has wheels or tracks.The model (standard, fruit, vineyard, articulated or rigid) is also taken into account. The fourth group of variables refers to safety and comfort (the existence of a cabin, air-conditioning, wide or thin wheels, field of vision, old or modern front, etc.). Finally the tractor manufacturer appears as a dummy or binary variable; this has also been taken into account in other studies (for references see Table 1). The number of hours of use of the tractors was a further variable employed by some authors in their models, e.g., Perry et al. (1990), but this information was not available for the present study. Methodology Ordinary least squares (OLS) regression was used to obtain the depreciation model, and cluster analysis to identify the new horsepower groups. The relationship between the absolute remaining value, V, and the explanatory variables (Table 2) was obtained with the general model shown below [1]: where b 0 , b 1 , ……, b n represent the regression coefficients of the explanatory variables. It was not possible to obtain the monetary values of tractors under four years of age; the catalogue contained no data for these years.However, using the following expression, it was possible to obtain relative monetary values for any year of usage between 4 and 29 years: [2] where k = constant, V a 1 = value of a tractor model with an age of a 1 years, and V a 2 = value of a tractor model with an age of a 2 years (a 2 > a 1 ). 
For multivariate techniques to be used, the data and the relationships between the variables must be normally distributed, homocedastic, and linear.Following the same method as other authors (see Table 1), Box-Cox transformations (Box and Cox, 1964) were performed for each variable (dependent and independent).This allows the use of functional forms ranging from geometrical to Cobb-Douglas forms to be obtained.All Box-Cox transformations were performed using the equation below [3]: ; Once the corresponding models were obtained, the adherence to normality, homocedasticity and linearity was checked by means of residual analysis. Cluster analysis (Peña, 2002) was then used to group elements or variables into homogeneous classes depending in the similarities between them. General depreciation model The model1 constructed is of the linear-logarithmic type; Table 3 shows the results obtained with this model. Figure 2 shows that, in the model, the requirements of linearity, normality and homocedasticity were adhered to since no clear tendency was seen in the dispersion between the predicted typified values and the typified values of the residuals. Tractor power, age, type of traction, the presence or not of air-conditioning, and the manufacturer together explained 89.8% of the value of the used tractors.Power alone explained 47.73% of the value, while power and age together explained 73.4%.The variables semester and appraisal year were not included in the proposed model; these factors did not seem to influence tractor value during the period 1999-2002.Neither were the variables number of cylinders, turbo-capability, tractor version, the presence of a cabin, etc. (see Table 2) taken into account due to their high correlation with horsepower; their inclusion would have generated an undesirable multicolinearity effect.The model allows some clear conclusions to be drawn: greater horsepower, four-wheel drive, and the presence of air-conditioning increases the price of used tractors, and age reduces it.When there is equality across these factors, the manufacturer affects the price; Avto tractors (Manufacturer 3) were the cheapest, and Fendt tractors (Manufacturer 9) the most expensive. The value that a tractor can demand over its life since its fourth year is shown by expression [6]. ; [6] Expression [6] can be used to estimate the percentage value of the tractor with respect to its value at 4 years.Figure 3 shows the change in the remaining value with respect to the value at 4 years. To attempt to determine the change in a tractor's value over its entire life, information was collected on showroom prices2 .This allowed the remaining value of a 4 year-old tractor to be determined as a percentage of its showroom price.Table 4 shows the percentages obtained (column 2) for tractors made by the seven main manufacturers.In the third column, the table shows the same for 29 year-old tractors calculated using equation [6].Thus, a 4 year-old tractor maintains 56.16% of its showroom value, and 16.78% of this when it has reached the age of 29 years.In other words, during the first four years of a tractor's life, its value depreciates by 43.84%; during the following 25 years it falls by a further 39.38%. 3,000 2,000 1,000 0 -9.5 -7.5 -5.5 -3.5 -1.5 0.5 2.5 4.5 Consequently, assuming an average value of 56.16% of the showroom price when a tractor is four years old, the original Figure 3 can be expanded to include the first four years of tractor life.Figure 4 shows the remaining value over the entire life span. 
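The quoted percentages can be checked against the age coefficient of the general model (reported below as -0.048), under the assumption that expression [6] has the semi-logarithmic form implied by the linear-logarithmic specification; the short computation below reproduces the 16.78% figure to within rounding.

import numpy as np

b_age = -0.048        # age coefficient of the general model (see conclusion 6)
share_at_4 = 0.5616   # remaining share of the showroom price at 4 years (Table 4)

ages = np.arange(4, 30)
rel_to_year4 = np.exp(b_age * (ages - 4))     # assumed reading of expression [6]
share_of_new = share_at_4 * rel_to_year4

print(round(float(rel_to_year4[-1]), 3))      # about 0.30 of the 4-year value at age 29
print(round(float(share_of_new[-1]), 3))      # about 0.17 of the showroom price, close to the quoted 16.78%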
Models of depreciation by power group Since horsepower was the variable that most influenced tractor value (explaining 47.73%), a model of depreciation by power group was sought (as undertaken by Perry et al., 1990;Arias, 2001;Guadalajara, 2002) in order to invest the variable age with more influence, and to identify any differences in depreciation behaviour between tractors of different horsepower.Based on the work of Arias (2001), three groups of tractors were identif ied: small (≤ 60 hp, about 28.8% of all tractors considered), medium (60-90 hp; 41.8%) and large (> 90 hp; 29.35%).Table 5 shows the coefficients used in the depreciation model in each group. The most influential variable in all groups was age, followed by power and manufacturer in the case of the smaller tractors.The type of traction did not influence the value of the small tractors, nor did the presence of air-conditioning; the smallest tractors do not have sufficient power to run air-conditioning or four-wheel drive systems. In the other two groups, the type of traction and the presence of air-conditioning were more influential on the price than the power of the machines themselves. Thus, more powerful tractors suffer greater depreciation than those of the other g roups; the coefficient of the age variable is greater.In fact, even though the mean age of tractors in the three groups was 16 years, the most common age for large tractors was 11 years, while the medium tractors had a mean age of 15 years and the small tractors a mean age of 23 years. Starting with the coeff icients for the variable «age» in each category (Table 5) and applying an expression equivalent to [2], the change in value of a tractor by power group from its fourth year is represented by: -Small tractors: [7] -Medium tractors: [8] -Large tractors: [9] See Figure 5 for a graphical representation. A proposed classification of tractors by power group; effect on depreciation The above grouping of tractors by horsepower is that most commonly used.However, in terms of tractor depreciation, this may not be the most adequate.Figure 5, for example, shows the depreciation curves of small and medium tractors to be very similar.Cluster analysis was therefore used to obtain a different power classification3 that worked better with the depreciation model.Table 6 shows the results obtained with the central values for each cluster and the number of observations.The resultant classification was: small tractors (13-79 hp), medium tractors (80-133 hp), and large tractors (134-263 hp).According to this new classification, the number of small tractors represented 62.66% of the total observations, medium tractors 31.52%, and large tractors 5.82%.Econometric models were obtained for each of these new tractor group (Table 7). In general, with the new cluster classification the model better reflected the influence of horsepower on the change in market value.In small tractors, traction became a more important variable since, under the new rating, these have more power and therefore more chances of having four-wheel drive.This classification is similar to that used by Cross and Perry (1996) (< 80, 80-150, > 150 hp) and Wu and Perry (2004) (< 80, 81-120, 121-145, > 145 hp). With respect to medium tractors, a number of variables, such as air conditioning, were no longer important.With respect to the larger tractors, the variables important in their depreciation remained the same. 
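The text does not spell out the clustering algorithm behind the new horsepower groups; the sketch below assumes a simple one-dimensional k-means with three clusters applied to hypothetical horsepower data, and is only meant to illustrate how such a grouping could be reproduced.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical horsepower sample; the study clusters the 12,570 catalogue observations.
hp = np.concatenate([rng.normal(55, 15, 600), rng.normal(100, 20, 300), rng.normal(180, 40, 100)])
hp = np.clip(hp, 13, 263).reshape(-1, 1)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(hp)
for k in range(3):
    grp = hp[labels == k]
    print(k, len(grp), round(float(grp.min()), 1), round(float(grp.max()), 1))
# Boundaries close to the paper's 13-79 hp, 80-133 hp and 134-263 hp split would be expected.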
The coefficient of the variable age did not vary in the new group of small tractors (-0.041), but it increased in the corresponding groups of medium (-0.0569 instead of -0.0455) and large tractors (-0.0687 instead of -0.0583).Thus, the new classif ication obtained greater differences in depreciation.Analogously, using these coefficients and applying expression [2], the change in value with age for each power group is represented by: -Small tractors (P < 80 hp): [10] -Medium tractors (80 hp ≤ P ≤ 133 hp): [11] -Large tractors (P > 133 hp): [12] See Figure 6 for a graphical representation. Conclusions The following conclusions can be drawn: Depreciation of tractors in Spain 139 1. Tractors are the most commonly used agricultural machinery in Spain, both in terms of present numbers and the increase in their numbers over recent years.Following the second hand tractor market is therefore very important.Tractors over 20 years of age accounted for more than 40% of all title changes in 2003; this gives an impression of the obsolete nature of Spain's tractors. 2. The Plan Renove promoted by the Spanish government in 2005 (supported by ANSEMAT), and the introduction of the IAS system justify the need to establish methods that can more accurately determine the depreciation of the country's tractors. 3. With respect to the valuation of used agricultural machinery, the Anglo-Saxon school is more developed; in Spain, studies in this area have been scarce.This may be related to the amount of data available in each country; in the USA, information is abundant and easy to access.This justifies the use of econometric methods to determine machinery values.In Spain, however, MOMA purchase and sale price information is published only in a paper format, and only a few internet sites with information in this area exist. 4. All of the models for estimating the remaining value of tractors are of the linear-logarithmic type. 5. Based on the general model for used tractors in Spain, power alone accounts for nearly 50% of a machine's value.Power and age together explain some 73.4%; if the traction type, the existence of air-conditioning and the manufacturer are taken into account, some 90% is explained. 6.A general empirical model for calculating the depreciation of used tractors is proposed.This is a linearlogarithmic model (decreasing type) with a coefficient of -0.048 for the variable age.It is valid for use with tractors between the ages of 4 and 29 years.The change in value over the first four years is uncertain, but in general, a 4 year-old tractor keeps 56.16% of its showroom value, and 16.78% at 29 years.Further, more detailed studies analysing depreciation in the first four years of a tractor's life are required. 7. Using the new power groups obtained from cluster analysis -small tractors (< 80 hp), medium tractors (80-133 hp) and large tractors (> 133 hp)better reflects the influence of horsepower on the change in market value.In both the traditional and cluster analysis-derived groupings, tractors of larger size are those that depreciate in value most quickly. Figure 1 . Figure 1.Total number of tractor title changes in 2003 in Spain, grouped by vehicle age.Source: ANSEMAT and MAPA data. λ = 1, the variable retains its original form; when λ = 0, a logarithmic transformation is performed.Consequently, the proposed model can now be represented by expression [4]: in which the quantitative variables may be transformed into equation [5 Figure 4 . Figure 4. Remaining value by year. Figure 5 . Figure 5. 
Remaining value by year for different tractor sizes with respect to the value in the fourth year.
Figure 6. Remaining value by year for cluster analysis-defined tractor sizes with respect to the value in the fourth year.
Table 1. Previous studies reporting equipment depreciation functions.
Table 2. Variables in the used tractor database. Source: own elaboration.
Table 4. Average percentage of the remaining value compared with the new value (seven manufacturers).
Table 5. Econometric estimate of remaining value equation variables with respect to power groups.
Table 6. Cluster analysis for defining the three horsepower groups.
Table 7. Econometric estimate of remaining value equation variables by cluster analysis-defined horsepower groups.
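The cluster-based age coefficients reported above (-0.041 for small, -0.0569 for medium and -0.0687 for large tractors) imply clearly separated depreciation paths. A short numerical comparison, again assuming that expressions [10]-[12] share the semi-logarithmic form used earlier, is the following.

import numpy as np

coeffs = {"small (<80 hp)": -0.041, "medium (80-133 hp)": -0.0569, "large (>133 hp)": -0.0687}
ages = np.array([4, 10, 20, 29])

for name, b in coeffs.items():
    # Remaining value relative to the value at 4 years.
    print(name, np.round(np.exp(b * (ages - 4)), 2))
# Larger tractors retain the smallest fraction of their 4-year value at every age, as stated in conclusion 7.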
4,656.4
2007-06-01T00:00:00.000
[ "Economics" ]
A Murine Monoclonal Antibody With Potent Neutralization Ability Against Human Adenovirus 7 B1-type human adenoviruses (HAdVs) HAdV-3, HAdV-7, and HAdV-55 have caused epidemics in North America, Asia, and Europe. However, to date, no adenovirus vaccines or antiviral drugs have been approved for general use. In the present work, a scFv-phage immune library was constructed and mouse monoclonal antibody (MMAb) 10G12 was obtained through selection. 10G12 is specific for HAdV-7 and binds the hexon loop1 and loop2 (LP12), resulting in strong neutralization activity against HAdV-7. Additionally, it is stable in serum and buffer at various pH values. The findings provide insight into adenovirus and antibody responses and may facilitate the design and development of adenovirus vaccines and antiviral drugs. INTRODUCTION Human adenoviruses (HAdVs), non-enveloped, icosahedral, double-stranded DNA viruses spanning > 85 genotypes, are classified into seven species (A-G) (Yoshitomi et al., 2017). HAdV infection is characterized by a broad spectrum of disease symptoms in humans, including sore throat, pneumonia, fever, and acute otitis media, with most cases involving gastrointestinal symptoms that vary with infection genotype (Arnold et al., 2010;Kunz and Ottolini, 2010). Symptoms are generally mild and self-limiting in immune-competent adults, but outbreaks of acute respiratory diseases (ARDs), such as community-acquired pneumonia (CAP), can occur in newborns, school students, and military recruits (Tan et al., 2016). B1 type adenoviruses HAdV-3, HAdV-7, and HAdV-55 are responsible for most epidemics in North America, Asia, and Europe (Choi et al., 2005;Zhang et al., 2006;James et al., 2007;Selvaraju et al., 2011;Tang et al., 2011;Gopalkrishna et al., 2016). To date, no vaccines for the general population available for HAdVs, and only vaccines against HAdV types 4 and 7 have been developed for the USA military (Russell et al., 2006;Kajon et al., 2015). Additionally, no antiviral drugs or efficient antiviral therapies have been approved for treating HAdVs (Echavarría, 2008). In the present study, scFv 10G12 was screened from an scFv-phage antibody immune library and subcloned to generate an MMAb. We identified MMAb 10G12 as a potent antibody that effectively targets HAdV-7 in vitro at low concentrations by binding to hexon loop1 and loop2 (LP12). MMAb 10G12 displayed good stability in serum and phosphate buffer (PB) at different pH values. Cell Lines and Viruses HEK293F and A549 cells (ATCC, USA) were cultured in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS) (Excell, China). FreeStyle TM 293-F cells (Invitrogen, USA) were cultured in FreeStyle TM 293 Expression Medium (12338; Gibco, USA). Cells were incubated at 37 • C in a 5% CO 2 atmosphere. The HAdV-7 GZ6965 strain (human/CHN/GZ6965/2001) used herein was obtained as described previously (Qiu et al., 2012) and maintained in our laboratory. The HAdV-55 strain was isolated from a patient and kindly provided by Prof. Hongbin Song (Center for Disease Control and Prevention of Chinese PLA, Beijing, China). HAdV-7 and HAdV-55 were propagated in HEK293-F cells grown in DMEM containing 2% FBS. When 75-95% of cells exhibited typical cytopathic effects (CPEs) consistent with HAdV infection, the cell suspension was frozen at −80 • C and thawed three times, centrifuged at 4,000 g for 5 min, and the supernatant was inactivated and purified using standard CsCl gradient centrifugation (Wu et al., 2002). 
The obtained virus particles were aliquoted and stored at −80 • C. Construction and Selection of scFv-phage Antibody Immune Libraries Preparation and characterization of the scFv-phage display library was subsequently performed. Female BALB/c mice at 6-8 weeks old were immunized with inactivated HAdV-7. Pre-immune sera were collected from mouse tails and used as negative controls. A 100 µg sample of inactivated HAdV-7 emulsified in Freund's complete adjuvant (Sigma, USA) was intraperitoneally injected, followed by four boosters of the same dose at 2-week intervals. Spleens were harvested 3 days after the final booster, and total RNA was isolated from spleen cells and was reverse transcribed into cDNA (K1621, Thermo Scientific, USA). Primers for reverse-transcription were PmCGR (TGCATTTGAACTCCTTGCC) and PmCKR (CCATCAATCTTCCACTTGAC). Full-length variable light (V L ) and variable heavy (V H ) chain genes were amplified by overlay-extended PCR and the scFv fragment was cloned into phage display vector pADSCFV-S. Competent Escherichia coli HST08 Blue cells were transformed with the ligation mixture by electroporation. Transformed cells were titrated on agar plates to determine the library size, and colony PCR was performed on a selection of colonies to determine the presence of DNA inserts in the vector. Harvested cells samples harboring the final scFv antibody gene library were combined, aliquoted, and stored at −80 • C. Purified HAdV-7 (300 ng, 100 µl) in PBS was incubated in a microtiter plate well overnight at 4 • C, then blocked with 3% BSA in TBS (50 mM Tris-HCl pH 7.5, 150 mM NaCl) for 2 h at 37 • C. A 100 µl sample of phage library at 1.9 × 10 7 plaque-forming units (pfu) per ml was added and incubated for an additional 2 h at 37 • C after a washing step. After washing, wells with TBST (TBS containing 0.05% Tween-20), bound phage was eluted with 120 µl 0.1 M glycine-HCl (pH 2.2) and neutralized with 15 µl 1 M Tris-HCl (pH 9.0). After eluting, phage was amplified by infecting E. coli XL1-Blue cells, and four rounds of panning were carried out. Positively selected phages were amplified and resulting scFv was subjected to DNA sequence. MMAb Generation of scFv PCR was performed to amplify the full-length variable light (V L ) and variable heavy (V H ) chain genes of positively selected phages. PCR products were digested with restriction endonucleases Sal I and Age I, then cloned separately into pMABG1 and pMABKa vectors containing a mouse immunoglobulin constant gene. Recombinant antibodies were obtained as IgG1 molecules, regardless of their original isotype. FreeStyle 293-F cells were transfected with equal quantities of plasmids encoding heavy and light chains using a FectoPRO transfection kit (116-001, Polyplus-Transfection, French) for antibody expression. At 4 days after transfection, antibody-containing supernatants were harvested, and antibodies were purified using HiTrap MabSelect Xtra (28-4082-60, GE Healthcare, USA). Expression and Purification of Loop1 and Loop2 (LP12) and Fiber Viral DNA was extracted from A549 cells infected with HAdV-7 or HAdV-55 using DNAVzol (Vigorous, Beijing, China) following the manufacturer's instructions. Genes encoding the hexon LP12 fragment and fiber were amplified by PCR and inserted into the pTIG-TRX vector. Primers used for PCR are listed in Table 1 (7LP12, LP12 of HAdV-7; 55LP12, LP12 of HAdV-55; 7Fiber, Fiber of HAdV-7; 55Fiber, Fiber of HAdV-55). The pTIG-TRX-LP12 plasmid was transformed into E. 
coli BL21 (DE3) cells (TransGen, Beijing, China) for expression of His-tagged fusion protein. Transformed cells were cultured in Luria-Bertani medium containing ampicillin at 37 • C, and recombinant expression was induced with 0.6 mM isopropyl b-D-thiogalactoside (IPTG) when the absorbance at 600 nm (OD600) reached 0.4-0.6. After reducing the temperature from 37 • C to 16 • C, cells were cultured for a further 16 h. Bacteria were lysed by ultrasonic treatment, and recombinant protein was purified by Ni-agarose resin. ELISA Assay Wells of ELISA assay plates (9018, Costar, USA) were coated with 200 ng antigen and incubated overnight at 4 • C. Wells were then blocked with 200 µl of 5% (w/v) skimmed milk-PBS for 2 h at 37 • C, and 200 µl of antibody was added and incubated for 2 h at 37 • C. Plates were washed three times with PBS-Tween (0.1% v/v), and goat anti-mouse horseradish peroxidase (HRP)conjugated IgG antibody (1:5,000, v/v) was added and incubated for 1 h at 37 • C. Finally, three rounds of washing with PBS-Tween were carried out, and detection at 492 and 630 nm was performed using OPD chromogen substrate. Virus Neutralization Test For in vitro adenovirus neutralization experiments, 100 µl of A549 cells (3 × 10 5 cells/ml) were seeded in each well of 96well plates incubated overnight at 37 • C in a 5% CO 2 atmosphere. Purified 10G12 was serially diluted 2-fold from 25 to 0.1 µg/ml in DMEM, and 50 µl aliquots of each dilution were mixed with 50 µl HAdV-7 or HAdV-55 with 100TCID 50 . Anti-DENV1 (Lu et al., 2018) and anti-EGFR (CN102993305B) antibodies served as negative controls. Antibody-virus mixtures were incubated at 37 • C for 1 h, transferred to 96-well plates containing 85-95% confluent A549 cell monolayers, and cultured in DMEM without Phenol Red or serum for 72 h. Infected cells were observed under a microscope and the number of holes in cells with lesions was counted. To test the ability of MMAb to rescue HAdVs infection, 100 µl of A549 cells (3 × 10 5 cells/ml) were seeded in each well of 96-well plates and incubated overnight at 37 • C in a 5% CO 2 atmosphere. Next, 100 µl samples of HAdV-7 or HAdV-55 with 100TCID 50 were added to 96-well plates containing 85-95% confluent A549 cell monolayers and incubated at 37 • C for 1 h. Purified 10G12 was serially diluted 2-fold from 25 to 0.1 µg/ml in DMEM without Phenol Red and serum, and 100 µl aliquots of each dilution were added to 96-well plates and incubated for 1 week at 37 • C. Anti-DENV1 and anti-EGFR antibodies served as negative controls. Infected cells were observed under a microscope and the number of holes in cells with lesions was counted. MMAb Stability Analysis To test the stability of MMAb in serum, purified 10G12 was diluted in fetal bovine serum (FBS, Excell) to 25 µg/ml and incubated for 3, 7, or 10 days at 37 • C. ELISA assays were then performed to detect whether samples still efficiently recognized HAdV-7. To test the MMAb stability in PB at different pH values, purified 10G12 was diluted in PB at pH 6.0, 6.5, 7.0, 7.5, and 8.0, and incubated at 37 • C for 5 or 8 days. ELISA assays were then performed to detect whether samples still efficiently recognized HAdV-7. Western Blotting Assay Purified antigens were quantified using a NanoDrop One c instrument (Thermo, USA). A 7.5 µg sample of reduced protein was separated by SDS-PAGE and subsequently transferred to a polyvinylidene fluoride (PVDF) membrane. 
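For reference, the two-fold serial dilution from 25 to 0.1 µg/ml used in the neutralization and rescue assays above corresponds to eight dilution steps; a minimal sketch of the concentration series (plate layout and replicates are not modelled) is:

start = 25.0   # ug/ml
series = [start / 2 ** i for i in range(9)]
print([round(c, 2) for c in series])
# [25.0, 12.5, 6.25, 3.12, 1.56, 0.78, 0.39, 0.2, 0.1]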
Primary antibody 10G12 (1 mg/mL) was diluted 1:1,000, and secondary antibody HRP-goat anti-mouse immunoglobulin G (IgG) (ZSGB-BIO) was diluted 1:5,000. Signals were detected using Western HPR Substrate Peroxide solution (Millipore). Statistical Analysis All experiments were repeated at least three times, except for the stability assay. Data are presented as means ± standard deviation (SD). Statistical significance was determined using GraphPad Prism 5.0 software. An affinity graph was plotted, and EC 50 values were determined using GraphPad Prism 5.0 software, too. The significance of differences in protective effects compared with controls was evaluated using two-tailed Student's t-tests, and p-values < 0.05 were considered statistically significant. Construction and Selection of scFv-phage Antibody Immune Libraries Female BALB/c mice at 6-8 weeks old were immunized with inactivated HAdV-7, and spleens were harvested for RNA extraction after four booster injections. Genes encoding V L and V H chains were amplified by PCR, and DNA fragments of the expected size (350 bp) were obtained. Overlay-extended PCR was performed to generate scFv DNA fragments of ∼750 bp, which were then cloned into the phage vector pADSCFV-S. The final FIGURE 1 | Identification of mouse monoclonal antibodies (MMAbs) against HAdV-7. (A) Screening of scFv-displaying phage by ELISA. After three round of panning, 11 positive clones were identified that recognized HAdV-7. (B) ELISA analysis of binding between MMAb 10G12 and various antigens. The synthetic polypeptide antigen of foot-and-mouth disease virus (FMDV) and influenza A virus H3N2 (A/swine/Colorado/1/77) served as negative controls. HAdV-55 was tested for potential cross-reactivity. Results are presented as means ± SD from three independent experiments (***p < 0.001 vs. negative controls calculated by t-tests). (C) Affinity curve of the ELISA results for binding between HAdV-7 and serially diluted MMAb 10G12. ELISA assays were performed with 200 ng of inactivated HAdV-7 per well. MMAb 10G12 was serially diluted from an initial concentration of 80 µg/mL. scFv antibody gene library consisted of 1.9 × 10 7 independent clones, with 80% correctness. In total, 11 positive clones were identified from samples after the third round of panning against HAdV-7, and their ability to interact with HAdV-7 is shown in Figure 1A. These phagemids were extracted and each insert was sequenced. The results revealed five unique full-size scFv sequences among the 11 clones (10G12, 6H9, 10D7, 8G10, 6B10, 10G4, and 1C9 shared the same sequence; 5H4 had the wrong sequence). V H and V L of 10G12, 10A4, 1B1, and 5D12 were recloned into pMABG1 or pMABKa to generate murine IgG1 molecules. Although these four antibodies were specific to HAdV-7, 10A4, 1B1, and 5D12 did not exhibit neutralizing activity (data not shown). Thus, subsequent experiments only characterized 10G12. MMAb 10G12 Is Specific for HAdV-7 To examine whether MMAb 10G12 is specific for HAdV-7, an ELISA assay was performed. Since positive clones bound to various unrelated viruses in the previous selection of the scFv-phage antibody library, two unrelated antigens (influenza virus H3N2 and the synthetic polypeptide antigen of FMDV) were included as negative controls. Additionally, since there are more than 85 HAdV genotypes, HAdV-55 was tested for potential cross-reactivity with MMAb 10G12. 
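The comparisons against negative controls described in the Statistical Analysis subsection use two-tailed Student's t-tests with p < 0.05 as the significance threshold. A minimal sketch of an equivalent test, with hypothetical absorbance readings standing in for the ELISA data (the study itself used GraphPad Prism 5.0), is:

import numpy as np
from scipy import stats

od_10g12   = np.array([1.92, 2.05, 1.98])   # hypothetical A492 readings for 10G12 vs. HAdV-7
od_control = np.array([0.11, 0.09, 0.13])   # hypothetical negative-control readings

t, p = stats.ttest_ind(od_10g12, od_control)           # two-tailed by default
print(round(float(t), 2), bool(p < 0.05))               # p < 0.05 taken as significant, as in the text
print(round(float(od_10g12.mean()), 2), round(float(od_10g12.std(ddof=1)), 2))  # mean and SD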
As shown in Figure 1B, MMAb 10G12 bound to inactivated HAdV-7, but not to inactivated influenza virus H3N2, the synthetic polypeptide antigen of FMDV, or HAdV-55. Furthermore, ELISA assay was performed to test the affinity of MMAb 10G12 for HAdV-7. Based on the absorbance at 492 nm, and affinity graph was plotted using GraphPad Prism 5.0 software ( Figure 1C). The resulting EC 50 values indicated that the affinity between MMAb and HAdV-7 was 0.14 nM. In vitro Neutralizing Activity and Therapeutic Effects of MMAb 10G12 To examine the neutralization potential of MMAb 10G12, in vitro adenovirus neutralization experiments were performed using A549 cells. Purified 10G12 was serially diluted 2-fold from 25 to 0.1 µg/ml in DMEM, and 50 µl aliquots of each dilution were mixed with 50 µl HAdV-7 or HAdV-55 with 100TCID 50 . The antibody-virus mixtures were transferred to A549 cells, and every dilution included eight replicates. After 72 h, infected cells were observed under the microscope, and wells containing surviving cells were counted. Two antibodies (anti-DENV and anti-EGFR) served as negative controls. As shown in Table 2, 50 µl aliquots of 10G12 with 0.4 µg/ml could neutralize 100% of 50 µl HAdV-7 with 100TCID 50 , and all cells at this dilution had no lesions. Even 50 µl aliquots of 10G12 with 0.2 µg/ml could neutralize 50% of 50 µl HAdV-7 with 100TCID 50 , but cells at this dilution had partial lesions. Furthermore, even 10G12 at 25 µg/ml was unable to neutralize HAdV-55 with 100TCID 50. These findings indicate that MMAb 10G12 exhibited strong neutralization activity against HAdV-7 and poor cross-reactivity with HAdV-55. Next, the ability of MMAb 10G12 to rescue HAdVs infection was investigated to explore potential therapeutic effects. HAdV-7 or HAdV-55 with 100TCID 50 were added to 96-well plates containing 85-95% confluent monolayers of A549 cells and incubated at 37 • C for 1 h. Purified 10G12 was then serially diluted 2-fold from 25 to 0.1 µg/ml, added to infected A549 cells and incubated for 1 week at 37 • C. Anti-DENV1 and anti-EGFR antibodies served as negative controls. Infected cells were observed under a microscope and the number of holes in cells with lesions was counted. As shown in Table 3, even at 1 h after infection, 3.2 µg/ml 10G12 could rescue 100% of cells infected with 100TCID 50 HAdV-7, and none of the cells at this dilution displayed lesions. Even 0.8 µg/ml 10G12 could protect 25% of cells infected with 100TCID 50 HAdV-7. Furthermore, even 25 µg/ml 10G12 was unable to rescue A549 cells infected with 100TCID 50 HAdV-55. These findings indicate that MMAb 10G12 exhibited potent therapeutic effects against HAdV-7. MMAb 10G12 Is Stable in Serum and PB at Different pH Values To test MMAb stability in serum, purified 10G12 was diluted in FBS to 25 µg/ml and incubated in 37 • C for 3, 7, and 10 days. ELISA assays were then performed to detect whether samples still efficiently recognized HAdV-7. As shown in Figure 2A, compared with day 0, the binding activity of samples diluted in FBS after 3, 7, and 10 days decreased by a statistically significant amount. However, even after 10 days of incubation at 37 • C, the binding activity was only decreased 20%. Additionally, from day 3 to 10, the binding activity did not decrease with increasing incubation duration. Differences between fetal bovine serum and the mouse monoclonal antibody may explain this decrease in binding activity. The results indicate that MMAb 10G12 was relatively stable in serum. 
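The EC50 quoted above was obtained by fitting the Figure 1C affinity curve in GraphPad Prism 5.0. An equivalent open-source sketch, fitting a four-parameter logistic curve with scipy, is given below; the concentrations and absorbance readings are illustrative and are not the study's data.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # Standard four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.array([80.0, 20.0, 5.0, 1.25, 0.31, 0.078, 0.02])     # ug/ml (hypothetical dilutions)
a492 = np.array([2.10, 2.05, 1.85, 1.20, 0.45, 0.18, 0.10])     # hypothetical absorbance readings

popt, _ = curve_fit(four_pl, conc, a492, p0=[0.1, 2.1, 1.0, 1.0], maxfev=10000)
print("EC50 =", round(float(popt[2]), 2), "ug/ml")
# In the study, the fitted EC50 converted to molar units corresponded to an affinity of 0.14 nM.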
To test the MMAb stability in PB at different pH values, purified 10G12 was diluted in PB at pH 6.0, 6.5, 7.0, 7.5, and 8.0, and incubated at 37 • C for 5 or 8 days. ELISA assays were then performed to detect whether samples still efficiently recognized HAdV-7. As shown in Figure 2B, samples diluted in PB at pH 6.0, 6.5, 7.0, 7.5, and 8.0 after 8 days of incubation still had the same binding activity as those before incubation. These results indicate that MMAb 10G12 is stable in serum and PB at different pH values. MMAb 10G12 Binds Hexon Loop1 and Loop2 Hexon is an important antigen of neutralizing antibodies, and fiber also has a few neutralization epitopes. To determine to which epitope MMAb 10G12 binds, fragments comprising loop1 and loop2 (LP12) of hexon and fiber from HAdV-7 and HAdV-55 were amplified by PCR and subcloned into pTIG-TRX. Recombinant proteins were expressed and purified (Figure 3A), and the results of western blotting showed that MMAb 10G12 bound LP12 of HAdV-7 but not HAdV-55 or fiber ( Figure 3B). When using reduced protein samples for SDS-PAGE, 10G12 should bind the linear epitopes of 7LP12. Thus, ELISA assay plates were coated with LP12 or fiber from HAdV-7 and HAdV-55 at 200 ng per well, and the affinity for MMAb 10G12 was measured. As shown in Figure 3C, MMAb 10G12 bound LP12 of HAdV-7 but not fiber, consistent with the results of western blotting in Figure 3B. However, the ELISA result showed that 10G12 bound LP12 of HAdV-55 with weaker affinity than HAdV-7. Combined with the western blotting results in Figure 3B, this indicates that 10G12 might bind some spatial epitopes of 55LP12. However, 10G12 did not neutralize HAdV-55 ( Table 2), suggesting that these might be non-neutralizing spatial epitopes. In summary, MMAb 10G12 clearly bound the hexon LP12 region. DISCUSSION In this study, we identified antibody MMAb 10G12 that binds specifically to HAdV-7 through the hexon LP12 region. MMAb 10G12 exhibited strong neutralization activity against HAdV-7 and was stable in serum and PB at different pH values. Despite more than 85 genotypes have been identified for HAdVs, few neutralizing antibodies have been reported. Tian et al. (2018a) reported that the recombinant trimeric HAdV-11 fiber knob region is responsible for cross-neutralizing antibody responses against HAdV-11, -7, -14p1, and -55 in mice. Three neutralizing MAbs, 6A7, 3F11, and 3D8, were FIGURE 2 | Stability of 10G12 in serum and PB at different pH values. (A) ELISA analysis of the binding between HAdV-7 and MMAb 10G12 in serum. Purified 10G12 was diluted in fetal bovine serum to 25 µg/ml and incubated at 37 • C for 3, 7, and 10 days. ELISA assays were performed with 200 ng of inactivated HAdV-7 per well. (B) ELISA analysis of binding between HAdV-7 and MMAb 10G12 in PB at different pH values. Purified 10G12 was diluted in PB at pH 6.0, 6.5, 7.0, 7.5, and 8.0 and incubated at 37 • C for day 5 and 8. ELISA assays were performed with 200 ng inactivated HAdV-7 per well. Data were obtained from two separate experiments, and results are presented as means ± SD (**p < 0.01 vs. day 0 values calculated by t-test). FIGURE 3 | Binding of MMAb 10G12 to loops 1 and 2 (LP12) of hexon. (A) Reducing polyacrylamide gel electrophoresis analysis of purified antigens. A 5 µg sample of reduced Fiber and LP12 of HAdV-7 and HAdV-55 was separated by SDS-PAGE and subsequently stained with Coomassie Brilliant Blue. (B) Western blotting analysis of binding between 10G12 and various antigens. 
A 7.5 µg sample of reduced Fiber and LP12 of HAdV-7 and HAdV-55 was separated by SDS-PAGE and subsequently transferred to a PVDF membrane. Primary antibody 10G12 (1 mg/mL) was diluted 1:1,000, and secondary antibody HRP-goat anti-mouse IgG was diluted 1:5,000. Signals were detected using Western HPR Substrate Peroxide solution. (C) ELISA analysis of binding between 10G12 and various antigens. Data were obtained from three separate experiments, and results are presented as means ± SD (**p < 0.05 between 7LP12 and 55LP12, calculated by t-tests). MMAb 10G12 is a murine neutralizing antibody like 6A7, 3F11, and 3D8 (Tian et al., 2018a). Monoclonal antibodies of mouse origin can induce human anti-mouse antibody (HAMA) responses, which restricts the use of MMAb in humans (Hertel et al., 1990). The first MMAb, Muromonab, has been on the market since 1992, and humanized antibodies based on murine MAb are increasingly being developed (Makulska-Nowak, 1993). In future work, MMAb 10G12 may be further humanized for therapeutic use. In summary, MMAb 10G12 was specific for HAdV-7 and displayed good stability. In future work, we will explore whether MMAb 10G12 can provide protection against HAdV-7 in vivo, and if so, MMAb may be further humanized for use as a therapeutic agent. DATA AVAILABILITY STATEMENT The data used to support the findings of this study are available from the corresponding author upon request. ETHICS STATEMENT The animal study was reviewed and approved by Academy of Military Medical Sciences (AMMS; ID: SYXK2012-05). AUTHOR CONTRIBUTIONS JL and ZY conceived this study. YH, YY, and ZY carried out experiments. RW, YH, LC, and QZ performed data analysis. RW and ZY drafted, wrote, edited, and reviewed the manuscript. ZY acquired funding. JL and QZ provided resources. RW, YH, YY and ZY supervised the work.
5,112.8
2019-12-04T00:00:00.000
[ "Biology" ]
Heterotic/type II duality and non-geometric compactifications We present a new class of dualities relating non-geometric Calabi-Yau com- pactifications of type II string theory to T-fold compactifications of the heterotic string, both preserving four-dimensional N\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathcal{N} $$\end{document} = 2 supersymmetry. The non-geometric Calabi-Yau space is a K 3 fibration over T2 with non-geometric monodromies in the duality group O(Γ4,20); this is dual to a heterotic reduction on a T4 fibration over T2 with the O(Γ4,20) monodromies now viewed as heterotic T-dualities. At a point in moduli space which is a minimum of the scalar potential, the type II compactification becomes an asymmetric Gepner model and the monodromies become automorphisms involving mirror symmetries, while the heterotic dual is an asymmetric toroidal orbifold. We generalise previous constructions to ones in which the automorphisms are not of prime order. The type II construction is perturbatively consistent, but the naive heterotic dual is not modular invariant. Modular invariance on the heterotic side is achieved by including twists in the circles dual to the winding numbers round the T2, and this in turn introduces non-perturbative phases depending on NS5-brane charge in the type II construction. JHEP10(2019)214 X appears as a discrete torsion. In [7], such automorphisms were referred to as mirrorred automorphisms; it is striking that they involve transformations of both the K3 surface and its mirror. For the twisted reduction, γ 1 , γ 2 must commute and, if there is to be a Minkowski vacuum, the intersection of their fixed loci must be non-empty. The orbifold is by transformations (γ 1 , t 1 ), (γ 2 , t 2 ) where the automorphism γ i of degree p i is combined with a shift t i on the i'th circle of the T 2 by 2π/p i (i = 1, 2). Then at a fixed point the twisted reduction reduces to a freely-acting asymmetric orbifold of the K3 × T 2 compactification, resulting in the simplest cases in the asymmetric Gepner models of [2]. In this work we will focus on the heterotic duals of these constructions, using the duality between type IIA string theory compactified on K3 and heterotic string theory compactified on T 4 [11]. Starting from the N = 4 duality in four dimensions between the type IIA string on K3 × T 2 and the heterotic string on T 6 , one can reach N = 2 dual pairs through (freely-acting) orbifolds preserving half of the supersymmetry. An important example, the construction of Ferrara, Harvey, Strominger and Vafa (FHSV) [12], relates the type IIA string compactified on the Enriques Calabi-Yau three-fold to an asymmetric toroidal orbifold of the heterotic string. More generally, it is expected that type IIA compactified on a K3-fibred CY 3 with a compatible elliptic fibration is non-perturbatively dual to a heterotic string compactification on K3 × T 2 ; see [13] for a review. Our models extend this to non-geometric dual constructions. In the six-dimensional heterotic/type IIA duality, the O(Γ 4,20 ) duality symmetry group of the type IIA string compactified on K3 is identified as the O(Γ 4,20 ) T-duality symmetry group of the heterotic string, for which Γ 4,20 is the Narain lattice. 
Then the duality-twisted reduction on T 2 with monodromies γ 1 , γ 2 ∈ O(Γ 4,20 ) has a heterotic realisation as a Tfold [8,14,15] with T-duality monodromies -it is a "compactification" of the heterotic string on a non-geometric space that has a fibration of T 4 CFTs over a T 2 base, with Tduality transition functions. Then the heterotic/type IIA duality maps the non-geometric Calabi-Yau mirror-fold reduction of type IIA to a T-fold reduction of the heterotic string. At a fixed point in moduli space (a point preserved by both γ 1 , γ 2 ), the heterotic T-fold reduces to an asymmetric orbifold of the heterotic string on T 6 by the transformations (γ 1 , t 1 ), (γ 2 , t 2 ) consisting of O(Γ 4,20 ) T-duality transformations on T 4 combined with shifts on T 2 . The K3 CFT at the fixed point in moduli space gives no enhanced gauge symmetry, so the corresponding T 4 heterotic compactification also has no enhanced gauge symmetry -instead it has enhanced discrete symmetry as in [16]. Rather similar constructions in spirit, but with a T 2 fiber rather than a T 4 fiber, were investigated in [17] In a recent article [16], Harvey and Moore made the following point: "It is not, a priori, obvious that heterotic/type II duality should apply to asymmetric orbifolds of the heterotic string". Indeed, while the FHSV model provides an example of such a dual to an asymmetric heterotic orbifold, no general statement appears to have been made so far. It is usually assumed that the type IIA side of such a duality should involve a Calabi-Yau three-fold. Here we show that in many cases an asymmetric orbifold of the heterotic string has a type IIA dual that is a non-geometric compactification, an orbifold of the type IIA string on K3 × T 2 by non-geometric symmetries. JHEP10(2019)214 Remarkably, while the IIA construction is a consistent construction for the perturbative type IIA string, the naive heterotic dual is not perturbatively consistent -it is not modular invariant. The perturbative heterotic construction can be modified to obtain modular invariance, but via duality this then introduces non-perturbative modifications to the type IIA construction. This complies with the adiabatic argument put forward in [18] to relate non-perturbative dualities with different amounts of supersymmetry. Such non-perturbative modifications were also seen in the FHSV model. As we shall see, for the asymmetric orbifolds discussed here, modular invariance of the heterotic models is only obtained if the shifts on the two-torus are combined with shifts on the T-dual torus, corresponding to introducing phases dependent on the string winding numbers on the twotorus. Under heterotic/type II duality, the fundamental heterotic string is mapped to a IIA NS5-brane wrapping K3, so that new heterotic phases are mapped to phases dependent on NS5 wrapping numbers in the IIA string. These NS5-brane contributions give non-perturbative modifications to the non-geometric Calabi-Yau construction. The corresponding non-perturbative corrections to the prepotential governing the vector moduli space geometry in the low-energy type IIA effective action will be analyzed in a forthcoming publication [19]. In this paper we will mostly focus on the case when the second twist γ 2 and shift t 2 are trivial. Having one non-trivial twist γ 1 is sufficient to break supersymmetry to N = 2 and give an interesting class of models. 
This can be thought of as a duality-twisted reduction on a circle to five dimensions with monodromy γ 1 followed by a conventional (untwisted) compactification on a further circle. This is sufficient for most of our purposes; the generalisation to two twists is straightforward and will be discussed briefly. Once the heterotic dual has been found, non-perturbative aspects of the theory can be probed. We will study the perturbative heterotic BPS states that are dual to type IIA bound states of NS5-branes (wrapping a one-cycle of the two-torus and the K3 fibre) and momentum states on the T 2 by computing the generating function for the helicity supertraces. The map between BPS states is, in a way, easier to understand than in standard cases of N = 2 heterotic/type II dualities as there are no D-branes bound-states to take into account in the present context. The plan of this paper is as follows. In section 2 we summarize the general construction of non-geometric Calabi-Yau backgrounds and the results from [7] that are needed for this paper. In section 3 we briefly review the mirrored automorphisms introduced in [7] and the way they lead to the construction of isometries of the Γ 4,20 lattice that satisfy the conditions needed for a non-geometric Calabi-Yau background; these details are not needed for the rest of the paper and can be skipped by the impatient reader. In section 4 we find the heterotic dual of the non-geometric Calabi-Yau type IIA models. Section 5 discusses some of the BPS states that arise and calculates the corresponding indices. In section 6 we present the consequences of perturbative heterotic consistency in the type IIA duality frame. Section 7 is devoted to a duality-covariant analysis of our models in four and five dimensions and of the FHSV model, allowing us to construct further dual forms of these models. Finally conclusions are presented in section 8. JHEP10(2019)214 2 Non-geometric Calabi-Yau backgrounds In this section we summarize the construction of non-geometric Calabi-Yau backgrounds from [7]. Type IIA string theory on K3 has a world-sheet formulation as a supersymmetric non-linear sigma model with K3 target space, with a B-field given by a closed 2-form; this defines a superconformal field theory. The moduli space of such non-linear sigma-models on K3 surfaces is given by [20,21] M σ ∼ = O(Γ 4,20 )\O (4,20)/O(4) × O (20) . (2.1) The isometry group of the branes charge lattice Γ 4,20 (which is also the lattice of total cohomology of the K3 surface and is given by eq. 2) The first term on the right hand side is the moduli space of Ricci-flat Kähler metrics on K3, the second is the cohomology group H 2 (K3; R) giving moduli space of flat B-fields and the last term is the volume modulus of the K3 surface. The duality group contains a geometric subgroup O(Γ 3,19 ) Z 22 generated by large diffeomorphisms of the surface in O(Γ 3,19 ) and integral shifts of the B-field, i.e. shifts of B by a 2-form representing an integral cohomology class. The remaining dualities are non-geometric in character. At certain special points in the moduli space corresponding to algebraic K3 surfaces, mirror symmetry provides an extra generator of order two which, together with the geometric subgroup generates the full duality group O(Γ 4,20 ) [21,22]. There is a continuous action of the group O(4, 20) on the moduli space and hence on the type IIA string compactified on K3. 
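The lattice Γ 4,20 appearing in (2.1) is the even self-dual lattice of signature (4,20); in its standard decomposition (not spelled out in the text above) it is the orthogonal sum of two copies of the E 8 lattice with reversed sign and four hyperbolic planes U. A quick numerical check of the defining properties from the Gram matrix, included only as an illustration, is:

import numpy as np
from scipy.linalg import block_diag

# Cartan matrix of E8 built from its Dynkin diagram: a chain of seven nodes with an
# extra node attached to the third node of the chain.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (2, 7)]
E8 = 2 * np.eye(8)
for i, j in edges:
    E8[i, j] = E8[j, i] = -1

U = np.array([[0, 1], [1, 0]])                        # hyperbolic plane, signature (1,1)
gram = block_diag(-E8, -E8, U, U, U, U)               # Gram matrix of Gamma_{4,20}

eigs = np.linalg.eigvalsh(gram)
print(int((eigs > 0).sum()), int((eigs < 0).sum()))   # signature (4, 20)
print(round(abs(float(np.linalg.det(gram)))))         # unimodular: |det| = 1
print(bool(np.all(np.diag(gram) % 2 == 0)))           # even lattice: all diagonal norms are even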
The type IIA string compactified on K3 can be further compactified on T 2 with duality twists through an ansatz in which the dependence of all fields on the toroidal coordinates y 1 , y 2 is given by a y i -dependent O(4, 20) transformation: for two commuting Lie algebra generators N 1 , N 2 . Then the monodromies are This compactification has a low energy effective action given by a gauged N = 4 supergravity theory in four dimensions [7,23] For there to be a global minimum of the potential, the intersection of the two fixed loci should be non-empty, and we will take U 1 = U 2 ≡ U inside the intersection. Then there will be a minimum of the potential be at (U ) and each monodromy γ i is an O(Γ 4,20 ) transformation conjugate to a rotation M i ∈ O(4)×O (20). Conjugating both monodromies by the same element V of O(Γ 4,20 ) then takes with a point of the fixed locus now at (V U ). In this way, one can always arrange for an element of the fixed locus to be in any given fundamental domain of the Teichmuller space. (2.8) The group O(4, 20) acts on this by . We will write this as where O(4) 0 is conjugate to the standard O(4) and O(20) 0 is conjugate to the usual O (20): JHEP10(2019)214 As a result, any automorphism at (g 0 ) must be in the O(4) × O (20) subgroup H 0 , and so the monodromies γ 1 , γ 2 must be in and we see that (2.5) is satisfied with The models that we consider should furthermore preserve eight supercharges in four dimensions. Taking the O(4) part of the rotation M to be in the condition for the reduction to preserve 8 of the 16 supersymmetries and so to give Then the twisted reduction giving an N = 2 supersymmetric Minkowski vacuum in four dimensions consists of a duality twist with monodromy γ 1 of order p 1 on the y 1 circle and a twist of γ 2 of order p 2 on the y 2 circle with At some fixed point in moduli space, the reduction becomes an orbifold by transformations (γ 1 , t 1 ), (γ 2 , t 2 ) where t i is a shift on the i'th circle of order p i t i : y i → y i + 2π/p i (2.19) and the twisted reduction reduces to a freely-acting asymmetric Z p 1 × Z p 2 orbifold of the K3 × T 2 compactification. An interesting class of models is that in which one of the monodromies is trivial, γ 2 = 1. Then we have a twisted reduction on one circle with monodromy γ 1 and a standard (untwisted) reduction on the other circle. This is sufficient to break the supersymmetry to N = 2 and gives a simple class of models that captures many of the features we want to study. We will focus on the implications of a single twist here; the second twist would be treated similarly and doesn't qualitatively change the physics, but leads to a more general mass spectrum, as discussed in [7]. For a single twist γ conjugate to a rotation M ∈ SO(4) × SO(20) by (2.5), the SO(4) rotation is characterised by two angles s 1 , s 2 and the SO(20) rotation is characterised by ten angles r 1 , . . . r 10 . For supersymmetry, the SO(4) angles must satisfy s 1 = ±s 2 [7]. For any admissible twist, γ satisfying our conditions, V γ i V −1 will also be an admissible twist for all V ∈ O(Γ 4,20 ). Changing from γ to V γ i V −1 will move a fixed point in Teichmuller space from (U ) to (V U ). As the volume of the K3 is one of the moduli, this change of representative can change the volume of the K3; all such theories related in this way are physically equivalent as they are related by dualities. 
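The conditions on the twist can be illustrated numerically: a twist of order p conjugate to a rotation in SO(4) x SO(20) acts on the twelve two-planes of the 24-dimensional space with angles that are multiples of 2π/p, with the two SO(4) angles satisfying s1 = ±s2 for N = 2 supersymmetry, and (for the mirrored automorphisms used later) with none of the angles vanishing. The sketch below uses a hypothetical assignment in which every angle equals 2π/3; it checks only these rotation conditions and is not itself an integer-valued lattice isometry.

import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

p = 3
angles = [2 * np.pi / p] * 12          # hypothetical: all planes rotated by a primitive cube root of unity
M = np.zeros((24, 24))
for k, a in enumerate(angles):
    M[2 * k:2 * k + 2, 2 * k:2 * k + 2] = rot(a)

print(bool(np.allclose(np.linalg.matrix_power(M, p), np.eye(24))))   # the twist has order p
print(bool(np.min(np.abs(np.linalg.eigvals(M) - 1.0)) > 1e-9))       # no eigenvalue 1: no invariant directions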
JHEP10(2019)214 It is a rather non-trivial problem to find 24 × 24 integer-valued matrices representing elements of O(Γ 4,20 ) that are conjugate to SU(2) × O (20) rotations. In [7], an explicit construction was given. The starting point was finding a special algebraic K3 surface with a geometric automorphism σ of order p, and then constructing from this an automorphismσ of the K3 conformal field theory whose actionσ * on the lattice Γ 4,20 satisfied all the conditions above, and so taking γ =σ * gives the construction of our non-geometric Calabi-Yau space. These are the mirrored automorphisms and their construction is reviewed in the next section. Then the same γ will then be used in the dual heterotic construction in section 4, with the O(Γ 4,20 ) transformation realised as an element of the heterotic T-duality group. Mirrored automorphisms In this section we review the construction of mirrored automorphisms of K3 from [7], in which the K3 is chosen to be an algebraic K3 surface, i.e. a hypersurface in a weighted projective space. We start by recalling the description of the non-geometric duality symmetries as lattice isometries. The integral second cohomology of a K3 surface is isomorphic to an even self-dual lattice of signature (3,19). Up to isometries it is unique and given by where E 8 is the negative-definite lattice associated with the E 8 Dynkin diagram and U the unique even self-dual lattice of signature (1, 1). In the string theory context it is natural to consider the lattice of total cohomology, of signature (4,20), using the natural pairing between four-forms and zero-forms: This is also isomorphic to the D-branes charge lattice of the type IIA string compactified on K3. The isometry group of this lattice is O(Γ 4,20 ). The moduli space of non-linear sigma-models on K3 surfaces is given by (2.1) and the action of a duality on the moduli space corresponds to an element of the isometry group of the lattice, O(Γ 4,20 ). The fixed points of those transformations are associated with orbifold CFTs. Non-symplectic K3 automorphisms An order p non-symplectic automorphism σ p of a K3 surface X is an automorphism that acts on the holomorphic two-form ω(X) as where ζ p is a primitive p-th root of unity. 1 As such, in the string theory context, an orbifold by a non-symplectic K3 automorphism breaks all space-time supersymmetry. An example of a K3 surface admitting an order 3 non-symplectic automorphism that we will use to illustrate the general idea is provided by the hypersurface w 2 + x 3 + y 8 + z 24 = 0 ⊂ P [12,8,3,1] . (3.4) The order 3 automorphism is then simply defined by σ 3 : x → e 2iπ/3 x. The invariant sublattice of Γ 4,20 w.r.t. the action of σ 3 and its orthogonal complement are given in this case by where E 6 and A 2 are the negative-definite lattices associated with the corresponding Dynkin diagrams. The action of σ 3 on the vector space T (σ 3 ) ⊗ R corresponds to an element of the orthogonal group O(T (σ 3 )) that can be found explicitly in [7]. Mirrored K3 automorphisms For each non-symplectic K3 automorphism of the type described above, there is a corresponding mirrored automorphism that we will describe below. Supersymmetric nongeometric orbifolds of type IIA on K3 can be obtained by orbifolding by such mirrored automorphisms, even though the orbifold by the corresponding non-symplectic K3 automorphism breaks all supersymmetry. For simplicity we will restrict the discussion to the hypersurface (3.4); the general case is described in [7]. 
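As a quick check on (3.4), the polynomial w^2 + x^3 + y^8 + z^24 is quasi-homogeneous of degree 24 with respect to the weights [12, 8, 3, 1], and the degree equals the sum of the weights, the usual condition for a hypersurface in weighted projective space to define a K3 surface; the order 3 map x -> e^{2iπ/3} x manifestly preserves the polynomial since x only appears as x^3.

weights = [12, 8, 3, 1]            # ambient space P[12,8,3,1]
exponents = [2, 3, 8, 24]          # w^2 + x^3 + y^8 + z^24
degrees = [w * e for w, e in zip(weights, exponents)]
print(degrees)                     # [24, 24, 24, 24]: quasi-homogeneous of degree 24
print(sum(weights) == degrees[0])  # degree equals the sum of the weights (K3 condition)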
The mirror of the K3 surface (3.4), using the Greene-Plesser map [24], is given by an orbifold of a similar hypersurfacẽ w 2 +x 3 +ỹ 8 +z 24 = 0 ⊂ P [12,8,3,1] (3.6) by a discrete symmetry group G ∼ = Z 2 generated by This mirror surface also admits an order three automorphismσ 3 , which acts in a similar same way to σ 3 , withσ 3 :x → e 2iπ/3x . However, the invariant sublattice forσ 3 and its orthogonal complement in Γ 4,20 are now Comparing with the corresponding sublattices (3.5) for the original surface (3.4), we see that the two sublattices have been interchanged. JHEP10(2019)214 A crucial statement, proved in [7], is that the automorphism σ 3 of the surface (3.4) and the corresponding automorphismσ 3 of the mirror K3 surface act on orthogonal vector sub-spaces of Γ 4,20 ⊗ R: σ 3 acts non-trivially on T (σ 3 ) ⊗ R andσ 3 acts non-trivially on T (σ 3 ) ⊗ R. The diagonal action of the corresponding elements of O(T (σ 3 )) and O(T (σ 3 )) then lifts to an isometry of the full lattice Γ 4,20 , and so provides an element of O(Γ 4,20 ) of order three. In general, the action of a non-symplectic automorphism σ p of order p gives an isometry in O(T (σ p )) and the action ofσ p on the mirror surface gives an isometry in O(T (σ p )); their diagonal action is then lifted to an isometry of Γ 4,20 of order p. One associates to this isometry the action of a mirrored automorphismσ p , which can be thought of aŝ where µ is the mirror map from the original K3 to its mirror. In [7] a proof of this statement was given for all non-symplectic automorphisms of prime order p ∈ {2, 3, 5, 7, 13} using theorems proven in [25,26] but, crucially, recent mathematical results extend this picture to all allowed orders, including those that are not prime numbers [9,10]. As was discussed in [3,7], mirrored automorphisms preserve all space-time supercharges coming from the left-movers on the worldsheet, while projecting out all that come from the right-movers; this would not be possible with geometric automorphisms. Non-geometric K3 × T 2 orbifolds The fixed points of mirrored automorphisms, i.e. the K3 CFTs that are invariant under both the automorphism σ p and the automorphismσ p of a mirror pair, can be orbifolded by the automorphism. Of particular interest are certain models for which there is a duality frame in which the K3 surface has small volume in string units. These give Landau-Ginzburg (LG) orbifolds [5] which are special points in the moduli space of non-linear models on algebraic K3 surfaces at small volume; when the polynomial defining the surface is of the Fermat type, as in (3.4), they can be described as Gepner models [27] and are explicitly solvable CFTs. In this framework, the cyclic group generated by the automorphism σ p of the mirror surface is an order p subgroup of the 'quantum symmetry' of LG orbifolds and the diagonal action of (σ p ,σ p ) corresponds to an order p orbifold with a specific discrete torsion, see [3,7] for details. In this work, as in [2,7], we focus on freely acting orbifolds of the type IIA superstring compactified on K3 × T 2 . We supplement the action of the mirrored automorphismσ p on the K3 CFT with an order p translation along a one-cycle of the two-torus. The orbifold CFT gives in space-time a four-dimensional theory with N = 2 supersymmetry. Unlike compactifications on CY 3-folds, all space-time supersymmetry comes from the left-movers, signaling the non-geometric nature of the compactification. 
An important consequence is that, from the point of view of the low-energy 4d theory, the dilaton lies in a vector multiplet and not in a hypermultiplet. Furthermore, a large part, if not all, JHEP10(2019)214 of the K3 moduli are lifted, see [2] for a detailed analysis of the massless spectra and for one-loop partition functions of the models. The moduli spaces of these theories will be analysed in [19]. To summarize, mirrored automorphisms are non-geometric symmetries of K3 CFTs in the Landau-Ginzburg regime, and are associated with isometries of the total cohomology lattice Γ 4,20 that have no invariant sublattices. Freely-acting orbifolds constructed from the action of a mirrored automorphism on a K3 Gepner model together with a translation along the two-torus give rise to interesting N = 2 non-geometric compactifications of type IIA superstrings, providing explicit examples of the general construction outlined in section 2. The heterotic duals of these models will be found in the next section. Heterotic duals of non-geometric type II compactifications The remarkable string theory duality between the type IIA superstring theory compactified on a K3 surface and heterotic string theory compactified on a four-torus [11,28] is nonperturbative, in the sense that it maps the strongly-coupled regime of the heterotic string to the weakly-coupled limit of the type IIA string and vice versa (for a review, see [13]). From this fundamental duality one can infer numerous other connections between string theories with lower dimensionality. The duality-twisted reduction on a further T 2 of the IIA string on K3 reviewed in sections 2 and 3 should then be dual to a duality-twisted reduction on a further T 2 of the heterotic string on T 4 . At a fixed point, the orbifold of the IIA string on K3 × T 2 should then be dual to an orbifold of the heterotic string on T 6 . On the IIA side, the orbifold is by the symmetry (γ, t) where γ is a mirrored automorphism of K3 and t is a shift on T 2 . On the heterotic side, O(Γ 4,20 ) is the heterotic T-duality group, suggesting that the heterotic dual could be the asymmetric orbifold of the heterotic string on T 6 by (γ, t), where γ is a heterotic T-duality and t is the same shift on T 2 as before. However, duality and orbifolding do not necessarily commute in general, so this conjectured dualisation needs further examination. The general idea behind heterotic/type II duality in four dimensions is to apply fibrewise the duality between a K3 fibration over some base B on the type IIA side and a T 4 fibration over B on the heterotic side [29]. Our construction has a base B = T 2 and does not constrain the size of the T 2 part of the (type IIA) internal space, so that one could go to the decompactification limit of the T 2 base; moreover, the action of the automorphism (γ, t) is free, so that the quotient does not develop singularities. Under these two conditions the adiabatic argument of [18] holds and, at least in the limit of a large T 2 base which allows to perform the dualtiy 'locally' on the fibre, the heterotic dual should be the asymmetric orbifold of the heterotic string on T 6 by (γ, t). We shall show that this correspondence must be refined for small T 2 , with heterotic string winding mode contributions modifying the orbifold (this type of contribution to heterotic/type II dual pairs was anticipated already in [18]). 
Specifically, the automorphism (γ, t) must be supplemented by an order p shift in the T-dual circle conjugate to winding number, so that the full orbifold is by (γ, t) where now t is a shift on both the original T 2 and the T-dual T 2 . This modification of the heterotic orbifold in turn implies a non-perturbative modification of the type IIA orbifold. Our construction has some similarities with the model of Ferrara, Harvey, Strominger and Vafa (FHSV) [12] which relates type IIA compactified on the Enriques Calabi-Yau 3-fold, which is a freely-acting orbifold of K3 × T 2 , to heterotic strings compactified on a freely-acting, asymmetric orbifold of T 6 . In the FHSV construction, the automorphism acts freely on the K3 surface and has fixed points on the base. In our case it is the opposite: the automorphism acts freely on the two-torus and has fixed points on the K3 surface. The comparison between these two classes of models will be further developed in sections 6 and 7.

Type IIA - heterotic duality in six dimensions

The type IIA string compactified on a K3 surface gives (1, 1) supergravity in six dimensions at low energies. The moduli space is given by (4.1), where the first factor is the moduli space (2.1) of non-linear sigma-models on K3 and the second is the dilaton zero-mode. BPS states arise from branes wrapping cycles of the K3, and include a BPS solitonic string obtained by wrapping an NS5-brane around the K3 surface. In the duality between the type IIA string compactified on K3 and heterotic strings compactified on T 4 , the six-dimensional dilatons and metrics are related by φ het = −φ iia and g het = exp(2φ het ) g iia . The heterotic moduli space is again (4.1), but now with the first factor being interpreted as the moduli space of Narain lattices of signature (4,20) [30,31]. The heterotic theory has a BPS solitonic string arising from an NS5-brane wrapping the T 4 ; under the duality, it is mapped to the type IIA fundamental string, while the type IIA NS5-brane wrapped on K3 maps to the heterotic fundamental string [32]. Such a correspondence will be useful in the analysis of BPS states in section 5.

Construction of the heterotic dual

The starting point of our construction is a point in the moduli space (2.1) that is invariant under a Z p automorphism generated by an element γ ∈ O(Γ 4,20 ). Viewing this as a type IIA construction, this is the type IIA string compactified on K3 × T 2 , where the K3 is chosen to be at a Gepner point in the K3 moduli space so that the corresponding CFT is given by a Gepner model described in [7] (e.g. the K3 at the Gepner point is (3.4) in the example given in section 2). The automorphism γ acts on the K3 as a mirrored automorphism σ̂. In the dual heterotic interpretation, the moduli space (2.1) is viewed as the moduli space of Narain lattices of signature (4,20), acted on by the heterotic T-duality group O(Γ 4,20 ). The special point in moduli space corresponds to a Narain lattice with enhanced discrete symmetry but without enhanced non-Abelian gauge symmetry; γ is then an element of this discrete symmetry group. (A result of [7] is that for a mirrored automorphism none of the angles is zero, so that no directions are left invariant by the rotation.)
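Returning briefly to the six-dimensional dilaton and metric relations quoted above, φ het = −φ iia and g het = exp(2φ het ) g iia : as an illustrative aside (not part of the original analysis), the short sympy check below verifies that applying the same map twice returns the original type IIA fields, so the two relations are mutually consistent and the duality acts as a strong/weak-coupling involution.

```python
# Illustrative consistency check of phi_het = -phi_iia and g_het = exp(2*phi_het) * g_iia:
# applying the same rules twice must give back the original IIA fields.
import sympy as sp

phi_iia = sp.Symbol('phi_iia', real=True)
g_iia = sp.Symbol('g_iia', positive=True)

phi_het = -phi_iia
g_het = sp.exp(2 * phi_het) * g_iia

# Map back using the same relations with the roles of the two theories exchanged.
phi_back = -phi_het
g_back = sp.exp(2 * phi_back) * g_het

assert sp.simplify(phi_back - phi_iia) == 0
assert sp.simplify(g_back - g_iia) == 0
print("applying the dilaton/metric map twice returns the original type IIA fields")
```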
Recall from section 2 that at a point in the moduli space (2.1) given by the coset representative (g 0 ), the stabilizer is If γ corresponds in the dual type IIA picture to a mirrored automorphismσ p , a lemma from [7] shows that the matrix M representing it can be diagonalised over the complex numbers to give where ζ p is a primitive p'th root of unity and k takes all values from 1 to p − 1 satisfying gcd(k, p) = 1; put differently, γ has an eigenspace of dimension q for each primitive p'th root of unity. The dimension q is therefore equal to 24/ϕ(p), where ϕ(p) is the Euler totient function (that is the number of integers k with k ≤ p satisfying gcd(k, p) = 1). For prime orders p ∈ {2, 3, 5, 7, 13}, the eigenspaces of γ are then all 24/(p − 1)-dimensional. The type IIA construction is an orbifold of the IIA string on T 2 × K3 by the Z p symmetry generated by (γ, t) where γ is a mirrored automorphism and t is a shift of order JHEP10(2019)214 p on one of the circles. On the heterotic side, we have a Z p orbifold by the twist γ acting as a T-duality automorphism of the Narain lattice together with a shift t. In the large volume limit of the T 2 base, the shift should agree with that on the type II side, but as discussed above, we will need to consider more general shifts here. As explained e.g. in [33], any component of a shift vector along directions in which the twist acts non-trivially may be absorbed by a redefinition of the origin of the coordinates, so that without loss of generality one may consider a shift only along the directions in which γ acts trivially. In other words, decomposing the full Narain lattice of winding and momenta as we quotient by an order p twist γ in O(Γ 4,20 ), so acting non-trivially on the Γ 4,20 lattice only, together with a shift t defined by a lattice vector δ such that p δ ∈ Γ 2,2 but δ / ∈ Γ 2,2 . It is easy to check that N = 2 supersymmetry is preserved by the heterotic orbifold in this picture. Indeed, the action of γ on the world-sheet fermions is deduced from its action on the left-moving bosons, as usual, by requiring world-sheet supersymmetry to be preserved. The group SO(4) 0 acts on the left-handed Ramond ground states as a spinor, transforming as a (2, 1) + (1, 2) under Spin(4) ∼ SU(2) × SU(2). If s 1 = ±s 2 , the twist is in just one of the two SU(2) subgroups and so half the spinor degrees of freedom remain massless. can change such a 'geometric' orbifold to a non-geometric one. However, the twist in O(16) 0 may not be related to the orbifold limit of an ordinary vector bundle. This will be discussed further in section 7. Geometric and non-geometric twists Bearing this in mind, we now address the question of whether a given theory is dual to a geometric orbifold via an O(Γ 4,20 ) transformation. The answer turns out to depend strongly on the order of the orbifold. For simplicity we discuss only the cases with p prime below. In the p = 2 case, Γ 4,20 is quotiented by the involution which flips all directions of the lattice; therefore, as the twist γ may be represented here by −1, it does not mix space-time and gauge degrees of freedom, so that its restriction to the four-torus admits a standard geometric interpretation (as the same involution that gives a T 4 /Z 2 orbifold). Moreover, γ therefore remains the same under O(Γ 4,20 ) conjugation so that the resulting theory always has a geometric interpretation. 
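Before turning to the higher-order cases, a quick numerical illustration (an aside, not part of the derivation) of the eigenvalue structure of γ described above: for each allowed prime order, the lines below list the primitive p-th roots of unity that occur and the common dimension 24/ϕ(p) of the corresponding eigenspaces, and check that these eigenspaces fill out the 24-dimensional space Γ 4,20 ⊗ R.

```python
# Illustrative aside: eigenvalue structure of the order-p twist gamma on Gamma_{4,20} (x) R.
# gamma has one eigenspace of dimension 24/phi(p) for each primitive p-th root of unity.
from math import gcd

def phi(n):
    """Euler's totient function."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for p in (2, 3, 5, 7, 13):
    primitive_ks = [k for k in range(1, p) if gcd(k, p) == 1]
    q = 24 // phi(p)                 # dimension of each eigenspace
    assert phi(p) * q == 24          # the eigenspaces fill out the 24-dimensional space
    assert q == 24 // (p - 1)        # for prime p, phi(p) = p - 1
    print(f"p = {p:2d}: phi(p) = {phi(p):2d}, eigenvalues exp(2*pi*i*k/p) for k in {primitive_ks}, "
          f"each eigenspace of dimension {q}")
```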
In the p = 3 case, looking for a representative of the conjugacy class of a twist which belongs to O(4) diag × O(16) 0 is not straightforward in general. However, it is possible to show that the explicit example of an order 3 twist given in [7] may be understood as having a geometric action (this may be seen using for instance the parametrisation of O(Γ 4,20 ) of [33]). Therefore, there exist models in the p = 3 case which are equivalent to geometric theories from the heterotic point of view. The p = 5 case is more tricky, as no explicit matrix representation of σ̂ 5 is known to the authors. It is known that there are no supersymmetric T 4 orbifolds of order five, see e.g. [34]. A simple argument given in [35] rules out the possibility of a left-right symmetric action of the orbifold on the T 4 . Let us assume first that there is an order 5 twist γ with a geometric action, that is such that γ ∈ O(4) diag × O(16) 0 ; then, looking at the action on T 4 , N = 2 supersymmetry imposes the trace of its matrix representation to be equal to 8 cos(±2π/5) = 2(√5 − 1) through the |s 1 | = |s 2 | condition derived in section 2. This is incompatible with the requirement that the twist belongs to the duality group of the lattice, as this forces in particular the trace of its matrix representation to be integer-valued. Therefore, although there exist rank-4 Euclidean lattices admitting an order five symmetry, it is not possible to find a twist whose action would admit a geometric interpretation in the p = 5 case. In the p = 7 and p = 13 cases, the orbifold must be asymmetric by construction. Indeed, such a construction could only be obtained if the twist were acting as an automorphism in O(4) diag (together with an action on the gauge degrees of freedom). A result from lattice theory states that there exist Euclidean lattices Λ admitting an order N symmetry if and only if rank Λ ≥ ϕ(N ), ϕ being Euler's totient function as before [36]. It is then immediate that no rank-4 lattice may admit an order-p symmetry when p = 7 or 13. The asymmetry of the construction between left- and right-movers on the T 4 is even more striking in the p = 13 case, in which there are exactly two angles of absolute value 2πk/13 for each k between 1 and 6; therefore, the N = 2 supersymmetry condition s 1 = ±s 2 ensures that there may be no angle r I equal to any of the s i 's, making the asymmetric nature of the model obvious in this case. Once again, we can therefore conclude that this construction does not admit a standard geometric interpretation in the heterotic framework either.

Modular invariance and restrictions on the shift vector

We now turn to the choice of shift vector in the heterotic orbifold. The twist γ is to be accompanied by a translation by a shift vector δ with pδ ∈ Γ 2,2 . The choice of group action Z p ⊂ U(1) 2 ≅ T 2 on the type IIA side of the duality fixes the momentum associated with this shift vector, i.e. the generator of translations along the corresponding one-cycle, but not its action on states with winding number. It has long been known that in order to preserve modular invariance in orbifold models, there must be a relation between twists and shift vectors [37,38]. For later convenience, we define the charge vector ∆ ∈ Γ 2,2 \ (p Γ 2,2 ) so that the shift vector δ satisfies ∆ = p δ. As discussed in section 2, preserving N = 2 space-time supersymmetry imposes s 1 = ±s 2 .
Using this, it is possible to show that the necessary and sufficient condition for modular JHEP10(2019)214 invariance of our theory is given by 4 [37,38] Furthermore, the spectrum of γ is completely fixed; indeed, equation (4.3) shows that, of the 12 angles, there are exactly 12 ϕ(p) angles equal to k p mod 1 for each value of k between 1 and p−1 such that gcd(k, p) = 1. Then, as shown in appendix A, the quantity 10 may be explicitly computed for any p, so that equation (4.5a) may be simplified to become with the product running over the distinct prime divisors q of p. We parametrise the shift vector as δ = (α i , β i ) so that ∆ = p(α i , β i ) is a lattice vector and the shift is generated by α i k i + β i w i where k i and w i are respectively the momentum and winding charges; the constraint (4.6) then becomes the condition which prevents β from vanishing, as Ψ p may never be vanishing modulo p (since any q in the above product is a divisor of p and as such is not invertible over Z p , unlike s 2 ∈ Z × p ). This translates into a non-perturbative modification of the orbifold from the type IIA perspective that will be discussed in section 6. As a side remark, we may note that whenever p is square-free, the condition (4.6) simplifies to ∆ 2 = 2s 2 mod f p ; (4.9) in particular, equation (4.9) holds for p prime. One can further simplify this condition by choosing s = 1, i.e. that the rotation in O(4) 0 corresponds to the angles 2π/p and ±2π/p (any other choice is related to this one by relabelling the orbifold sectors), so that Let us now derive the partition function of the theory in order to check explicitly the relations (4.5). As usual with conformal field theories defined on orbifolds, the partition function of the model may be expressed as a sum JHEP10(2019)214 over all allowed boundary conditions (that is, twisted or untwisted), with the contribution from the (k, l) sector defined as where g is the element of the point group whose action combines a twist by γ and a shift by t and where Tr H k stands for the trace over states in the sector twisted by g k . The various blocks of the partition function are then computed in the usual way to give 5 where the second equation is only valid for (k, l) = (0, 0). In the above equations, we have defined Θ Λ as the sum over charges lying in the lattice Λ, the sum over the lattice Γ k l as the function F k l as and the "degeneracy factor" κ k l as . (4.16) One may notice that the phase factor in the partition function block (4.13b) may be set to one by choosing an appropriate representative of the shift vector δ ∈ 1 p Γ 2,2 /Γ 2,2 , as shown in appendix A. As discussed in the introduction, the Narain lattice Γ 4,20 appearing in the (0, 0) sector lies at a point in moduli space corresponding, on the type IIA side, to a Gepner model admitting a mirrored automorphism of order p. Anticipating the following section, we emphasise here that no sum over the charge lattice Γ 4,20 appears (except for the term with (k, l) = (0, 0)), which is due to the fact that the twist γ acts non trivially on the whole lattice; hence any state with non-vanishing momentum in Γ 4,20 is projected out in the orbifolding procedure (see e.g. [37] for an extensive JHEP10(2019)214 discussion). On the type IIA side of the duality, it means that there is no lattice of BPS D-brane charges, which is easy to understand as that theory has no massless Ramond-Ramond ground states [2]. 
There is no non-abelian gauge group enhancement coming from Γ 4,20 on the heterotic theory as we are at a point in the moduli space corresponding to a non-singular K3 CFT. The full partition function of the heterotic orbifold CFT, given by the sum (4.11) over all sectors, is therefore modular invariant provided that This is precisely what is ensured by the equations (4.5) which are therefore interpreted, in the heterotic picture, as necessary constraints on the shift vector to obtain a (perturbatively) well-defined string vacuum. In short, a non-vanishing winding shift is imposed by the modular invariance constraints. BPS states BPS states have dual interpretations in the two dual theories, the type IIA theory on K3 × T 2 and the heterotic string on T 4 × T 2 [11] (see e.g. table 1 of [39]). In particular, winding and momenta along one-cycles of the four-torus in the heterotic theory correspond to D-branes wrapping cycles of K3 in the type IIA description of the theory, while momentum and winding states on T 2 in the heterotic picture are respectively understood as momentum states on T 2 and as NS5 branes wrapping K3×S 1 ⊂ K3×T 2 from the type IIA perspective. On the type IIA side, after the quotient by the mirrored K3 automorphism, no D-brane states remain; this is due to the fact that space-time supersymmetry is entirely carried by left-movers so that there are no massless Ramond-Ramond p-forms hence no BPS Dp-branes. In the heterotic dual, this corresponds to the fact that fundamental strings with momentum and/or winding on the 4-torus are projected out, as the automorphism used in the quotient leaves no cycle of T 4 invariant. Fundamental heterotic strings with winding around a one-cycle of the T 2 are dual to the type IIA NS5-brane wrapping the same one-cycle of the base together with the K3 fibre; on taking the quotient, this descends to what can be thought of as an NS5-brane wrapping a 'cycle' of the non-geometric Calabi-Yau background. 6 In this section, we shall study the BPS states that arise in the perturbative spectrum of the heterotic string orbifold. The type IIA duals of these states will in general be non-perturbative states carrying NS5-brane charge. Helicity supertraces In practice, a powerful tool in studying BPS states is the computation of helicity supertraces, that are protected quantities which do not change when the string coupling is increased; however, in four-dimensional theories with N = 2 supersymmetry, they can jump across walls of marginal stability in the moduli space. It can be shown (see e.g. [41] JHEP10(2019)214 for a review of helicity supertraces properties and references therein) that in N = 2 theories the only non-vanishing helicity supertrace is for any representation R of the N = 2 algebra, with J 3 the space-time helicity operator. Ω 2 vanishes for any (long) massive representation of N = 2 supersymmetry while it is unchanged under recombinations of two BPS multiplets into a long multiplet or vice versa, making it a well-defined quantity on the moduli space. In the heterotic frame, it will receive contributions from the perturbative Dabholkar-Harvey (DH) half-supersymmetric BPS states [42] that are heterotic fundamental strings in their left-moving superconformal ground state characterized by their winding and momentum charges on the torus. 
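Since the defining formula for Ω 2 did not survive in the equation above, the following toy computation is an aside that assumes the standard second helicity supertrace, Ω 2 (R) = Tr_R (−1)^{2J 3 } J 3 ², and evaluates it on the massless N = 2 multiplets; the values reproduce the contributions (+1 for the supergravity and vector multiplets, −1 for hypermultiplets) quoted below.

```python
# Illustrative aside: second helicity supertrace Omega_2 = Tr (-1)^(2*J3) * J3^2,
# evaluated on massless N = 2 multiplets (helicity content listed as {J3: multiplicity}).
from fractions import Fraction as F

def omega2(helicity_content):
    """Tr (-1)^(2*J3) * J3^2 over the listed states."""
    return sum(mult * (-1) ** (int(2 * j) % 2) * j * j
               for j, mult in helicity_content.items())

multiplets = {
    # graviton (+-2), two gravitini (+-3/2), graviphoton (+-1)
    "supergravity":   {F(2): 1, F(-2): 1, F(3, 2): 2, F(-3, 2): 2, F(1): 1, F(-1): 1},
    # vector (+-1), two gaugini (+-1/2), two real scalars (0)
    "vector":         {F(1): 1, F(-1): 1, F(1, 2): 2, F(-1, 2): 2, F(0): 2},
    # two fermions (+-1/2), four real scalars (0)
    "hypermultiplet": {F(1, 2): 2, F(-1, 2): 2, F(0): 4},
}

for name, content in multiplets.items():
    print(f"{name:14s} Omega_2 = {omega2(content)}")
# prints +1, +1 and -1 respectively, matching the multiplet contributions quoted below
```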
It is possible to extract Ω 2 from the partition function by introducing a chemical potential for the helicity; more precisely, defining with Tr H the trace over the whole Hilbert space of the theory and with J the left and right moving components of the helicity respectively, Ω 2 is generated by the function B 2 defined as where Λ stands for the lattice of electric charges of the orbifolded theory, given here by Using the results from section 4 and identities from appendix A, one may then show that the modified partition function reads where the orbifold blocks are now . (5.6b) Differentiating Z(v,v) with respect to v andv gives the index B 2 : where the primed sum k,l stands for the sum running over all values of (k, l) in Z p × Z p except (0, 0). Note that the term with (k, l) = (0, 0), i.e. the untwisted sector contribution with no quotienting group element insertion, does not contribute to the index. This illustrates once more the absence, in the orbifolded theory, of states with charges lying in the Γ 4,20 lattice. As the automorphism generating the Z p group we are quotienting by has a non-trivial action on the charge lattice, one cannot factorise the BPS index as the product of a sum over the charge lattice by a function with well-defined modular properties; however, it is still possible to split it into smaller blocks which factorise in a similar way by expressing the charge lattice as where we define Λ (k,a) as Λ (k,a) := v + kδ ∈ Γ 2,2 + kδ v, δ = a p mod 1 . Each Γ k l may then be expressed in terms of the theta functions associated with the lattices Λ (k,a) , for a between 0 and p − 1. This allows one to extract from the BPS index (5.8) the indices for each sublattice Λ (k,a) of the charge lattice, as all charges in a given Λ (k,a) transform in the same way under the whole automorphism g p . Defining as before Θ (k,a) as the whole B 2 index may be expressed as There is a subtlety to take into account here; the definition (5.3) of B 2 implies that two different charges Q and P will contribute to the same index Ω 2 if they satisfy Q 2 i = P 2 i JHEP10(2019)214 for i = L, R (Q L and Q R standing for the left and right components of the charge vector Q respectively, as before). In particular, this means that opposite charge vectors Q and −Q always contribute to the same index Ω 2 (Q); from a more physical point of view, this is a reflection of the CPT invariance of the theory which imposes that any representation of the N = 2 supersymmetry algebra must be accompanied by its CPT conjugate, which has charge −Q, to form a CPT-invariant multiplet. This means that the index Ω 2 (Q) must be computed by taking into account not only the contributions from B (k,a) 2 but also from the possible non-trivial degeneracies of states in the sum over the charge lattice. Some explicit results A straightforward check of the validity of the above indices may be obtained by evaluating the constant term in B (0,0) 2 ; indeed, this will give the index of the N = 2 supersymmetry multiplets whose charge Q has vanishing norm. At a generic point in moduli space, these are the only massless multiplets of the theory so that this gives some insight about the dimension of the Coulomb and Higgs branches. More precisely, one may show (see e.g. [41]) that the supergravity and vector multiplets each contribute +1 to B 2 while hypermultiplets each contribute −1. The classical vector moduli space was shown to be that of the STU model in [7]. 
With three vector multiplets and one supergravity multiplet, one expects the constant term to be 4 − n H , with n H the number of hypermultiplets remaining in the orbifold theory. An explicit expansion of B (0,0) 2 in power series yields results that match the analysis of the moduli space that will be given in [19]: one finds for instance respectively 20, 10, 4, 2 and 0 massless hypermultiplets in the p = 2, 3, 5, 7 and 13 theories (these numbers are the quaternionic dimensions of the corresponding hypermultiplet moduli spaces). For each specific value of Q, it is also possible to extract the index Ω 2 (Q) from the formulae given above. Let us consider for simplicity the five-dimensional theory one would get from an orbifold of T 4 × S 1 , a decompactification limit of the case we have studied so far. The charges may then be parametrised as Q = (n, w), n and w being the momentum and winding numbers of the string respectively. Finding Ω 2 (Q) may easily be done by identifying in which sublattice Λ (k,a) Q lies and using the level-matching condition (5.13), where N is the level of the BPS state and α k arises from the difference in ground-state energy between left- and right-movers. Setting |s i | = 1 mod p for i = 1, 2 (which amounts to choosing a generator γ of the cyclic group Z p ), α k may be explicitly computed for 1 ≤ k ≤ p − 1 and, of course, α 0 = −1 as usual. Let us take a simple example, say Q = (1, 5) in the above notation, and consider also a model with p = 3; then, setting once again δ = (1/3, 1/3), one has ⟨Q, δ⟩ = 0 mod 1, which indicates that Q ∈ Λ (0,0) with the above notations. Now, as Q²/2 = 5, Ω 2 (Q) is simply given by the coefficient of the q̄^5 term in the power expansion of B (0,0) 2 ; computing the first terms in this expansion gives in this specific case Ω 2 [Q = (1, 5)] = 176. One should remember here that Ω 2 does not represent a degeneracy per se, as contributions from integer and half-integer spin multiplets are counted respectively positively and negatively (this makes sense as we are only considering short multiplets here, since any long multiplet has vanishing contribution to B 2 , as explained earlier). This explains for instance that for other values of Q, one may find negative values of Ω 2 (e.g. Ω 2 [Q = (2, 2)] = −90). One may also consider BPS states lying in twisted sectors, which have non-integer charges in general; explicit computations show that |Ω 2 | seems to grow faster with the level N in the twisted sectors than in the untwisted one (e.g. Ω 2 [Q = (1/3, 10/3)] = −236196, while the two untwisted-sector examples considered above had higher values of N but lower values of Ω 2 ). In [39], it was noted that Ω 2 is generically exponentially smaller in untwisted sectors than in twisted ones in N = 2 orbifold models; explicit expansions of the various B (k,a) 2 in powers of q̄ seem to confirm this statement.
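The bookkeeping in the example above is simple enough to automate. The lines below are illustrative only, assuming the standard Γ 1,1 pairing ⟨(n, w), (α, β)⟩ = nβ + wα and norm Q² = 2nw; they identify, for untwisted-sector charges, the sublattice Λ (0,a) and the power of q̄ at which Ω 2 (Q) sits, reproducing the classification used for Q = (1, 5) and Q = (2, 2). The quoted values Ω 2 = 176 and −90 themselves come from expanding the B (0,a) 2 and are not recomputed here.

```python
# Illustrative bookkeeping for untwisted-sector BPS charges Q = (n, w) on Gamma_{1,1},
# with shift vector delta = (alpha, beta) = (1/3, 1/3) and p = 3, as in the example above.
from fractions import Fraction as F

p = 3
alpha, beta = F(1, 3), F(1, 3)

def classify(n, w):
    pairing = n * beta + w * alpha   # <Q, delta> for the Gamma_{1,1} pairing (assumed convention)
    a = int((pairing % 1) * p)       # Q lies in the sublattice Lambda_(0, a)
    level = n * w                    # Q^2 / 2 = n*w: power of q-bar carrying Omega_2(Q)
    return a, level

for Q in [(1, 5), (2, 2)]:
    a, level = classify(*Q)
    print(f"Q = {Q}: sublattice Lambda_(0,{a}), coefficient of q-bar^{level} in B_2^(0,{a})")
```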
The asymptotic behaviour of Ω 2 (Q) is also accessible for high values of Q², following the procedure described in [39], which we will briefly review here. As illustrated above with a few examples, the power expansion of the B (k,a) 2 gives Ω 2 (Q) level by level; the asymptotics can be extracted as follows. When N reaches high values, the imaginary exponential in (5.16) becomes rapidly oscillating, so that the integral is dominated by the behaviour of the integrand around t ∼ 0, that is at e^{−t} ∼ 1. Expanding S·B (k,a) 2 instead in powers of q̄ then leads to an approximation of the asymptotic behaviour of Ω 2 (Q) for large values of Q²/2. Here, S is the usual generator of SL(2, Z) acting on the world-sheet parameter τ as τ → −1/τ . We consider below the models with p a prime number. Explicit computations to leading order show that the asymptotic behaviour of Ω 2 in the untwisted sector is given, up to a multiplicative constant, by (5.18), where J 1 (I 1 ) is the (modified) Bessel function of the first kind. In the twisted sectors, the asymptotic behaviour of the BPS index is surprisingly identical for any prime order p of the quotienting group; it is then given in all these cases by (5.19). Replacing the Bessel functions by their asymptotic expansions in equations (5.18) and (5.19) then confirms the exponentially small growth of |Ω 2 (Q)| in the untwisted sector compared to that of |Ω 2 (Q)| in the twisted ones discussed in [39].

The non-perturbative type IIA construction and duality

In section 4 modular-invariance constraints on the heterotic duals of the type IIA non-geometric Calabi-Yau backgrounds were analyzed. It was found that perturbative consistency of the heterotic constructions leads to the constraint (4.8) on the shift vector for the two-torus, and this implies the shift vector should have non-vanishing winding charge. In this section we will examine the consequences of this condition on the type IIA side of the duality, where it leads to a non-perturbative modification of the K3 × T 2 orbifold. For clarity of the presentation, we will restrict ourselves here to the case in which the order p of the automorphism is a prime number.

Interpretation of the shift vector

For both the type IIA and heterotic constructions, we have an orbifold by a twist γ ∈ O(Γ 4,20 ) and a shift t on the two-torus by a vector δ = (α i , β i ) with p δ ∈ Γ 2,2 . The momentum vector k i (i = 1, 2) on the 2-torus combines with the string winding charges w i to form a generalised momentum vector Π I = (k i , w i ) ∈ Γ 2,2 . The shift acts on a momentum state |Π⟩ = |k, w⟩ with k i , w i ∈ Z by multiplication by the phase exp(2πi(α i k i + β i w i )). For a shift symmetry of order p, i.e. isomorphic to Z p , we take ∆ = p δ ∈ Γ 2,2 to be a lattice vector, with norm ∆ 2 = 2p 2 α i β i . If the momenta k i are realised on the periodic coordinates y i ∼ y i + 2π of the 2-torus in the usual way, exp(2πiα i k i ) generates the shift y i → y i + 2πα i . If dual coordinates ỹ i conjugate to the winding charge are introduced, then exp(2πiβ i w i ) generates the dual shift ỹ i → ỹ i + 2πβ i , so the shift acts on the coordinates Y I = (y i , ỹ i ) of the doubled torus as a translation by 2πδ. In both type IIA and heterotic constructions with a single twist, we can take the shift to be on a single cycle of the 2-torus, so that p δ ∈ Γ 1,1 ⊂ Γ 2,2 . In the perturbative type IIA construction, we had δ = (α 1 , 0, 0, 0), giving a shift y 1 → y 1 + 2πα 1 . For the heterotic string, the modular invariance constraint ∆ 2 = 2p 2 α i β i = 2 mod p obtained in section 4, eq. (4.10), implies that both α and β are non-zero. Setting α 1 = 1/p (in order to match with the perturbative type IIA construction in the large T 2 limit), one can solve this constraint with δ = (α 1 , 0, β 1 , 0) and β 1 = 1/p. This vector generates the shifts y 1 → y 1 + 2π/p and ỹ 1 → ỹ 1 + 2π/p. It was to be expected that the shift in y 1 should agree in the two pictures, but we see that there is a surprising difference in that a shift in the dual coordinate ỹ 1 is essential for heterotic modular invariance but there was no corresponding shift on the type IIA side in our construction.
JHEP10(2019)214 In our models, the perturbatively consistent type IIA construction determines the heterotic dual in the large volume limit of the T 2 , with an orbifold by a twist γ ∈ O(Γ 4.20 ) and a shift of the coordinate y 1 of a cycle of T 2 . However, away from the decompactification limit this heterotic construction is not perturbatively consistent and must be modified by winding number shifts. Then duality implies that there should be a dual modification of the type IIA theory. This modification is non-perturbative in the type IIA theory, so does not affect the perturbative consistency of the original construction. This is in accord with the discussion of [18], where it is argued that duality does not completely determine the shift vector, and consistency conditions, such as level matching and modular invariance are needed to fix the shift vector. A similar situation was encountered in the FHSV model [12]. We will discuss further this example in subsection 6.3, and compare it with our models. In both cases, it is natural to speculate that the modifications in the type IIA theory could arise from a condition for non-perturbative consistency of the IIA string. The non-perturbative type IIA construction A convenient way of representing the modifications to the type IIA construction is as follows. The transformation t acts on a heterotic state with momentum k and winding w by The type IIA dual of the heterotic momenta k i and winding charges w i are some charges x i and z i in the Γ 6,22 lattice of type IIA compactified on K3 × T 2 . For our construction, x i remains the momentum on the torus, so k i = x i , and z i is the winding charge on the i'th circle for the solitonic string obtained by wrapping the IIA NS5-brane on K3, so that z i is the NS5-brane charge for NS5-branes wrapping K3 × S 1 , with the S 1 being the i'th circle. (For the FHSV model, x 1 and z 1 are D0-brane and D4-brane charges, as we will discuss in the next subsection.) Then for the models considered here, the heterotic transformation (6.11) becomes the type IIA transformation where k i is the momentum on the i'th circle and z i is the winding number of the solitonic string (from the NS5-brane wrapped on K3) on the i'th circle. From eq. (6.9), consistency of the heterotic perturbative limit is satisfied with α 1 = β 1 = 1/p an α 2 = β 2 = 0. Non-perturbative type IIA states with non-zero winding number for the solitonic string around the first circle of T 2 are therefore charged under the symmetry used to obtain the non-geometric Calabi-Yau background. For perturbative states with z = 0, the transformation (6.12) is of course the same as the one used in the perturbative construction with shift vector (6.7). As we have seen, the action of t on a heterotic state |k, w given by (6.11) gives a shift of the coordinates y i conjugate to k i together with a shift of the dual coordinates y i conjugate to w i . Similarly, for the IIA string, if we introduce coordinatesŷ i conjugate JHEP10(2019)214 to z i , then the action of t on a type IIA state (6.12) can be understood as a shift of the coordinates y i ,ŷ i . In general, phase rotations of this kind dependent on brane charges can be reinterpreted as shifts of suitable dual coordinates, justifying our referring to t as a shift; this will be discussed further in the next section. 
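A minimal numerical sketch of this shift, as an aside with the phase conventions taken from the discussion above: with δ = (1/p, 0, 1/p, 0) the norm ∆² = 2p²α 1 β 1 equals 2 as required, the induced phase has order exactly p on integer charges, and a state with no NS5-brane winding (z 1 = 0) only feels the geometric momentum shift.

```python
# Illustrative sketch of the order-p shift t acting on a state with momentum k1 and
# (heterotic winding / IIA NS5-wrapping) charge z1, following the eq. (6.12)-type phase.
import cmath

p = 3
alpha1, beta1 = 1.0 / p, 1.0 / p          # delta = (alpha1, 0, beta1, 0)

Delta_sq = 2 * p**2 * alpha1 * beta1      # Delta^2 = 2 p^2 alpha.beta
assert abs(Delta_sq - 2) < 1e-12          # the value required by heterotic modular invariance

def t_phase(k1, z1):
    return cmath.exp(2j * cmath.pi * (alpha1 * k1 + beta1 * z1))

# The phase defines a Z_p action: applying it p times is trivial on integer charges.
for (k1, z1) in [(1, 0), (0, 1), (2, -1), (5, 7)]:
    assert abs(t_phase(k1, z1) ** p - 1) < 1e-9

# A perturbative IIA state (z1 = 0) only sees the geometric shift y1 -> y1 + 2*pi/p,
# while a state with NS5-brane winding picks up an extra charge-dependent phase.
print("phase on (k1=1, z1=0):", t_phase(1, 0))
print("phase on (k1=1, z1=1):", t_phase(1, 1))
```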
The above discussion implies, using heterotic/type IIA duality, that non-perturbatively consistent non-geometric Calabi-Yau backgrounds in type IIA superstring theory should be defined using a shift symmetry of the form (6.12) that includes a non-perturbative contribution. In the FHSV construction that we will discuss below, a similar type of nonperturbative modification of the shift symmetry occurs, involving D-brane charges rather than NS5-brane charges. The FHSV model The starting point for the FHSV construction [12] is a special K3 surface admitting a freely acting Z 2 involution, such that the quotient of K3 by this is an Enriques surface. This non-symplectic K3 automorphism acts on the lattice (3.2) of total K3 cohomology by interchanging two E 8 ⊕ U sublattices, acting as −1 on one sublattice U and leaving the final U invariant. 9 This is then combined with the reflection y i → −y i on the coordinates of T 2 to give a freely acting automorphism γ of K3 × T 2 . The quotient of K3 × T 2 by this gives a Calabi-Yau manifold with Euler number zero, called the Enriques Calabi-Yau 3-fold. It is a K3 fibration over P 1 with a monodromy around each of the four singularities of the base given by the Enriques involution. The action of γ on the charge lattice of IIA strings on K3 × T 2 is then to interchange the two (E 8 ⊕ U ) terms, act as −1 on U ⊕ U ⊕ U and to leave the final U invariant. To find the heterotic dual of the FHSV orbifold, Γ 6,22 is interpreted as the Narain lattice for the heterotic string compactified on T 6 , with the six sub-lattices U associated with the lattice Γ 6,6 of heterotic momenta and winding numbers on the six-torus. The action of γ on the charge lattice and moduli space then defines an action on the heterotic string theory (as we have done in section 4 for our models). In particular, the involution leaves one of the six circles invariant. However the quotient of the heterotic string theory by this involution is not modular invariant. This was remedied in [12] by supplementing the twist γ by a shift t on the circle that is invariant under the involution. The shift vector δ is such that ∆ = 2δ ∈ U (where this U is the last factor in (6.13), i.e. the invariant sub-lattice) and modular invariance requires ∆ 2 = 2, so that δ = (1/2, 1/2). Then the shift y → y + π on the circle is accompanied by a shiftỹ →ỹ + π on the dual circle. While this heterotic description looks quite similar to what happens in our models, in the type IIA duality frame the physics is rather different. The identification of the JHEP10(2019)214 heterotic and type IIA charge lattices under duality relates the heterotic momentum k and winding w on the invariant circle with the type IIA D0-brane charge x and the charge z for D4-branes wrapping K3: k = x, w = z (6.14) In the type IIA duality frame, the action of the 'shift' t is then given as a phase rotation of the form Then the IIA involution is supplemented by multiplying by the phase (6.15) depending on the D0-brane and D4-brane charges. That is, the involution (γ, t) consists of the geometric involution on K3 × T 2 (the freely acting involution of K3 combined with the reflection on T 2 ) supplemented by the phase rotation (6.15). These modifications to the Calabi-Yau compactification are visible to D-branes but not to fundamental strings, and so will not affect the perturbative type IIA string. 
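For comparison, the FHSV shift just described is the p = 2 version of the same mechanism. Assuming the phase (6.15) takes the form suggested by the identifications k = x, w = z and δ = (1/2, 1/2), i.e. (−1)^{x+z} on a state with D0-brane charge x and D4-brane charge z, the lines below (an illustrative aside) check that ∆² = 2 and that the phase is an order-two symmetry which is invisible to states carrying no D-brane charge.

```python
# Illustrative sketch of the FHSV-type shift: delta = (1/2, 1/2), so Delta = 2*delta has
# Delta^2 = 2, and the phase on a state is (-1)**(x + z) with x = D0 charge, z = D4 charge
# (assumed form of the phase (6.15), following the identifications k = x, w = z).
delta = (0.5, 0.5)
Delta = tuple(2 * c for c in delta)
Delta_sq = 2 * Delta[0] * Delta[1]        # Gamma_{1,1} norm
assert Delta_sq == 2                      # modular invariance requirement quoted above

def fhsv_phase(x, z):
    return (-1) ** (x + z)

# Order two on integer charges, and trivial on states with no D-brane charge.
assert all(fhsv_phase(x, z) ** 2 == 1 for x in range(-2, 3) for z in range(-2, 3))
assert fhsv_phase(0, 0) == 1
print("phase on (D0, D4) = (1, 0):", fhsv_phase(1, 0))
print("phase on (D0, D4) = (1, 1):", fhsv_phase(1, 1))
```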
Dualities and quotients Suppose we have a theory X on a background M with a symmetry G, together with a duality map that takes this to a theory X on a background M with a symmetry G . Then we can consider the quotient of X on M by G and the quotient of X on M by G and ask whether they are dual, i.e. whether taking the quotient commutes with the duality transformation. As discussed in [18], in general the quotients will not be dual, but in some special cases, such as those in which the adiabatic argument applies, they can be dual. As usual, without a non-perturbative formulation of string theory the duality cannot be proved, but we can seek non-trivial tests of the duality. We have already seen here a case where they are not dual. Taking X on M to be the IIA string on K3 × S 1 and taking G to be the group Z p generated by a twist of the K3 CFT (corresponding to a mirrored automorphism) and a shift in a circle coordinate, then the heterotic dual of this is not modular invariant and so not consistent. In this case we modified the heterotic symmetry G to include a winding contribution to the shift, and then made the dual modification to the action of G, involving non-perturbative NS5-brane contributions. Then a necessary condition for the quotients to be dual is that the group G is chosen so that both are perturbatively consistent. Further duals could then give further non-perturbative constraints on the group G. Here we are interested in two examples: our non-geometric Calabi-Yau construction and the FHSV model for the type IIA string, together with the conjectured heterotic duals that were discussed in section 6. Consistency of the heterotic dual required modifications of the original symmetry to include D0-and D4-brane contributions in the FHSV model and NS5-brane contributions for the non-geometric Calabi-Yau construction. However, as we shall see, this is not enough to completely determine the non-perturbative action of the symmetry in each case. In our non-geometric Calabi-Yau construction, the adiabatic argument provides strong support for the duality with the heterotic T-fold. JHEP10(2019)214 We now turn to the action of duality transformations on our model and that of FHSV to obtain new dual constructions. For this, a duality covariant viewpoint is useful. Compactifications to five dimensions We consider first compactifications to five dimensions, in both heterotic and type II duality frames. Symmetries and automorphisms. The heterotic string compactified on T 5 or type IIA string compactified on K3 × S 1 has, at generic points in the moduli space, a symmetry In the heterotic string, the BPS states carrying the charge K are heterotic five-branes wrapping T 5 . This charge can be thought of as the winding number on S 1 of the solitonic string obtained from wrapping the heterotic five-brane on T 4 . The solitonic string of the heterotic theory is dual to the fundamental string of the type IIA theory, so in the type IIA theory the singlet charge K is the winding number of fundamental type IIA strings on the S 1 in K3 × S 1 . In the IIA string theory on K3 × S 1 there is not a T-duality relating the winding number K to the momentum on S 1 , as that T-duality is not a proper symmetry of the IIA theory, but instead maps the IIA string theory on K3 × S 1 to the IIB string theory on K3 × S 1 . We are interested in automorphisms that consist of a twist γ ∈ O(Γ 5,21 ) and a shift t ∈ U(1) 27 in which t commutes with γ. 
One possibility is to choose the shift t to be generated by the singlet charge K, and then any γ ∈ O(Γ 5,21 ) can in principle be used. Another is to choose a sub-lattice Γ 4,20 ⊕ Γ 1,1 ⊂ Γ 5,21 so that the symmetry algebra has a subgroup and to use a twist γ ∈ O(Γ 4,20 ) from the first factor and a shift t ∈ U(1) 2 from the second factor, and these indeed commute. The automorphisms that we used in earlier sections are of this form. with corresponding sublattice Γ 5,5 ⊂ Γ 5,21 splits the heterotic degrees of freedom into degrees of freedom on T 5 described by a CFT on T 5 and the remaining right-moving modes representing the gauge degrees of freedom. This choice is not unique, and acting with the duality group O(Γ 5,21 ) will change the split into torus and gauge degrees of freedom. For a twist γ ∈ O(Γ 4,20 ) from the first factor in (7.2) and a shift t ∈ U(1) 2 from the second factor in (7.2), it is natural to choose a torus T 4 × S 1 so that the first factor of (7.2) acts on the heterotic string on T 4 and the second acts on the CFT on S 1 . Then the heterotic momentum k and winding number w on the S 1 factor are the charges generating the U(1) 2 and transforming as a doublet under O(1, 1). A shift generated by (k, w) then gives a heterotic automorphism of the kind discussed in earlier sections. It can in principle be augmented by a shift generated by the singlet charge K. The 3 charges (k, w, K) take values in a lattice Γ 1,1 ⊕ Z and transform as a 2 + 1 under O(1, 1), with k and w forming a doublet. The general construction could then involve a shift vector δ = (α, β, κ) with three components, so that δ · Π = αk + βw + κK . (7.5) This would then lead to a charge-dependent phase exp(2πiδ · Π) in the automorphism. The transformation generated by K is non-perturbative and does not affect the perturbative heterotic string. Perturbative consistency requires that (α, β) satisfy some modular invariant constraints, but places no constraint on κ. For the models considered in this article, the condition (4.8) is satisfied for αβ = 1/p 2 . As we shall see, perturbative consistency of dual forms of the theory will impose further constraints on the shift. Acting with O(Γ 5,21 ) will transform k and w into two other linear combinations of the 26 non-singlet charges, and in particular can lead to shifts that involve charges from the gauge sector. This can also be thought of as changing the original choice of split into T 5 degrees of freedom and gauge degrees of freedom to a new choice. It will also transform the twist γ to a conjugate twistγ. Alternatively, we can take the shift t to be generated by the singlet charge K, and take γ ∈ O(Γ 5,21 ). Then the shift is for some κ. This shift does not affect the perturbative heterotic string, so the perturbative construction is simply a quotient by γ ∈ O(Γ 5,21 ). In general, this will have fixed points and will result in a non-freely acting asymmetric orbifold of the heterotic string. This then restricts γ to satisfy the constraints of [37,38] for the asymmetric orbifold to be modular invariant. JHEP10(2019)214 The type IIA string perspective. As we have seen, there are many ways of choosing a split of the heterotic degrees of freedom into degrees of freedom on T 5 and gauge degrees of freedom. For any such choice of T 5 , one can choose a T 4 ⊂ T 5 in a number of ways, and for each choice one can dualize the heterotic T 4 to a type IIA K3. 
Thus there are many ways of choosing a K3 moduli space as a subspace of the five-dimensional moduli space; it is natural to choose a split such that the twist γ acts on K3 and the shift t on S 1 , and we now investigate this choice. In the type IIA string, the NS-NS 2-form is dualised to a vector field with charge ẑ. This is the charge for NS5-branes wrapped on K3 × S 1 . This can also be thought of as the winding charge for the solitonic string obtained from wrapping NS5-branes on K3, and so is dual to the heterotic string winding number. In addition, there is a momentum k̂ and a winding ŵ of the type IIA string on the extra circle. There are again 3 charges, and duality relates these to heterotic charges: k = k̂, w = ẑ and K = ŵ. Thus for the type IIA string, it is (k̂, ẑ) that form a doublet under O(1, 1) and ŵ is a singlet. The general construction involves a shift vector δ = (α, β, κ) with three components, giving the heterotic shift (7.5), which is realised in the type IIA string as δ · Π = αk̂ + βẑ + κŵ. This shift leads to a charge-dependent phase exp(2πiδ · Π) in the automorphism.

The type IIB string perspective. T-duality on the S 1 takes the IIA string on K3 × S 1 to the IIB string on K3 × S 1 . If the IIB string has momentum k̂ B and winding ŵ B on the S 1 , and NS5-brane charge ẑ B for NS5-branes wrapping K3 × S 1 , these are related to the IIA string charges k̂, ŵ, ẑ by the T-duality map, and the shift with shift vector δ = (α, β, κ) then acts on the type IIB string through the corresponding charge-dependent phase.

Models. Our original type IIA construction reviewed in sections 2 and 3 had α ≠ 0. Perturbative consistency of the heterotic dual theory required β ≠ 0, with αβ = 1/p 2 . Perturbative consistency of the type IIA construction was achieved with no type IIA winding contributions, so this means it is consistent to take κ = 0. Then with α = β = 1/p and κ = 0 we obtain a theory which is modular invariant in both the perturbative heterotic and perturbative type IIA formulations. Taking α = β = 1/p but with κ ≠ 0, type IIA level-matching requires κ = 0 mod p, so that the shift κŵ is by a lattice vector and so the corresponding phase is trivial. There is then no loss of generality in taking κ = 0. In this case the perturbative IIB formulation is also consistent. Acting with O(Γ 5,21 ) will in general take the twist γ to a conjugate transformation that acts not just on the K3 CFT but which acts on the full K3 × S 1 CFT. Note that for p = 2 the action of the conjugate transformation on the string theory may include the world-sheet parity-reversing transformation Ω, leading to an orientifold, or (−1) F L . A factor of Ω is needed whenever the conjugate transformation reverses the space-time parity. At the same time, the O(Γ 5,21 ) transformation will rotate the charges k̂, ẑ to other charges for the U(1) 26 symmetry. The singlet charge K = ŵ does not change. (This can instead be viewed as changing which subsector of the theory is to be interpreted as corresponding to the K3 CFT.) For example, there is a transformation that takes k̂ to the D0-brane charge Z 0 and ẑ to the charge Z 4 for D4-branes wrapping K3. This would give a shift δ · Π = αZ 0 + βZ 4 + κŵ (7.10) which is completely non-perturbative, giving a phase rotation to any given state depending on its D0, D4 and NS5 charges. For the perturbative theory, this is simply a Z p orbifold of the type IIA string on K3 × T 2 by the conjugate twist, which now acts non-trivially on K3 × T 2 (i.e. not just acting on K3).
Perturbative consistency of this then does not depend at all on the parameters α, β, κ and only depends on the choice of twist γ. However, this is still dual to the heterotic construction, and perturbative consistency of the heterotic dual constrains α and β, as above. Similarly, the original IIA version sets κ = 0. Finally, we can instead take the shift t to be generated by the singlet charge K, and take γ ∈ O(Γ 5,21 ). Then the shift becomes δ · Π = κŵ (7.11) for the type IIA string.ŵ is a perturbative charge for the type IIA string, but it is not constrained by IIA modular invariance since the shift vector involves a winding charge but no momentum. In this case, the only constraint is that pκŵ is a lattice vector, so κ = n/p for some integer n < p. Compactifications to four dimensions We now turn to compactifications to four dimensions, which allow more general constructions. Symmetries and automorphisms. The heterotic string compactified on T 6 or type IIA string compactified on K3 × T 2 has, at generic points in the moduli space, a symmetry There is a U(1) 28 gauge symmetry associated with 28 gauge fields, and, formally, a further U(1) 28 symmetry associated with the S-dual gauge fields. In different S-duality frames, different subgroups U(1) 28 ⊂ U(1) 56 will be realised as fundamental gauge symmetries. There are 28 electric and 28 magnetic charges, transforming in the (28,2) representation under O(6, 22) × SL(2). Here we will focus on twists in O(Γ 6,22 ) and not consider twists involving S-duality. The discussion is then very similar to the 5-dimensional case above. We will consider an JHEP10(2019)214 automorphism (γ, t) consisting of a twist γ ∈ O(Γ 6,22 ) and a shift t ∈ U(1) 56 where t commutes with γ. Choosing a sub-lattice Γ 5,21 ⊕ Γ 1,1 ⊂ Γ 6,22 , the symmetry algebra of the theory has a subgroup We can then use a twist γ ∈ O(Γ 5,21 ) from the first factor and a shift t ∈ U(1) 4 from the second factor, and these indeed commute. We will also consider choosing a sub-lattice Γ 4,20 ⊕ Γ 2,2 ⊂ Γ 6,22 , selecting a subgroup of the symmetry algebra and using a twist γ ∈ O(Γ 4,20 ) from the first factor and a shift t ∈ U(1) 8 from the second factor. One class of examples arises in taking a reduction to 5 dimensions of the kind considered in the previous subsection, with a twist γ ∈ O(Γ 4,20 ) and a shift on a circle, followed by a standard reduction (no twist or shift) on a further circle; such cases have been the main focus in this paper. We can also consider reductions by (γ 1 , t 1 ) and (γ 2 , t 2 ) where γ 1 , γ 2 are two commuting twists in O(Γ 4,20 ) and t 1 , t 2 are two shifts in U(1) 8 (see [7] for an analysis of models with two twists). Note that the 8-charges for U(1) 8 transform as a (4,2) under O(2, 2) × SL(2). Using O(2, 2) ∼ SL(2) × SL(2), this is the (2, 2, 2) representation of SL(2) × SL(2) × SL(2). The heterotic string perspective. Consider first the case with a twist γ ∈ O(Γ 5,21 ) from the first factor in (7.13) and a shift t ∈ U(1) 4 from the second factor in (7.13). It is natural to choose a split so that the sub-lattice Γ 5,21 is associated with the heterotic string compactified on T 5 and the sub-lattice Γ 1,1 with a further circle compactification. The charges for the U(1) 4 symmetry are the heterotic momentum k and winding w on the extra circle, the heterotic 5-brane charge z for heterotic 5-branes wrapping T 5 and the Kaluza-Klein (KK) monopole charge q. 
10 Then the general shift vector is given by δ = (α, β, λ, κ) with four components, so that δ · Π = αk + βw + λq + κz . The case with a twist γ ∈ O(Γ 4,20 ) from the first factor in (7.14) and a shift t ∈ U(1) 8 from the second factor in (7.14) is very similar. Choosing the natural split in which the sub-lattice Γ 4,20 is associated with the heterotic string compactified on T 4 and the sublattice Γ 2,2 with a further T 2 compactification, the charges for the U(1) 8 symmetry are the heterotic momenta k i and windings w i on the T 2 , the heterotic 5-brane charges z i for 10 The KK monopole charge arises from solutions of the form R×ALF ×T 5 where R is a timelike direction and ALF denotes an ALF gravitational instanton with charge q (so that for q = 1 we have self-dual Taub-NUT space). The 'extra circle' is the fibre of the ALF gravitational instanton. JHEP10(2019)214 heterotic 5-branes wrapping T 5 and the Kaluza-Klein monopole charges q i , where i = 1, 2 is a coordinate index on T 2 , which has coordinates y i . The charge z i is for a 5-brane wrapping the y i circle and the T 4 , so it is the winding number for the solitonic string from the 5-brane wrapping T 4 . The general shift is then of the form For the models of section 4, perturbative consistency requires α i β i = 1/p 2 . Taking the only non-zero coefficients to have, say, i = 1 reduces this to the previous case. The type IIA string perspective. For the case with a twist γ ∈ O(Γ 5,21 ) from the first factor in (7.13) and a shift t ∈ U(1) 4 from the second factor in (7.13), the natural choice of split has the lattice Γ 5,21 associated with the type IIA string compactified on K3 × S 1 and the lattice Γ 1,1 associated with a further compactification on a circle with coordinate y 1 . In this case the U(1) 4 charges are the momentumk and type IIA windinĝ w on the y 1 circle, the chargeẑ from an NS5-brane wrapping K3 and the y 1 circle, and the KK monopole chargeq associated with the y 1 circle. Then heterotic-type II duality relates these to heterotic charges: k =k, w =ẑ and z =ŵ, q =q. The general shift vector δ = (α, β, λ, κ) gives (7.16) in the heterotic picture and δ · Π = αk + βẑ + κŵ + λq (7.17) for type IIA. This shift again leads to a charge-dependent phase exp(2πiδ · Π) in the automorphism. Level matching of the perturbative type IIA string with α = 0 leads to κ = 0, as in the five-dimensional analysis above but places no constraints on β, λ as they correspond to non-perturbative contributions for the IIA string. Requiring perturbative consistency of both the IIA and heterotic formulations is satisfied (for the models of section 4) with α = β = 1/p, κ = 0 but puts no constraints on λ. The perturbative IIB formulation gives no further constraints. As in the five dimensional case, we can consider acting on a dual pair with a duality transformation. This will transform the charges appearing in the shift, and take the twist to a conjugate one, which for p = 2 might include factors of Ω or (−1) F L . S-duality. To find a constraint on the parameter λ, one could seek a duality that transforms q to a perturbative charge that would enter into the perturbative constraints in the dual theory. Such a duality is provided by the heterotic string S-duality. JHEP10(2019)214 while leaving the twist unchanged. If we were to demand perturbative consistency of this S-dual theory, this would be achieved only if both λ and κ are non-zero with λκ = 1/p 2 once again for the models of section 4. 
We then learn that perturbative consistency of the heterotic string and of the S-dual heterotic string would require all four components of the shift vector to be non-zero, and we could satisfy these requirements by taking However, in this case S-duality doesn't commute with the quotient -the strong coupling behaviour of the N = 2 supersymmetric theory arising from the quotient is not given by the strong coupling behaviour of the original N = 4 supersymmetric theory. Then the constraint λκ = 1/p 2 should not be applied to the original theory, and we can keep κ = 0, as found above. One can see directly why the adiabatic argument fails in this case. Heterotic S-duality corresponds, in type IIA variables, to a double T-duality on the two-torus, sending the torus area A to (α ) 2 /A. The adiabatic argument holds in the limit where the T 2 base is large, hence is not compatible with this duality transformation. The FHSV model revisited. The lattice Γ 5,21 is given by Consider then the automorphism γ given by interchanging two E 8 ⊕ U sublattices and acting as −1 on the remaining sublattice U ⊕ U ⊕ U . This twist γ ∈ O(Γ 5,21 ) can be associated with the first factor in (7.13) and combined with a shift t ∈ U(1) 4 from the second factor in (7.13), with shift vector δ = (α, β, λ, κ). The heterotic string dual of the FHSV model discussed in subsection 6.3 is of precisely this form. With the natural choice of split in which the sub-lattice Γ 5,21 is associated with the heterotic string compactified on T 5 and the sub-lattice Γ 1,1 with a further circle compactification, the shift is δ · Π = αk + βw + λq + κz. Perturbative consistency required both α, β to be non-zero with αβ = 1/4 [12]. In the FHSV model, γ is not taken to act on K3 × T 2 in the way we have referred to as 'natural'. In choosing the sub-lattice Γ 5,21 ⊕ Γ 1,1 ⊂ Γ 6,22 , we take the Γ 1,1 part of the charge lattice to be the one corresponding to D0-brane charge and D4-brane charge (for D4-branes wrapping K3). Then γ acts on K3 through the Enriques involution and on T 2 as a reflection. For the model of [12,43], the shift was taken to be where Z 0 is the D0-brane charge and Z 4 the charges of D4-branes wrapping K3, giving a phase rotation to any given state depending on its D0 and D4 charges. The general heterotic shift δ · Π = αk + βw + λq + κz would correspond to extending the FHSV construction must be extended by taking λ, κ non-zero, giving δ · Π = αZ 0 + βZ 4 + κZ 2 + λZ 6 (7.22) JHEP10(2019)214 where Z 2 is the charge for D2-branes wrapping T 2 and Z 6 is the charge for D6-branes wrapping K3×T 2 . Perturbative consistency of the FHSV construction places no constraint on the four parameters. However, we can instead make the following choice, giving a type IIA dual of the heterotic FHSV model which looks different from the original Enriques Calabi-Yau type IIA compactification. In choosing the sub-lattice Γ 5,21 ⊕ Γ 1,1 ⊂ Γ 6,22 , we now take Γ 5,21 to be the charge lattice for the IIA string on K3 × S 1 , so that γ acts as an involution of K3 × S 1 , with a fixed point locus, and Γ 1,1 is associated with a further circle reduction. Then the shift is δ · Π = αk + βẑ + κŵ + λq (7.23) wherek is the IIA momentum,ẑ is the NS5-brane charge.ŵ is the IIA string winding number andq is the KK monopole charge. The perturbative charges arek,ŵ. In this case, the transformation given by this twist and shift is not quite a symmetry of the IIA string on K3 × T 2 . 
The twist involves a reflection y → −y on the circle in K3 × S 1 and this must be combined with a world-sheet parity transformation Ω to give a symmetry. We then have an orientifold of the IIA string on K3 × T 2 by Ω combined with the shift and twist described above. For the shift δ · Π = αk this is an orientifold analysed in [18] as a dual to the FHSV model. We can consider generalising this by extending the shift to (7.23). Perturbative consistency of the IIA theory then leads to κ = 0 as before. The heterotic dual gives αβ = 1/4. As noted in [18], the adiabatic argument supports the duality between this orientifold and the heterotic dual, but does not apply to the duality between the FHSV model and its heterotic version. This type IIA orientifold is non-geometric, following the analysis of section 2; the action on the K3 CFT is in O(Γ 4,20 ) but not in O(Γ 3,19 ), and the shift corresponding to the second circle has both momentum and winding components. However through heterotic/type IIA duality it is expected to be non-perturbatively equivalent to type IIA compactified on the Enriques Calabi-Yau threefold. Non-geometric constructions The general class of construction we have been discussing consists of a quotient of a string theory background by a twist γ of order p in a duality group O(Γ n,n+16 ) for n = 4 or n = 5 together with a shift t. From the discussion in section 2, when t is a simple shift t : y → y + 2π/p of a circle coordinate y, this can be seen as a special point in the moduli space of a duality twisted reduction, with the dependence of all fields on y given by a continuous duality transformation g(y) ∈ O(n, n + 16) with monodromy γ. If the monodromy transformation acts geometrically, this constructs a bundle over a circle with fibre T 4 or T 5 or K3 or K3 × S 1 . For example, starting from type IIA compactified on acting as a K3 diffeomorphism, the duality twisted reduction can be understood as a geometric compactification of the type IIA string on a K3 bundle over S 1 . More generally, the result is non-geometric. If γ involves T-duality transformations, we have a T-fold and if it includes mirror transformations, we have a mirror-fold. JHEP10(2019)214 Our heterotic construction involved a shift vector δ = (α, β) so that the shift is generated by 2πiδ · Π = αk + βw (7.24) with αβ = 1/p 2 . As we have seen in section 6, this can be thought of as acting as a phase rotation on a state with momentum k and heterotic winding number w, or as giving a shift on the 2-dimensional doubled circle with coordinates y,ỹ with y → y + 2πα,ỹ →ỹ + 2πβ. The theory can be formulated as a double field theory with fields depending on both y andỹ. Then this is a special point in the moduli space of a duality twisted reduction in which the dependence of all fields on y,ỹ is given by a continuous duality transformation g(y,ỹ) ∈ O(n, n + 16) with monodromy γ: g(y,ỹ) −1 g(y + 2πα,ỹ + 2πβ) = γ (7.25) (This is a special case of a more general construction in which there could be different monodromies in the y andỹ directions.) For geometric monodromy in GL(n, Z) acting as a diffeomorphism of T n , this constructs a T n bundle over the doubled circle, while for a T-duality monodromy in O(Γ n,n ) this constructs a bundle of a 2n-dimensional doubled n-torus over the doubled circle, which gives the geometric realisation of a T-fold in the doubled formalism [8]. 
For a general monodromy in O(Γ n,n+16 ), this is a bundle with fibre the heterotic doubled torus T n,n+16 of dimension 2n + 16 over the 2-dimensional doubled circle, which can be regarded as a configuration for heterotic double field theory. Our general construction involved further charges Q I , so that the shift vector was of the form δ = (α, β, λ I ) 2πiδ · Π = αk + βw + λ I Q I (7.26) These too can be geometrised by going to an extended field theory with further coordinates u I on which the charges Q I act as translations: Then in the extended field theory, the coordinates that the fields depend on include y,ỹ, u I and the shift t acts as a translation on y,ỹ, u I , resulting in a generalised bundle over a base space (typically a torus) with coordinates y,ỹ, u I . Conclusion In this paper, we have proposed a four-dimensional N = 2 non-perturbative duality relating non-geometric Calabi-Yau compactifications of the type IIA superstring to T-fold compactifications of the heterotic superstring and have shown that this duality follows from the adiabatic argument. The non-geometric type II backgrounds were constructed in [7] as K3 fibrations over T 2 with monodromy twists associated with the action of mirrored K3 automorphisms on the K3 CFT. The K3 automorphisms are realised in the heterotic string as element of the T-duality group, and the heterotic duals are T 4 fibrations over T 2 with T-duality monodromy twists. At points in the moduli space which are fixed under the JHEP10(2019)214 action of the monodromy automorphisms, the construction reduces to an asymmetric orbifold on the heterotic side and to an asymmetric Gepner model in type IIA. At these fixed points, there is no enhanced gauge symmetry but there is enhanced discrete symmetry. These models preserve N = 2 supersymmetry in four dimensions. The automorphism acts on the lattice Γ 4,20 by an isometry in O(Γ 4,20 ) that leaves no sublattice of Γ 4,20 invariant. For the heterotic string on T 4 , all of the four left-moving and twenty right-moving chiral bosons transform. On the type II side of the duality, the D-brane charge lattice is Γ 4,20 and the fact that no sublattice is left invariant by the twist means that all BPS Dbrane states are projected out by the orbifold. This is consistent with the fact that there are no Ramond-Ramond ground states in these theories, since all space-time supersymmetry comes from the left-movers. The naive heterotic dual of the type IIA construction is not modular invariant. We found a modification of the heterotic construction that is modular invariant, and this modification led in turn to a non-perturbative modification of the type IIA model. A similar story applies to the FHSV model. For the type IIA string, the modification can be viewed as necessary for non-perturbative consistency. Although we do not have a complete non-perturbative formulation, it seems that a necessary condition for the non-perturbative consistency of a model should be that the theory is modular invariant in all possible duality frames, and in any given frame this can require non-perturbative corrections, as we have seen. Our models are perturbatively consistent in the IIA, IIB and heterotic duality frames. Acting with a duality transformation then takes us to a new perturbative theory (which can also be thought of as choosing a different modulus of the original theory as a coupling constant) and we again require consistency in this new perturbative theory. Let us explore this further. 
It is believed there is a non-perturbatively consistent string solution that can be treated as a perturbation theory in terms of the IIA coupling constant, the IIB coupling constant or the heterotic coupling constant. The perturbative heterotic theory is the heterotic string compactified on T 6 , while the perturbative IIA (IIB) theory is the IIA (IIB) string compactified on K3 × T 2 . The theory is believed to have an exact non-perturbative symmetry (7.12) and we are interested in taking quotients of the theory by a Z p subgroup of this. The key question is which Z p subgroups lead to consistent theories. We have seen that different restrictions arise from requiring perturbative consistency as a IIA, IIB or heterotic theory. Acting with a symmetry in (7.12) maps the Z p subgroup to a conjugate Z p subgroup embedded differently in the symmetry group and gives a new quotient. The new quotient will not in general be dual to the original one, but in an important class of cases, such as the ones studied here in which the adiabatic argument can be applied, this gives a new dual of the original construction. Perturbative consistency of each such dual theory gives further constraints. In this way, we find a set of necessary conditions for the consistency of the quotient. Knowing whether these are sufficient would require an understanding of the non-perturbative theory, but these conditions give us important information about the non-perturbative theory that it would be interesting to investigate further. The Z p symmetries we have been quotienting by are generated by a transformation (t, γ) consisting of a twist γ ∈ O(Γ 5,21 ) and a shift t ∈ U(1) 4 (or a twist γ ∈ O(Γ 4,20 ) and JHEP10(2019)214 a shift t ∈ U(1) 8 ). The adiabatic argument led us to use the same twist in each duality frame, but we found different consistency conditions on the shift in different duality frames. In our original IIA construction, the shift was a simple order-p shift of a circle coordinate y → y + 2π/p and this was sufficient for IIA modular invariance. For the heterotic dual, heterotic modular invariance required also shifting the T-dual coordinateỹ →ỹ + 2π/p, or, equivalently, the action of t on a state with momentum k and winding w on the circle was to multiply by a phase exp(2πi(k + w)/p). Transforming back to the IIA theory, the heterotic winding number w is mapped to the NS5-brane wrapping number, and so action of t on the IIA string involves a phase depending on the NS5-brane charge, giving a nonperturbative modification of the theory. The general picture involves a phase depending on four charges for a shift t ∈ U(1) 4 or eight charges for a shift t ∈ U(1) 8 , and acting with a duality transformation can change which charges they are. For example, the FHSV construction involved a phase depending on the D0-and D4-brane charges, while the dual we found had a phase depending on the type IIA momentum and NS5-brane charge. We now return to Harvey and Moore's question. There are two classes of N = 2 heterotic toroidal orbifolds with a known type II dual: quotients by symmetries that preserve a D-brane charge lattice, corresponding to IIA on Calabi-Yau three-folds, and quotients that do not preserve any charge lattice, corresponding to non-geometric compactifications based on mirrored automorphisms. 
In general, an orbifold of the heterotic string on T 4+n by a symmetry G is mapped to an orbifold of the type IIA string on K3 × T n by the dual G which now acts on the type IIA string; this will require that the orbifold is non-perturbatively consistent, so that in particular it is modular invariant in both the heterotic and type IIA duality frames. Consider for example a general Z p orbifold of the heterotic string on T 4 × S 1 by (γ, t), where γ ∈ O(Γ 4,20 ) acts as a heterotic T-duality and the shift gives a phase depending on the momentum and the heterotic winding number on the S 1 . This is then mapped to an orbifold of the type IIA string on K3 × S 1 by the transformation (γ, t) in which γ acts as a K3 automorphism and t gives a phase depending on the momentum on the S 1 and the NS5-brane charge for NS5-branes wrapping K3 × S 1 . In some cases the type IIA dual is a CY compactification, but in general it will lead to a non-geometric construction. It will be interesting to explore this duality further, for example for the models of [2][3][4]. A Partition function computations. ϑ functions. In this section, we give our conventions for ϑ functions and recall some of their modular properties that are useful in our computations. We define the Jacobi ϑ function with characteristic as follows, where α, β ∈ R and where q is defined, as usual, by q := exp(2iπτ ). ϑ also admits the product representation [44] ϑ α β (τ |v)/η(τ ) = e iπα(v+β/2) q The well-known modular properties of the ϑ functions make them a powerful tool in constructing modular invariant quantities. Their behaviour under the generators of SL(2, Z) is given by
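For orientation, since conventions for ϑ functions differ between references, the following LaTeX block records one standard choice for the series definition with characteristics. It is consistent with the prefactor e^{iπα(v+β/2)} visible in the product representation quoted above, but it is an illustrative convention rather than a quotation of the authors' equation.

```latex
% One common convention for the Jacobi theta function with characteristics
% (illustrative only; normalisations of alpha, beta vary between references).
\begin{equation*}
  \vartheta\!\begin{bmatrix}\alpha\\ \beta\end{bmatrix}\!(\tau\,|\,v)
  \;=\; \sum_{n\in\mathbb{Z}}
        q^{\frac12\left(n+\frac{\alpha}{2}\right)^{2}}
        \, e^{2\pi i\left(n+\frac{\alpha}{2}\right)\left(v+\frac{\beta}{2}\right)},
  \qquad q := e^{2\pi i\tau}.
\end{equation*}
```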
23,286.2
2019-06-05T00:00:00.000
[ "Physics" ]
DETERMINATION OF THE REGULARITIES OF THE SOIL PUNCHING PROCESS BY THE WORKING BODY WITH THE ASYMMETRIC TIP Svyatoslav Kravets, Doctor of Technical Sciences, Professor, Department of Building, Road, Melioration, Agricultural Machinery and Equipment, National University of Water and Environmental Engineering, Soborna str., 11, Rivne, Ukraine, 33028, E-mail<EMAIL_ADDRESS>Vladimir Suponyev, Doctor of Technical Sciences, Associate Professor*, E-mail<EMAIL_ADDRESS>Sergii Balesnyi, Director, Design Institute of Transport Infrastructure LTD, Otakara Yarosha str., 18, Kharkiv, Ukraine, 61045, E-mail<EMAIL_ADDRESS>Valery Shevchenko, PhD, Associate Professor*, E-mail<EMAIL_ADDRESS>Alexander Yefymenko, PhD, Associate Professor*, E-mail<EMAIL_ADDRESS>Vitaliy Ragulin, PhD, Associate Professor*, E-mail<EMAIL_ADDRESS>*Department of Construction and Road-Building Machinery, Kharkiv National Automobile and Highway University, Yaroslava Mudrogo str., 25, Kharkiv, Ukraine, 61002. The presence of analytical dependencies describing the process of static soil puncture by a working body with a conical asymmetric tip is necessary to create installations with the ability to control the trajectory of the soil puncture. The paper considers the features of the process of interaction of an asymmetric conical tip with the ground. Analytical relationships were obtained to determine the soil reactions during a static puncture, the deviation of the head trajectory from a straight line, the size of the soil compaction zone and the magnitude of the destructive force that acts on adjacent communications and other underground objects. It was found that as the displacement of the apex of the cone from its axis increases, for example from 0.02 m to 0.08 m with a borehole diameter of 0.2 m, the soil resistance increases almost four times. The greatest resistance is achieved when piercing a hard sandy loam. It was found that with an increase in the displacement of the apex of the tip cone, the deviation of the trajectory increases. The piercing head achieves the greatest deviation from the straight trajectory of movement with a sharper cone and a greater asymmetric deviation of its apex; for example, in hard sandy loam the deviation can be up to 0.17 m over a span of 10 m. It was found that the size of the soil destruction zone will be almost 1.8 times larger than for a tip in the form of a symmetrical cone and reaches 8 to 12 borehole diameters, depending on the type of soil. The maximum pressure on adjacent objects can reach from 0.06 MPa in hard-plastic clay to 0.09 MPa in hard sandy loam. The calculated dependences obtained for determining the power and technological parameters depending on the geometric dimensions of the asymmetric tip of the working body can be used to create installations with a controlled static puncture for use in the most common soil conditions. Introduction Trenchless laying of engineering communications is actively spreading in all countries of the world. Among the existing methods for the formation of wells for the implementation of this technology, the method of static ground penetration is popular. The main disadvantage of the traditional puncture method is the lack of accuracy of movement of the piercing head in the soil mass. To achieve the aim, it is necessary to constantly adjust the trajectory of movement.
Motion control is possible through the use of a general-purpose head with an asymmetric tip, on which a combined translational-rotational motion is imposed by the power plant. This process requires a lot of effort and energy consumption. Therefore, an urgent task is to improve the methods for calculating the resistance of the soil to a puncture and to ensure the control of the trajectory of movement of the soil-punching head with an asymmetric cone. This is carried out by determining the magnitude of its deviation depending on the type of soil and its physical and mechanical properties. Literature review and problem statement The development of urban infrastructure constantly requires the construction of new engineering networks and imposes certain restrictions on builders [1]. The laying of underground communications by digging trenches causes significant social problems and economic costs that arise from the stoppage of traffic and the complication of freight and passenger flows. Therefore, the trenchless method of laying underground utilities is becoming more and more widely used, including when laying large-diameter pipelines, which can be performed in combination with microtunnelling methods [2]. The improvement of construction technologies goes in different directions. One of the ways is aimed at increasing the efficiency of the system controlling the trajectory of movement of the working body in the soil, by increasing the accuracy of determining its position in the soil and by creating devices for indicating obstacles, such as cables and pipelines, in the path of movement [3]. Effective laying of underground utilities by the trenchless method is ensured by correctly selected technologies for creating horizontal wells, which is considered in [4]. The main disadvantage of these technologies is their cost, which comes from the high cost of drilling equipment and the cost of work. Of the various methods available on the market for obtaining technological voids in soil, static soil puncture is simple and cheap [5]. But this comparison can only be valid for short track sections within 25-30 m among the methods that form straight wells. The laying of pipelines over long distances by the formation of a controlled leader puncture is considered by the authors in [6]. But the nature of the tip and the influence of its parameters on the process are not considered. The general laws governing the processes of ground penetration and the formation of horizontal wells are described in [7]. The process of interaction of the soil-piercing working body with the soil was considered in work [8].
But the studies carried out by the author of this work are aimed only at increasing the efficiency of the process of penetration into the soil of a working body with a symmetrical conical tip, but do not give answers about increasing the accuracy of penetration or changing the trajectory of the working body in the soil. For the same reason, the research results in [9] have a narrow distribution of results only within short distances. The process of interaction of an asymmetric tip with the ground during a controlled puncture is not considered in [10]. The destructive effect of radial soil compaction during the creation of a well by a static method on adjacent communications, road foundations and foundations of structures is considered in [11]. However, calculations of the size of the soil compaction zone around the well are not given. The need to take into account soil compaction when laying a route near underground structures and utilities is also considered in [12]. But the determination of the forces acting on nearby communications from the elastic state of the soil during its compaction with an asymmetric tip is irreducible. It is possible to reduce the destruction zone by using the pipe pushing method. Then, according to [13], soil compaction outward will not occur. However, the recommendations obtained in the work apply only to working bodies with ring-shaped tips. Investigations to determine the force of penetration of the soil with a conical tip are given in [14]. The dependence of the frontal resistance of the soil on its geometric parameters is given [15]. It should be determined that the calculated dependences obtained in [14,15] are of an empirical nature and cannot be used for the case of a soil puncture by a working body with an asymmetric tip. In contrast to [16], in [17] the establishment of soil reactions and the magnitude of deviation from the axial movement of the working body with such a tip were solved analytically, taking into account the known standard soil properties. The influence of the pipe material on its pasture, which determines the rational well trajectory, is considered in [18]. But studies on the impact on the process of puncturing the soil with a conical asymmetric tip are not given. Analytical models for calculating the technological and design parameters of equipment with an asymmetric tapered tip were not found in the literature search. The aim and objectives of research The aim of research is to establish the regularities of the process of ground penetration by a working body with an asymmetric tip and, on their basis, to develop technological and constructive theoretical calculated dependencies for installations for trenchless laying of underground communications. To achieve this aim, it is necessary to solve the following objectives: -to determine the force of soil resistance to displacement and lateral deflecting force, which act on the asymmetric conical tip; -to determine the amount of deflection of the head with an asymmetric cone as it moves in the soil mass; -to establish the size of the destruction zone around the asymmetric conical tip and determine the force of soil pressure on the adjacent underground communications. Materials and methods for investigating the process of soil puncture by a soil-piercing working body with an asymmetric conical tip The theoretical research methods carried out are based on the theories of soil mechanics and their cutting, soil puncture and traditional methods of analyzing research results. 
Confirmation of the reliability of the results of the obtained individual provisions of the theory of the process occurred by comparing them with practical and experimental data. 5. The results of the study of the process of puncturing the soil with a working body with an asymmetric conical tip 1. Determination of the force of soil resistance to displacement and lateral deflecting force that act on an asymmetric tapered tip The calculated scheme of the action of the soil resistance forces on the asymmetric conical tip of the soil-piercing working body is shown in Fig. 1. Design scheme for determining the action of soil resistance forces on a tip with an asymmetric cone To calculate the force of the axial force of resistance of the soil to the puncture and its dependence on the value of the displacement of the apex of the cone, which let's represent as the distance x k from its projection onto the x-axis at point C ( Fig. 1). According to the previously obtained method [17], it is first necessary to determine the regularity of the change in soil density along the height of the cone on the basis of the law of equality of the soil masses before and after its puncture. To do this, it is necessary to determine the areas of the lateral surface of the asymmetric cone from the condition of limiting its sharpening, at which the east of the soil from the walls of the cone should be provided. To do this, through its geometric dimensions, consider the tangents of the angles of deflection of the apex of the cone and tgβ 2 , which have the form: where R -radius of the base of the cone; H -height of the cone; x k -value of the displacement of the apex of the cone along the x axis. To prevent the formation of a soil core of compaction at the apex of the cone, when the soil is guaranteed to come off the forming surface, its sharpening should be at least 2β 1 ≤ 50° [5], that is: The lateral (half) area of an elementary truncated cone is: where ∂d z -elementary increase in the diameter of the truncated cone at the height ∂z. Thus, the selected area will be equal to: Let's determine the law of change of soil density by the height of the cone on the basis of the law of equality of soil masses before and after the puncture: So, Then the normal soil pressure in each cross section of the cone will be equal to: where E C s s c n = + ( )⋅ 1 ω ρ ρ -compression modulus of soil deformation; ω -natural soil moisture; С c -soil compression coefficient; ρ s -density of the solid phase of the soil (the density of the soil, provided that there are no pores in it); ρ z -variable soil density along the height of the cone is proportional to the change in the cross-sectional area of the cone; ρ n -density of the soil in its natural state. To determine the frontal resistance to punching Р p , let's represent it in differential form: Taking into account expressions (3)-(6) after intermediate transformations let's obtain: then, after integrating expression (11), let's obtain: The transverse (deflecting) force Р х was determined in a similar way. Its differential expression is: . (13) Integrating (13) let's obtain: If E D k = 2, then this deflecting force would be: The dependences of the forces of frontal resistance of the soil (12) on the magnitude of the sharpening of the asymmetric cone in the form of the ratio of its radius of the base, equal to half the diameter of the piercing head to the height for different soils are shown in Fig. 2. 
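The explicit expressions for tgβ 1 and tgβ 2 are referenced but not reproduced above, so the sketch below assumes the natural geometric reading of Fig. 1 (an apex displaced by x k , giving tanβ 1 = (R + x k )/H on the wide side and tanβ 2 = (R − x k )/H on the opposite side) and checks the sharpening condition 2β 1 ≤ 50° quoted in the text. The numbers are illustrative only and do not reproduce the authors' calculations.

```python
import math

def face_angles(R, H, x_k):
    """Half-angles (degrees) of an asymmetric cone.

    Assumed geometry: apex shifted by x_k from the axis, so the wide-side and
    narrow-side face angles are atan((R + x_k)/H) and atan((R - x_k)/H).
    R, H, x_k are in metres.
    """
    beta1 = math.degrees(math.atan((R + x_k) / H))
    beta2 = math.degrees(math.atan((R - x_k) / H))
    return beta1, beta2

def satisfies_sharpening(R, H, x_k, limit_deg=50.0):
    """Sharpening condition 2*beta1 <= 50 deg quoted in the text, meant to
    guarantee that the soil comes off the cone surface (no compaction core)."""
    beta1, _ = face_angles(R, H, x_k)
    return 2.0 * beta1 <= limit_deg

if __name__ == "__main__":
    R = 0.1  # base radius for a 0.2 m borehole
    for x_k in (0.02, 0.04, 0.06, 0.08):   # apex displacements discussed in the text
        # pick a height so the wide-side half-angle is about 24 deg, inside the limit
        H = (R + x_k) / math.tan(math.radians(24.0))
        b1, b2 = face_angles(R, H, x_k)
        print(f"x_k={x_k:.2f} m  H={H:.3f} m  beta1={b1:.1f}  beta2={b2:.1f}  "
              f"2*beta1<=50: {satisfies_sharpening(R, H, x_k)}")
```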
The dependences of the lateral (deflecting) force Р х (14) on the value of the displacement of the apex of the asymmetric cone for different soils and diameters of piercing heads are shown in Fig. 3. The graphs shown in Fig. 2, 3 were obtained for the most common types of soils in which a cavity for communications can be formed by piercing with the static method. The calculations included the following initial data [17]: soil deformation moduli: sandy loam Е s = 1.36 MPa, loam Е s = 0.892 MPa; clay Е s = 0.63 MPa. The coefficients of friction of soil against steel were also taken into account: for sandy loam f = 0.532, for loam f = 0.424, for clay f = 0.325. As can be seen from the graphs in Fig. 2, with an increase in the sharpening of the cone within 50°, the resistance to soil puncture decreases by 2.2-2.5 times. And from Fig. 3 it can be seen that when the displacement of the cone apex x k increases from 0.02 m to 0.08 m, for example, with a borehole diameter of 0.2 m, the value of the deflecting force increases almost threefold. It is also seen that the greatest resistance and deflecting force will be when piercing solid sandy loam. This is 3.6 times more than in hard-plastic clay and 2 times more than in semi-hard loam. The results obtained can be used when choosing the technical characteristics of power plants in accordance with the technological tasks and the soil environment in which engineering communications must be laid. 2. Determination of the deflection value of the head with an asymmetric cone during its advance in the soil mass In the process of rotation of the conical asymmetric tip around its axis and the action of an axial force on it, the tip moves along a rectilinear path. After the termination of the rotational movement, the tip, under the action of the soil reaction, is deflected towards the fixed direction of displacement of the cone. As was established in [17], its maximum deviation depends on the design parameters of the tip and the pushing rods, their mechanical properties and the physical and mechanical properties of the soil being pierced. For this, the end of the drive rod is regarded as a beam on a resilient base on which the reaction of the soil acts. The equation for determining the deflection for such a problem has the form [19]: where Е b -modulus of elasticity of the rod during bending (E b = 2•10 7 N/cm 2 ); І -the moment of inertia of the cross-section of the tip; β п -coefficient depending on the ratio of the stiffness of the rod and the elastic foundation (soil), [19]: where k b -the bed (base) coefficient (for soils of medium density, k b = 5-50 N/cm 2 ), [20]. For an annular section of the working body and taking into account dependencies (14) and (16), equation (15) after rearrangement will take the form: where γ = D/d -the ratio of the outer diameter of the tip to the inner diameter. According to dependence (14), the deflecting force will reach its maximum value if the cone is embedded to its full height. In this case, the tip will deviate from the rectilinear trajectory by a distance ∆ r (17). To deflect the tip from the rectilinear path by a certain distance, it needs to travel a certain length without torque transmission (without rotation). Their ratio, depending on the displacement of the cone, can be written as: From the obtained graph (Fig. 4) it can be seen that with an increase in the displacement of the apex of the tip cone, the deviation of the trajectory increases.
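Equations (15)-(17) themselves are not reproduced above. As a rough orientation only, the sketch below uses the classical beam-on-elastic-foundation (Winkler) relations that handbooks of this kind rely on: the annular second moment of area I = π(D^4 - d^4)/64, the foundation parameter β = (k b /(4E b I))^(1/4), and, as an assumed closed form, the textbook end deflection 2P x β/k b of a semi-infinite beam loaded by a transverse force at its end. None of these are the authors' equations, and the input numbers are invented for illustration.

```python
import math

def annular_moment_of_inertia(D, d):
    """Second moment of area of a ring cross-section, cm^4 (standard formula)."""
    return math.pi * (D**4 - d**4) / 64.0

def foundation_parameter(k_b, E_b, I):
    """Winkler parameter beta = (k_b / (4 E_b I))**0.25, 1/cm.
    Assumes k_b is already expressed per unit length of the rod (N/cm^2)."""
    return (k_b / (4.0 * E_b * I)) ** 0.25

def end_deflection(P_x, k_b, beta):
    """Assumed textbook deflection of a semi-infinite beam on an elastic
    foundation under a transverse end force P_x: delta = 2 * P_x * beta / k_b.
    This stands in for the authors' equation (17), which is not reproduced."""
    return 2.0 * P_x * beta / k_b

if __name__ == "__main__":
    E_b = 2.0e7          # N/cm^2, steel, value quoted in the text
    k_b = 20.0           # N/cm^2, medium-density soil (text range 5-50)
    P_x = 5.0e3          # N, illustrative deflecting force
    I = annular_moment_of_inertia(D=6.0, d=4.0)   # invented rod diameters, cm
    beta = foundation_parameter(k_b, E_b, I)
    print(f"I = {I:.1f} cm^4, beta = {beta:.4f} 1/cm, "
          f"end deflection ~ {end_deflection(P_x, k_b, beta):.1f} cm")
```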
The greatest deviation from the straight trajectory of movement when piercing the head is achieved with a greater deviation of the apex of the asymmetric tip compared to the symmetrical one, that is, with a smaller value of the displacement х k . It is also possible to see that the smaller the angle of sharpening of the tip cone, the less will be the deflection of the working member. Thus, depending on the physical and mechanical properties of the soil and the elasticity of the push rod, it is possible to determine the rational value of the deflection of the tip of the asymmetric cone of the tip by piercing the head. 3. Establishment of the zone of destruction and determination of the force of soil pressure on the adjacent underground communications The conical tip, in which the vertex is offset by х k relative to the center of the base, has a variable length of the cone, which depends on the variable radius of rotation R j (Fig. 1). The radius R j is determined from ∆OMK based on the cosine theorem: This radius changes with the height of the cone: where R D = 2 -radius of the base of the cone; H -height of the cone. Lets determine the size of the soil destruction zone by an asymmetric cone on the basis of the law of equality of soil masses before and after a puncture: where ρ n -natural soil density to destruction; ρ av -average soil density in the destruction zone after puncture [6]: where λ = 4.0...6.0 is a coefficient that depends on the type of soil and the depth of the puncture [6]. From equation (2), taking into account (3), after intermediate transformations, let's determine the size of the soil destruction zone: That is, the destruction zone depends on the angle of rotation j, which means that the density of the soil in each longitudinal section depends on its angle of rotation and the distance from the base of the cone. The maximum value of the destruction zone will be at z = 0. In this case, if х k = 0; D R p = 2λ . х k = R and j = 0; λ Thus, if the top of the cone is displaced by the value x k = R, then its destruction zone is 2 times larger than the destruction zone of a symmetric cone with the same base radius. Determine the soil pressure of the asymmetric cone on underground utilities. In the first approximation, the regularity of the change in density ρ х can be assumed to be linear depending on the distance from the side wall of the hole along the fracture zone to the value of D p [5] (Fig. 5): where ρ max -maximum soil density in the sidewall of the well. Considering that: Then the pressure of compacted soil on underground utilities is equal to: The maximum pressure on underground communications will act if x = 0: If the average soil density in the destruction zone is, according to experimental data for sandy-clayey soils, ρ av = (1.05...1.1) ρ n [5], then the maximum pressure on underground utilities is q max = (0.09...0.17) E s . Discussion of the results of the study of the process of controlled soil puncture with an asymmetric conical tip The obtained understanding of the processes of soil puncture made it possible to establish the calculated dependences for determining the components of the soil resistance forces from the advance of the tip in the massif -the frontal resistance (12) and the lateral force (14). As the calculations of the frontal resistance of the soil puncture have shown (Fig. 2), the process involves the application of significant forces on the equipment. 
The greatest resistance occurs when piercing a semi-hard sandy loam, which is 3.6 times more than in a refractory clay and in fact 2 times more than in a semi-hard loam. It was also found that with an increase in the sharpening of the cone within 50°, the resistance to puncture of the soil decreases by 2.2-2.5 times. The obtained analytical dependence (14) for determining the deflecting force made it possible to determine (Fig. 3) that when the deflection of the cone is from 0.02 m to 0.08 m with a borehole diameter of 0.2 m, the value of the deflecting force increases by almost three times. When determining the deviation of the working body with an asymmetric conical tip (18), the rigidity of the working body structure and the properties of the soil medium in the form of the bed coefficient were taken into account. From the above graph (Fig. 4) it is shown that the deviation value may differ by more than 4 times depending on the displacement of the top of the cone and the types of soil that is pierced. The definition of the soil compaction zone around the asymmetric tip and the destructive effect from the stress state of the soil on the adjacent communications or other underground structures was established using the law of conservation of soil masses before and after compaction. The calculated dependence for determining the compaction zone for various parameters of the asymmetric cone and soil conditions is described by formula (23), and the pressure of stressed soil on nearby communications is determined by formula (27). An understanding of their significance is important for the practical use of static soil puncture when laying engineering communications along a complex trajectory. All the dependences obtained take into account the parameters of the asymmetric conical tip and the normative data on the physical and mechanical properties of soils and are analytical in nature. This is the main difference from other studies that have been devoted to the process of ground penetration and are of an empirical nature, which limits their scope of use [5,8,15,16]. The proposed analytical dependences make it possible to conduct a comprehensive qualitative analysis of the influence of the main factors that determine the features of the soil puncture by an asymmetric cone. The practice of using installations for static soil perforation and theoretical and experimental studies of other authors [5,8,15,16,[21][22][23] testify to the correspondence of the calculated values obtained to the real data. The results obtained regarding existing machines and installations for trenchless laying make it possible to improve their technological processes and working equipment. The re-commendations are limited to the conditions of development of thawed soils, its most common types. They do not relate to the conditions for laying communications in frozen soils, in soils of increased hardness, high humidity and in sands. Their development requires additional means of intensifying work processes, which require appropriate further research. Conclusions 1. The provided theoretical substantiation and the regularities of the soil puncture process were established, at the basis of creating dependencies for determining the components of the soil resistance force to the advancement of the asymmetric conical tip of the working body. 
The obtained analytical dependencies made it possible to determine the influence of the determining factors on the process, such as the design parameters of the working equipment and the physical and mechanical properties of the soil environment. It was found that with an increase in the sharpening of the cone within 50°, the resistance to soil puncture decreases by 2.2-2.5 times. When the deflection of the cone is from 0.02 m to 0.08 m with a borehole diameter of 0.2 m, the value of the deflecting force increases almost threefold. The greatest resistance and deflecting force will be when piercing hard sandy loam, which is 3.6 times more than in refractory clay and in fact 2 times more than in semi-hard loam. 2. The obtained calculated dependence for determining the deflecting force made it possible to determine the amount of deflection of the head with an asymmetric conical tip. At the same time, the design features of the soil-piercing head and the pushing rod were taken into account, the properties of the material from which they are made and the physical and mechanical properties of soils are pierced. It was found that effective control of the head movement in the soil is possible if the angle at the tip of the tip cone is less than 50°. This is a condition for creating a core of the seal, which in its shape approaches a symmetrical cone, on which forces in space are balanced, and which cannot affect the deflection process of the tip. 3. A theoretical substantiation of the process of creating a soil destruction zone around the asymmetric tip is given and its dimensions are determined. It was found that on the opposite side of the displacement of the cone, this zone is almost 2 times larger in comparison with the usual tip in the form of a symmetrical cone. The dependence for determining the regularity of the change in soil pressure at a distance from the side wall of the hole along the destruction zone is based on a linear equation. On this basis, the values of the maximum soil pressure on adjacent communications were obtained, which is determined by the volumetric deformation of each type of soil, which reaches 0.06 MPa in hard-plastic clay and 0.09 MPa in solid sandy loam. Acknowledgement The team of authors expresses sincere gratitude to the staff of the Kharkiv National Automobile and Highway University and the Rivne National University of Water Management and Environmental Management, who, together with the authors, take part in scientific research to improve machines and installations for trenchless laying of underground communications.
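As a closing numerical cross-check of the headline figures above (a destruction zone of 2λR for a symmetric cone, roughly doubling when the apex displacement reaches x k = R, and a maximum pressure band q max = (0.09...0.17)·E s ), the short sketch below evaluates them for the soil moduli listed earlier. The linear interpolation between the symmetric and fully displaced cases is an assumption made only for illustration.

```python
def destruction_zone_diameter(R, lam, x_k=0.0):
    """Destruction-zone size around the tip, m.

    Symmetric cone (x_k = 0): D_p = 2 * lam * R, as stated in the text; the text
    also states the zone roughly doubles when the apex is displaced by x_k = R.
    Intermediate displacements are interpolated linearly (illustration only).
    """
    frac = min(max(x_k / R, 0.0), 1.0)
    return 2.0 * lam * R * (1.0 + frac)

def max_pressure_band(E_s_mpa):
    """Maximum pressure on adjacent utilities, q_max = (0.09...0.17) * E_s, MPa."""
    return 0.09 * E_s_mpa, 0.17 * E_s_mpa

if __name__ == "__main__":
    R, lam = 0.1, 5.0                                            # 0.2 m borehole, lambda in 4..6
    soils = {"sandy loam": 1.36, "loam": 0.892, "clay": 0.63}    # E_s, MPa (from the text)
    for x_k in (0.0, 0.05, 0.10):
        d_p = destruction_zone_diameter(R, lam, x_k)
        print(f"x_k = {x_k:.2f} m -> destruction zone ~ {d_p:.2f} m "
              f"({d_p / (2 * R):.1f} borehole diameters)")
    for name, E_s in soils.items():
        lo, hi = max_pressure_band(E_s)
        print(f"{name:10s}: q_max ~ {lo:.2f}...{hi:.2f} MPa")
```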
6,381.2
2021-04-20T00:00:00.000
[ "Materials Science" ]
Electric Aharonov-Bohm effect without a loop in a Cooper pair box We predict the force-free scalar Aharonov-Bohm effect of a Cooper pair box in an electric field at a distance without forming a closed path of the interfering charges. The superposition of different charge states plays a major role in eliminating the closed loop, which is distinct from the original topological Aharonov-Bohm effect. The phase shift is determined by the charge-state-dependent local field interaction energy. In addition, our proposed setup does not require a pulse experiment for fast switching of a potential, which eliminates the major experimental obstacle for observing the ideal electric Aharonov-Bohm effect. I. INTRODUCTION A charge moving in an external electromagnetic field exhibits topological quantum interference known as the Aharonov-Bohm (AB) effect 1 . An intriguing aspect of the AB effect is that the appearance of a phase shift does not require local overlap of the particle and external field. For this reason, the AB effect has been regarded as a pure topological phenomenon that cannot be described in terms of the local actions of physical variables. This property also implies that a loop geometry is essential for its observation. The Aharonov-Casher (AC) effect 2 , a related topological quantum phenomenon, describes the phase shift of a neutral particle with a magnetic moment moving around a charged rod. The AC effect can be regarded as the dual of the magnetic AB effect in that the roles of the charge and magnetic flux (moment) are reversed (see, e.g., Refs. 3 and 4). For the AC effect, a closed loop of the particle's path is not always required for its realization. For instance, the AC phase shift can be observed in the interference of two opposite magnetic moments of a neutral particle without dividing the particle's path. This "loop-free" interference had been predicted by Anandan 5 before the prediction of the topological AC effect and was experimentally demonstrated 6 . Interference can be achieved, as the AC interaction Lagrangian (and the phase accumulation) depends on the magnetic moment. Together with loop-free AC interference, the duality of the AB and AC effects poses an interesting question as to whether we can find an AB analogue of loop-free interference. Exchanging the roles of the magnetic moment and electric charge in the loop-free AC effect, the corresponding loop-free AB effect should appear. Basically, this is possible in a superposed state of different charges, as the AB interaction depends on the charge of the interfering particle. However, two key issues should be resolved in order to achieve loop-free AB interference. First, an ordinary charged particle cannot form a superposition of different number states, and is inappropriate for our purpose. This problem can be overcome by utilizing a superconducting condensate composed of the superposition of different numbers of Cooper pairs. The second problem is that loop-free interference cannot be described by a potential difference across two different positions (which is the case in the topological AB effect), as the test particle's wave packet would neither split nor form a closed path. Instead, the phase shift should be determined by the electrostatic energy difference (in the electric AB effect) between different charge states at the same position. 
We will show that this energy difference is described by the geometric potential, defined on the basis of the "Lorentz-covariant field interaction (LCFI)" approach 4,7 , whereas the magnitude of the energy difference is ambiguous in the conventional potential-based framework. A single Cooper pair box (SCB) (see, e.g., Ref. 8), composed of a superposed state of two different charges, is an ideal system for its observation. We predict a loop-free electric Aharonov-Bohm (EAB) effect in an SCB; a relative phase shift between two charge states appears in an SCB influenced by an external electric field at a distance. The magnitude of the phase shift is proportional to the difference in the electrostatic energies of the two charge states. In addition, we point out that the EAB effect under the ideal condition (that the charge and the external field do not overlap) can be more easily realized in this setup, as it does not require fast switching of a potential, the major technical difficulty for realizing the original force-free EAB interference in a two-path interferometer. II. FIELD INTERACTION APPROACH TO THE ELECTRIC AHARONOV-BOHM EFFECT. Let us begin by briefly reviewing the original EAB effect in an ideal situation (see Fig. 1(a)). The wave packet of an incident particle with charge q splits into two parts, which enter long Faraday cages. In each cage, the electrical potential V i (i = 1, 2) is switched on after the wave packet enters. The duration of the voltage pulse should be sufficiently short to ensure that each potential is switched off well before the particle exits. The purpose of this arrangement is to avoid local overlap of the particle and the electromagnetic field. Let the wave function be ψ 0 (x, t) = ψ 1 (x, t) + ψ 2 (x, t) in the absence of the potentials, where ψ 1 and ψ 2 represent the two parts. The electric potential modifies the wave function as ψ(x, t) = ψ 1 (x, t)e −iq∫V 1 dt/ℏ + ψ 2 (x, t)e −iq∫V 2 dt/ℏ , (1) and the interference fringe is determined by the phase difference. This EAB effect can also be described in an alternative LCFI approach 4,7 . The essence of this approach is summarized as follows. (i) The Lagrangian governing the interaction between a charge and an external field is universally represented by the local overlap between the external field and that generated by the charge, together with the incorporation of the Lorentz covariance (principle of relativity). (ii) This Lagrangian reproduces the results derived from the potential-based approach for the classical equation of motion and the topological AB effect 4,7 . In this framework, the electromagnetic interaction between the two entities is given by the general Lorentz-covariant Lagrangian where F µν and F (q) µν are the external electromagnetic field tensor and that generated by the charge, respectively. In our arrangement of the charge with an external electric field, the interaction Lagrangian is reduced to where U q denotes the energy produced by the interaction between two electric fields: the external E and E q produced by the moving charge. Fig. 1(b) shows a possible configuration of the external electric field when the potentials V 1 and V 2 (of Fig. 1(a)) are switched on. The essential condition of a non-overlapping particle and E is satisfied. Nevertheless, their interaction is manifested in the overlap of E with E q (not with the position of the charge) in the Lagrangian of Eq. (4).
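To get a feeling for the magnitudes involved in Eq. (1), note that the relative phase is of order qΔV·τ/ℏ for a potential difference ΔV applied over a time τ. The sketch below evaluates this for a Cooper pair (q = 2e); the specific voltages and the 1 ns interaction time are illustrative numbers, not values from the paper.

```python
E_CHARGE = 1.602176634e-19   # C
HBAR = 1.054571817e-34       # J*s

def eab_phase(q, delta_v, duration):
    """Magnitude of the relative EAB phase, q * deltaV * t / hbar, i.e. the size
    of the phase difference implied by Eq. (1) for a constant potential
    difference deltaV held for `duration` seconds."""
    return q * delta_v * duration / HBAR

if __name__ == "__main__":
    q_pair = 2 * E_CHARGE                      # Cooper pair charge
    for dv in (1e-7, 1e-6, 1e-5):              # illustrative potential differences, V
        print(f"deltaV = {dv:.0e} V, t = 1 ns -> phase ~ {eab_phase(q_pair, dv, 1e-9):.2f} rad")
```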
The moving charge q with speed u along the x axis generates the electric field For the lower path (region I), we find from Eqs. (4) and (5) that where V 0 = E · dx = V 1 − V 2 is the potential drop across the two regions. Notably, L in is independent of the speed of the moving charge. Similarly, for the upper path (region II), we obtain Therefore, the phase difference accumulated by the interaction is demonstrating that the EAB phase shift (Eq. (2)) is reproduced in the field interaction approach. Here, we have considered infinite planar conducting plates generating the potential difference, but the phase shift in Eq. (8) can be verified for an arbitrary geometry of E with the potential difference V 0 across the two regions. III. ELECTRIC AHARONOV-BOHM EFFECT IN A COOPER PAIR BOX AND THE GEOMETRIC POTENTIALS. Next, we demonstrate how the loop-free EAB effect appears in a superposed state of different charges. An SCB 8 , an artificial two-level quantum system composed of superconducting circuits, is an ideal system for realization of the loop-free EAB efffect. Its quantum state |ψ 0 (t) is composed of a superposition of two different charge states |q and |q ′ : where q − q ′ = 2e, implying that |q contains an extra Cooper pair than |q ′ . As discussed above, the charge in the SCB interacts with an external electric field (E) at a distance (see the various configurations shown in Fig. 2). The quantum dynamics of an SCB is more complicated than the case of the EAB effect in a two-path interferometer. Nevertheless, it is instructive to analyze the limit of negligible charge transfer between |q and |q ′ . In this limit, the quantum state evolution (modified by the distant electric field) is equivalent to that of a two-path interferometer. V is the scalar potential at the position of the SCB. Switching the voltage is not required here, in contrast to the original EAB setup of moving charges. The relative phase shift is given by The problem with this result is that, unlike the original EAB phase given in Eq. (2), the phase shift of Eq. (11) remains undetermined, as V at a single position is not a quantity with a definite value. This ambiguity is removed by considering the difference in the field interaction energies of the two charge states. The interaction energy between charge q and E is evaluated from Eq. (4) and can be rewritten in the instructive form where is a type of scalar potential determined by the overall distribution of E. This "geometric potential" (V G ) plays a similar role as an electric scalar potential but is different from the latter, as V G at a given position is uniquely determined by the distribution of E. From Eq. (12) we obtain the state evolution and the well-defined relative phase This constitutes the EAB effect in an SCB without forming a loop of the particle's path. As the EAB phase shift is entirely determined by V G , it will be useful to evaluate its values for various cases (see Fig. 2). First, consider a capacitor composed of a pair of two infinite parallel conducting plates ( Fig. 2(a)). The field interaction energy is equivalent to that obtained in Eq. (6), and we find that where V 0 is the voltage drop across the two plates. Similarly, in the presence of two such parallel capacitors ( Fig. 2(b)), for the two voltage drops V 0 and V ′ 0 (measured from the SCB position) across the upper and lower capacitors, respectively. In fact, V G can be evaluated for a capacitor with an arbitrary shape (Fig. 2(c)). 
The external field can be written as E = E 0n , wheren is the unit vector perpentdicular to the capacitor surface, and Eq. (12b) is reduced to By noting that the potential drop across the capacitor can be expressed as we obtain the relation where Ω is the solid angle formed by the capacitor geometry. In all cases of Fig. 2, the EAB phase shift in Eq. (14) is determined by the geometry-dependent potential V G and not by the voltage difference V 0 . This result is in contrast with the original EAB effect where only the voltage difference of two interfering paths matters. The EAB effect discussed above can be demonstrated in a realistic SCB circuit (see Fig. 3). An experimental SCB circuit is controlled by the gate voltage V g . The circuit has two capacitances, the junction capacitance C J and gate capacitance C g . A new component here is the inclusion of the geometric potential (V G ) associated with the electric field that is spatially separated from the circuit. The electrostatic energy of this system has the form where V J is the voltage across the tunnel junction. Josephson coupling leads the relation between the phase variable (φ) and where C Σ = C J +C g is the total capacitance. Including Josephson coupling (with the constant E J ) as well, the Lagrangian of the system is given by (omitting constant terms) (19) Adopting the standard procedure of the Legendre transformation, we obtain the Hamiltonian where E c = (2e) 2 /2C Σ is the charging energy of a single Cooper pair. The number of excess Cooper pairs (n) of the island satisfies the commutation rule [φ,n] = i and is limited ton = 0, 1 in an SCB. The effects of the geometric and gate voltages are included in the variables n G = C Σ V G /2e and n g = −C g V g /2e, respectively. The eigenvalues of this Hamiltonian are therefore, the evolution of the quantum state depends on V G . The EAB effect can be probed by investigating the V G dependence of this spectrum. A possible experiment requires the standard SCB circuit 8,9 with the incorporation of an external electric field spatially separated from the circuit. The EAB phase shift can be extracted from the V G dependence of the Rabi oscillation and the energy splitting. IV. DISCUSSION. Let us discuss several notable aspects of our results. First, the loop-free EAB effect in an SCB cannot be properly accounted for by the conventional scalar potential, as the latter does not provide a definite value at a given position (see Eq. (11)). The effect is instead described by the geometric potential V G (Eq. (12b)), which is determined by the geometry of the external field distribution and not simply by the potential difference between different positions. In other words, the EAB effect in our arrangement is understood only by specifying the local overlap of the external field and the field generated by the interfering charge (Eq. (4)), demonstrating the locality of the interactions. Second, consider charge redistribution on the conducting plate induced by the SCB charge q (Fig. 4). This may influence the interaction between E q and E, and its consequences should be clarified. For simplicity, the conductor is assumed to be ideal; the charges on its surface are free to move in response to q. A naive expectation would be that the field E q generated by q is compensated by the field E i generated by the induced charges; E i + E q = 0. If this is the case, the interaction between E q and the external field E would be completely removed. 
This would result in the disappearance of the EAB effect in the SCB. However, this naive expectation is incorrect, as the quantum nature of E i is not taken into account. In a quantum mechanical treatment, only the expectation value of E i + E q vanishes, whereas the interaction between q and the external field is not shielded at all, as we show below (see also Ref. 4). The charged particles in the conductor contribute to the field interaction Lagrangian as where q j and V G (r j ) are the charge and geometric potential (defined as Eq. (12b)) at position r j on the conductor. For a capacitor with an arbitrary shape (Fig. 2(c)), the geometric potential (Eq. (17c)) depends only on the solid angle formed by the capacitor and is independent of r j . Therefore, the interaction Lagrangian L ′ in = −( j q j )V G is independent of the redistribution of the particles in the conductor, implying that charge redistribution does not affect the interactions between E q and E at all. The EAB interference is unaffected by the induced charges of the conductor. We can equally apply this argument to the original topological EAB effect. In addition, the interaction between E q and E i also does not affect the EAB effect, as it is independent of E. Third, although our study is focused on the simpler EAB effect, it is also possible to demonstrate a magnetic AB effect without a loop. Moving particles with superposed charge states are necessary to achieve it. This can be realized, for example, by utilizing the Andreev reflections in superconductor-metal hybrid junctions 10 . Finally, note that the EAB experiment of the original form (as in Fig. 1(a)) has never been performed. This is primarily because its realization would require extremely fast switching of the electric potential at one of the Faraday cages placed along the path of the charged particle (see, e.g., Ref. 11). This technical difficulty does not exist in our SCB analogue. The interference of the two different charge states instead of two spatially separated paths is manifested in the qubit's interaction with a static external field at a distance. The elimination of the requirement of fast switching of the electric potential would enable much easier realization of the ideal forcefree EAB effect. V. CONCLUSION. In conclusion, we have predicted the scalar AB effect without a loop in a Cooper pair box interacting with an external electric field at a distance. The superposition of different charge states eliminates the requirement of loop geometry for the interferometer. The phase shift is given by the charge-state dependence of the field interaction energy and is universally represented by the geometric potential. Our proposal provides an easy way to realize the ideal EAB effect, as the setup does not require a pulse experiment for fast switching of the potential, which has been the major technical obstacle for its observation.
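Referring back to the SCB Hamiltonian of Eq. (20) and its spectrum, a minimal numerical sketch of how the level splitting depends on the geometric potential is given below. The two-level, charge-basis form used here (diagonal charging energies E c (n − n g − n G )² for n = 0, 1 and an off-diagonal Josephson coupling −E J /2) is the standard single-Cooper-pair-box Hamiltonian and is consistent with the quantities defined in the text, but it is written from the general SCB literature rather than copied from the paper; the parameter values are arbitrary.

```python
import numpy as np

def scb_levels(E_c, E_J, n_g, n_G):
    """Eigenvalues of a two-level single-Cooper-pair-box Hamiltonian in the
    charge basis {|n=0>, |n=1>}:

        H = sum_n E_c (n - n_g - n_G)^2 |n><n|  -  (E_J / 2)(|0><1| + |1><0|)

    n_g and n_G are the gate and geometric offsets defined in the text."""
    n_off = n_g + n_G
    diag = [E_c * (n - n_off) ** 2 for n in (0, 1)]
    h = np.array([[diag[0], -E_J / 2.0],
                  [-E_J / 2.0, diag[1]]])
    return np.linalg.eigvalsh(h)

if __name__ == "__main__":
    E_c, E_J, n_g = 1.0, 0.2, 0.5          # energies in units of E_c; biased at charge degeneracy
    for n_G in (0.0, 0.05, 0.10):          # effect of the geometric potential
        lo, hi = scb_levels(E_c, E_J, n_g, n_G)
        print(f"n_G = {n_G:.2f}: level splitting = {hi - lo:.4f} E_c")
```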
3,990
2018-05-07T00:00:00.000
[ "Physics" ]
Smart and Adaptive Architecture for a Dedicated Internet of Things Network Comprised of Diverse Entities: A Proposal and Evaluation Advances in 5G and the Internet of Things (IoT) have to cater to the diverse and varying needs of different stakeholders, devices, sensors, applications, networks, and access technologies that come together for a dedicated IoT network for a synergistic purpose. Therefore, there is a need for a solution that can assimilate the various requirements and policies to dynamically and intelligently orchestrate them in the dedicated IoT network. Thus we identify and describe a representative industry-relevant use case for such a smart and adaptive environment through interviews with experts from a leading telecommunication vendor. We further propose and evaluate candidate architectures to achieve dynamic and intelligent orchestration in such a smart environment using a systematic approach for architecture design and by engaging six senior domain and IoT experts. The candidate architecture with an adaptive and intelligent element (“Smart AAA agent”) was found superior for modifiability, scalability, and performance in the assessments. This architecture also explores the enhanced role of authentication, authorization, and accounting (AAA) and makes the base for complete orchestration. The results indicate that the proposed architecture can meet the requirements for a dedicated IoT network, which may be used in further research or as a reference for industry solutions. Introduction Recent advances in 5G [1][2][3][4] and the Internet of Things (IoT) have demonstrated that the expectations and requirements for the next generation of communication systems can be defined as a network of heterogeneous devices, applications, sensors, access technologies, and stakeholders which come together for a common purpose. Diverse cyber-physical objects as part of the collective requirement shall have to be modelled together as part of a dedicated IoT network [5,6]. This network shall have to support heterogeneity among the different stakeholders and services [6,7]. Therefore, it is very challenging to perceive all the different requirements and the stakeholders that may come together for a common purpose at one point in time. Connecting a kind of device (such as a smartwatch, a health monitor, and other similar perceived IoT devices) to a network may not require connecting it to the internet comprising of a diverse set of entities. Rather a dedicated IoT network may require diverse users, devices, sensors, applications, communication service providers, and other different stakeholders coming together for a common purpose. An IoT network required for a use case such as online gaming would require a software defined network supporting a smart and adaptive environment to handle its various requirements. By virtue of its name, this internet or network of things is an evolving and heterogeneous entity [2,8]. 1. Describe the role of different communication service providers, applications, domain providers, and other service providers in such a setup. 2. Architect a system that can meet the requirements of the above scenario. 3. Evaluate the proposed competing architectures' functional suitability, performance, and modifiability. 
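To make the notion of a smart and adaptive environment a little more concrete, the sketch below shows one way a context record from the gaming user's sensors could be mapped to a network-slice request. All class names, fields and thresholds are hypothetical illustrations of the kind of context-to-policy mapping such an orchestrator would need; they are not taken from the evaluated architectures.

```python
from dataclasses import dataclass

@dataclass
class GamingContext:
    """Sensor-derived context of a gaming user (all field names are hypothetical)."""
    heart_rate_bpm: int
    location: str
    facial_expression: str   # e.g. "neutral", "excited", "frustrated"

@dataclass
class SliceRequest:
    """Requested network parameters for the gaming session."""
    bandwidth_mbps: int
    max_latency_ms: int

def adapt_slice(ctx: GamingContext, base: SliceRequest) -> SliceRequest:
    """Raise the QoS target when the sensors suggest an intense gameplay phase.
    The thresholds are invented and serve only to illustrate context-based
    orchestration in a dedicated IoT network."""
    intense = ctx.heart_rate_bpm > 110 or ctx.facial_expression == "excited"
    if intense:
        return SliceRequest(bandwidth_mbps=base.bandwidth_mbps * 2,
                            max_latency_ms=max(5, base.max_latency_ms // 2))
    return base

if __name__ == "__main__":
    base = SliceRequest(bandwidth_mbps=50, max_latency_ms=20)
    ctx = GamingContext(heart_rate_bpm=125, location="cell-4711", facial_expression="excited")
    print(adapt_slice(ctx, base))
```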
The primary purpose of this research is to identify and evolve a representative use case from the industry, get change scenarios from the domain experts, propose candidate architectures and perform a scenario-based software architecture analysis of the existing system and the proposed architectures. The evaluation is focused on the maintainability, functional suitability, and performance of the proposed architectures. Furthermore, the development effort for the candidate architectures under different scenarios was estimated using expert judgement. As expected, different architectures are suitable for different sets of requirements. However, it is evident from the analysis that the "smart AAA agent" fulfills the largest set of requirements and change scenarios, requires similar initial effort to implement and requires significantly less effort to maintain and evolve. The Gaming Inc use case was chosen by the industry experts primarily for its business significance [19]. Furthermore, it meets the requirements of needing a smart and adaptive environment in a dedicated IoT network. This use case also contains the basic requirements of an IoT use case with different sensors, such as a heart rate sensor, facial expression sensor, vibration sensor, a gyroscope sensor, and other sensors required for providing a real-life experience to the gaming user. The use case was further evolved during the interview and workshop with industry experts. We identified the additional requirements of multi-stakeholder support (i.e., a gaming user could freely choose their operator providing the high-speed bandwidth for the game). For example, a user should be allowed to have a gaming contract with the gaming company and should not be bound to choose a communication service provider for it. The gaming company should have a separate contract for the high-speed bandwidth with communication service providers. The user should also be able to get seamless gaming services over the different mediums of the internet. Other stakeholders can also provide content, devices, and sensors for the gaming company. There could be a separate service provider for additional services such as billing-as-a-service. This use case also requires autonomic network management [20] based on the stakeholder policies and the current context of the gaming user, such as their heart rate, geographical location, facial expressions, and many more. It also requires efficient monitoring of the entities in the dedicated IoT network for their quick assimilation and context-based changes. The following are the main contributions of the paper: C1. Described a representative industrial use case: Identification, exploration, evolution, and presentation of an industrial use case that is representative of a dedicated IoT network with major requirements as a part of enhanced mobile broadband spectrum and making it available for the research community. This is an important contribution since, as described above, the gaming domain will have significant future growth, and it presents unique challenges for a dedicated IoT network. Following a systematic approach (Section 3), we have further elaborated the use case in several fundamental ways, which include: the addition of a multi-stakeholder perspective, the ability to have different communications service providers and contracts, scalability to accommodate additional sensors, efficient monitoring and autonomic network management, and billing-as-a-service. C2. 
Proposed candidate architectures: two candidate architectures for addressing the challenges and meeting the requirements of the identified use case. C3. Evaluated the proposed architectures: scenario development and a detailed scenario-based architecture analysis of the proposed candidate architectures and systems with leading domain and IoT experts, together with an expert-judgement-based effort analysis of the candidate architectures. The remainder of the paper is structured as follows: Section 2 briefly presents the related work. In Section 3, we describe our research methodology. Section 4 discusses the threats to validity in the study. Section 5 presents the study results in terms of the three main contributions (C1-C3) of the paper. Section 6 presents a discussion of the results and the updated architecture based on the evaluations. Section 7 concludes the study and describes possible future work. Related Work Dedicated IoT networks comprise diverse cyber-physical entities coming together for a common cause. Existing research [5] elaborates that there can be different physical and virtual objects in an IoT network and that the modeling of real-life entities into virtual objects is a major challenge that needs to be addressed. Vlacheas et al. [6] discuss the major challenges and issues in a dedicated IoT network and list some of them, such as the heterogeneity among connected objects and the unreliable nature of associated services. The authors also suggest a cognitive management framework and exemplify it with a smart city model. The mobile cloud gaming report [19] mentions the scope of cloud gaming as a big opportunity for 5G and IoT. We build on this report and evolve the gaming use case from the industry in discussions with experts by bringing in the multi-stakeholder and multi-service-provider model. The rapid evolution of such networks with the evolution in technology and uses has also been discussed in the literature [2,8]. The role of middleware or an agent-based architecture has also been advocated by other researchers and experts [11][12][13]. Therefore, in this paper, we report the proposed candidate architectures, both having a distributed agent-based architecture, and their evaluation. Furthermore, the need for context-aware dynamic behavior, including authentication, authorization, and accounting, has been discussed extensively in research artifacts such as [14,15,18]. Distributed, decentralized edge-/fog-/cloud-based architectures [21][22][23] have been discussed and evaluated for some IoT scenarios. However, an architecture, and its evaluation, is still needed for heterogeneous devices communicating over heterogeneous networks with various service providers coming together in a dedicated IoT network. Such an architecture also needs to consider the highly evolving nature of the requirements and the diverse stakeholders that come together as part of the network. In this paper, we identify and elaborate the needs of such a business use case and propose and evaluate candidate architectures to fulfill the requirements of the case using systematic and rigorous architectural design and evaluation approaches. Several reference architectures have been proposed for the IoT [24]. The most relevant for our work is the European Telecommunications Standards Institute's Machine to Machine (oneM2M TS-0003) (ETSI-M2M https://www.etsi.org/technologies/internet-of-things, accessed on 18 January 2021) that provides a high-level security architecture. 
Their proposed three-layered architecture handles authentication, identity management, authorization, and security administration in the security functions layer. Our work complements this standard by adding an architecture that would allow multiple service providers to collaborate in a dedicated IoT network. In the literature, several frameworks for developing IoT systems have been proposed [25,26]. However, this is not the focus of our work. From the perspective of the current study, IoT systems leverage the features of a dedicated IoT network (for an overview of existing research on authentication and authorization for IoT at the application layer, please refer to the review by Trnka et al. [27]). As part of our research, the proposed smart AAA agent and static AAA agent distributed architectures for dedicated IoT networks provide a baseline for the development of novel applications and user experiences. Methodology We now briefly describe our systematic approach to arrive at the three main contributions (as listed in Section 1). Our approach is based on Hofmeister et al.'s [28] general model of software architecture design. We have used their approach as it synthesizes five of the leading industrial approaches for architecture design. Figure 1 provides an overview of our approach and annotates the main contributions. C1-Describing the Representative Use Case We used two main sources to gather the requirements for a representative use case of a dedicated IoT network: industrial whitepapers (e.g., the TM Forum whitepaper on 5G monetization [29]) and interviews with various domain experts from the industry. We chose an online gaming use case from the "5G IoT lab" of a leading telecommunications vendor since online cloud gaming is recognized as a promising business opportunity [19] by the industry experts. The use case was evolved based on the interviews with four leading industry experts. These experts provided the perspectives of various key stakeholders due to their experience and current roles. Brainstorming with the experts helped evolve the use case with multiple practical aspects of a real-world IoT scenario. These discussions helped evolve the use case beyond the boundary of one communication service provider and one gaming provider. They emphasized the role of the different services, content, sensors, and devices that different stakeholders can bring to such a system to make it a more meaningful, smart, and adaptive dedicated IoT network. These interviews helped develop a multi-stakeholder and service-provider model. The interviews were structured as four 90-min, one-on-one workshops with the industry experts. We followed the guidelines collected by Runeson and Höst [30] to design and conduct the interviews. A detailed presentation of the use case from the '5G IoT lab' was made as a baseline. This was followed by a discussion of limitations and improvement suggestions by the experts. All interviews were recorded and later transcribed. All improvement suggestions and scenarios identified by the analysis of the transcribed interviews were sent back to the experts for validation. Only one of the four experts added some additional reflections. Then the validated input from all individual experts was consolidated. 
We used the Zachman framework to break down all the requirements and policies for the dedicated IoT network use case into context-aware authentication, authorization, and accounting requirements, looking at AAA from an enhanced perspective that covers the complete set of requirements. The Zachman framework has been used in implementing enterprise architectures [31,32]. It uses the primitive interrogatives what, how, where, who, when, and why to describe the desired system behavior [31,32]. Table 1 only exemplifies the corresponding dynamic AAA requirements based on the Zachman framework (the results of the study are reported in Table 2, where the color coding of the text corresponds to that of the basic interrogative). The updated use case and associated scenarios based on the consolidated input from all experts were again reviewed by all experts. S1. Acquisition of a premium Gaming Customer's new sensitive information, such as the heart rate of an existing customer of a partner communication service provider (CSP), is to be done in the Gaming Inc cloud/server using the secure REST interface and following the security policy, when a new request is received at the Gaming Inc server for bio-feedback-based gaming, because it is very important, sensitive, and urgent information. S2. Access to premium Gaming Customer sensitive information, such as the heart rate of an existing customer of a partner CSP, is to be conducted from the Gaming Inc cloud/server using the secure REST interface and on encrypted state/range information as required by the game, rather than the original heart rate value, when a new request is received at the Gaming Inc server for bio-feedback-based gaming, because it is very important, sensitive, and urgent information. S3. Network slice information provisioning for an existing premium Gaming Customer of Gaming Inc is to be conducted in a new partner communication service provider using the dedicated NSSAI interface, when a new partner communication service provider onboards the partnership, because it is very important operational information for high-speed gaming service delivery and monetization. S4. The gaming app is to require a lower level of configured security for access by the gaming app consumer and their device in the home location, using the secure REST interface, when the consumer is trying to access the game and has pre-registered the location as home in the system, because it is important for security and ease of access in the system. S5. The gaming app is to require the lowest level of configured security for access by the gaming app consumer and their device in the home location and at the consumer's usual time pattern, using the secure REST interface, when the consumer is trying to access the game, has pre-registered the location as home in the system, and the system has learned the usage pattern and classified it as safe, because it is important for security and ease of access in the system. S6. Introduction of a new SP into the dedicated IoT network by the IoT network administrator, in the admin office location, using the secure REST interface, should be performed on demand and seamlessly to allow intelligent enterprise integration and enhancement. 
S7. Event records are to be sent to the billing-as-a-service provider for the consumption of media content from a new content provider just introduced into the dedicated IoT network by the IoT network administration, in the smart AAA agent and IoT network administration, using the secure REST interface; this should be conducted on onboarding of the new content provider and seamlessly, to allow intelligent enterprise integration and enhancement. S8. Event records are to be split, merged, or duplicated for a stakeholder having a different configuration of time zone, calendar, and cycles, in the smart AAA agent and IoT network administration, using the secure REST interface; this should be conducted on onboarding of, or any corresponding change for, a stakeholder and seamlessly, to allow intelligent enterprise integration and enhancement. C2-Developing the Candidate Architectures As suggested by Hofmeister et al. [28], we started with the business use case of enhanced AAA in a dedicated IoT network for gaming (derived using the approach described in Section 3.1). In the next step (see the "requirement analysis & evolution" step in Figure 1), we analyzed these requirements for architecturally significant requirements (requirements that need to be considered when designing the system's architecture). Domain experts were consulted (via interviews and workshops) to identify and prioritize quality characteristics of importance for the given business use case and context. We used these quality characteristics to identify architecturally significant requirements by analyzing their impact on the ability of the system to fulfill them. Next, we performed architectural synthesis (see Figure 1), where we made decisions about the architectural styles and specified the composition of the structural and behavioral elements of the systems. We consulted domain experts (a workshop with an industrial chief software architect) and used our own experience for this synthesis. This resulted in two main candidate proposals: (a) a static rule-based AAA agent and (b) a dynamic smart AAA agent. C3-Architecture Evaluation As the primary purpose of this research is to identify and understand the industry perspective on a dedicated IoT network and to evaluate the existing and proposed candidate architectures and systems, we employ the scenario-based architecture analysis method (SAAM) [28,[33][34][35]. SAAM has been extensively used for evaluating architectures in different domains [36][37][38]. The detailed workshop with the industry experts was conducted based on SAAM. Both candidate architectures were evaluated for the chosen scenarios, which are representative of the key requirements of the dedicated IoT network bringing in the smart and adaptive environment. After assimilation of the inputs from the different experts, a follow-up workshop was conducted to ascertain the validity of the cumulative inputs from all of them. The main steps performed during the workshop were as follows: 1. The candidate architectures were explained to the experts. 2. Scenarios were developed based on the chosen use case as in Section 3.1 above. 3. Each scenario was evaluated for both candidate architectures with respect to the functional suitability, performance, and modifiability quality parameters. The transcripts of the workshop were shared with the experts for validation. The duration of each of these workshops was around 90 min. A follow-up workshop was conducted after assimilation of all the inputs and evaluations to reach a final consensus among the experts. 
Evaluation goals: Software and system architecture analysis was performed in this study, taking the major quality characteristics of functional suitability, maintainability, and performance into consideration. Therefore, the following parameters based on the ISO 9126 and 25010:2011 standards [39] were used for evaluation. The product quality characteristics considered are: (1) maintainability, (2) functional suitability, and (3) performance. There are certainly many other important quality characteristics, such as security and reliability, expected of a mature system. However, as the focus of this research paper is to provide candidate architectures that take care of the disparate requirements of the different stakeholders coming together in a dedicated IoT network, we perform the evaluations on the selected quality characteristics. The effort for setup and enhancements of the proposed candidate architectures under different scenarios is also evaluated as part of this study to analyze the applicability and suitability of the architecture and system under different requirements. Effort estimation was conducted with the following in perspective: (1) the initial setup, i.e., the upfront cost of moving from the current way of working to the candidate solution proposed in this paper, and (2) the change scenarios, which encapsulate the expected changes with a likely impact on the architecture of the system. Leading industry and domain experts were interviewed to discuss the scenarios and change scenarios and perform the scenario-based software architecture analysis of the candidate systems and architectures. A total of eight change scenarios were grouped into six groups and evaluated against the key parameters of maintainability, functional suitability, and performance. The proposed candidate architecture was also evolved with the help of an industry software architecture expert. An effort analysis of two selected change scenarios was performed by two expert program portfolio managers from the industry. These industry experts were chosen based on their expertise and familiarity with this use case and their prior involvement in similar tasks, to avoid any ambiguity and difference in understanding. For the effort analysis with the experts, the project management body of knowledge (PMBOK) [40] was taken as a reference and a detailed work breakdown structure was created for the two chosen change scenarios. The effort analysis was performed while keeping the following external parameters constant: 1. The requirements are well understood by the different stakeholders involved in the development process. 2. The team members' technical competence is adequate for the job, with minimum variance between the members. 3. The team members' domain competence is adequate for the job, with minimum variance between the members. The effort analysis is based on the following relevant estimation techniques, as discussed and suggested by the experts and taken from the PMBOK [40]: 1. Three-point estimating (considering the best-case, most likely, and worst-case estimates and combining them in a beta-PERT distribution [41]; a small computational sketch follows this list). 2. Reserve estimating (with contingency reserves for the risk of the known unknowns of the project). 3. Analogous estimating (utilizing analogous measures for a similar set of activities). 4. Bottom-up estimating (using a work breakdown structure for the initial cost and the change-scenario-based cost). 
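To make techniques 1 and 2 above concrete, the short sketch below computes the classic PERT expected time, te = (o + 4m + p) / 6, and draws Monte Carlo samples from a beta-PERT distribution so that a contingency reserve (for example, the 90th percentile) can be read off. It is an illustrative sketch only: the o/m/p figures are invented man-day values, not the estimates reported in Table 3, and it is not the tooling the authors used.

```python
# Illustrative only: three-point (beta-PERT) estimation for one activity.
# The o/m/p values are invented man-day figures, not the study's estimates.
import numpy as np

def pert_expected(o, m, p):
    """Classic PERT expected time: the most likely estimate is weighted 4x."""
    return (o + 4 * m + p) / 6.0

def pert_samples(o, m, p, n=10_000, seed=0):
    """Monte Carlo samples from a beta-PERT distribution on [o, p]."""
    rng = np.random.default_rng(seed)
    alpha = 1 + 4 * (m - o) / (p - o)
    beta = 1 + 4 * (p - m) / (p - o)
    return o + (p - o) * rng.beta(alpha, beta, size=n)

o, m, p = 8.0, 12.0, 20.0                # optimistic, most likely, pessimistic (man-days)
print(round(pert_expected(o, m, p), 2))  # expected time, about 12.67 man-days
samples = pert_samples(o, m, p)
print(round(float(samples.mean()), 2))                    # sample mean, close to the expected time
print(round(float(np.percentile(samples, 90)), 2))        # a possible contingency reserve (P90)
```

Summing such per-activity expected times over the work breakdown structure gives the task-level figure described in the next subsection.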
Our effort analysis utilizes PERT (program evaluation and review technique) [41], which employs the following types of time in the effort for a task, based on three-point estimation: 1. Optimistic time (o) is the time based on the ideal availability of resources and their ideal productivity. It is the minimum possible time required to accomplish an activity or task, assuming all circumstances are better than normal. 2. Pessimistic time (p) is the maximum possible time required for accomplishing a task, taking the worst-case scenario into perspective. 3. Most likely time (m) is the estimate assuming all circumstances behave as normal. It is the best estimate of the most likely amount of time required to accomplish a task. 4. Expected time (te) is the estimated time for accomplishing an activity or task, taking into consideration that normally not all circumstances fall in line as expected. The expected time te is therefore a weighted average, with the most likely time getting four times the weight of the optimistic and the pessimistic times, i.e., te = (o + 4m + p) / 6. Assuming that there are n activities in a task, the total time estimated for the task is the summation of the expected times of the individual activities. This effort analysis is based on the initial effort and the effort for the two change scenarios selected for the analysis. The detailed work breakdown structure used for estimating effort is presented in Section 5.3 and the estimates are presented in Table 3. Based on their expertise, familiarity with the use case, and the domain, six leading domain and IoT experts were selected as participants for this study. This list also includes two program managers with expertise in effort estimation for such large-scale systems. The profiles of these experts are briefly summarized below. Threats to Validity We undertook several measures to mitigate the various threats to the validity of our approach. These measures and the limitations of the study are briefly discussed below. Interviews: Interviews conducted with different experts from one department could introduce bias. Therefore, the participants were chosen from different teams and departments. As the lead author is employed at the company from which the experts were chosen, extra precautions were taken to ensure no conflicts of interest (e.g., by not involving experts who report directly or indirectly to the lead author). Proper care was taken while conducting the series of interviews to let the participants provide their impartial feedback. These interviews lasted from one to two hours per session to allow ample time to explain the perspective and gather feedback. Several follow-up sessions were conducted to address queries from the experts. However, as all experts are from the same company and domain, the results may mainly reflect a telecommunications vendor's perspective. Workshops: We held one-on-one structured workshops with the experts for the architectural design and evaluation. We made a detailed presentation of the material to each participant and collected their critique and improvement suggestions. For each workshop, the feedback was analyzed and incorporated into the study. Furthermore, the updates were discussed with the workshop participants in follow-ups. Workshops with groups of experts could have led to richer discussions and insights. However, it was practically difficult to book these experts for the same time slot. 
Effort estimation: We consulted two project managers for estimating the development and maintenance effort for the candidate architectures. Both managers independently estimated the effort for the same tasks. This was done to increase the reliability of the estimates. We used PERT as recommended by the PMBOK. In addition, the experts are familiar with the method and use it for effort estimation in their regular work as project managers. With the help of the experts, we also developed a detailed work breakdown structure to assist the task of estimation. We contend that a detailed WBS, combined with an estimation method that the practitioners already use, helps provide realistic estimates. Furthermore, PERT gives greater weight to the most likely values and thus reduces the influence of outliers. When estimating the effort, several parameters were considered constant, e.g., that the requirements are well understood by the different stakeholders involved in the development and that adequate technical competence and domain knowledge are available during the project. These assumptions (although likely to be violated in practice) are necessary to derive an estimate. Even with the limitations introduced by the simplifying assumptions and the inaccuracy of expert-judgment-based effort estimates [42], we think the analysis is sufficient for a relative comparison. Architectural design: Through the use of the Zachman framework, Hofmeister et al.'s model [28], and SAAM, we have used a systematic approach to architecture design in this study. The approach allowed us to identify the use case requirements, identify a subset of architecturally relevant scenarios, and develop and evaluate candidate solutions that can meet these requirements. However, the design decisions in the architecture are heavily influenced by the knowledge and experience of the experts. No systematic endeavor to consider the multiple architectural styles and patterns reported in the literature was undertaken. This is not a considerable limitation of the study, as the experts involved are leading domain and architecture experts in the industry. C1-Use Case The scenarios for this study are based on the online gaming use case as depicted in Figure 2. It has been evolved and enhanced based on interviews (as mentioned in the Methodology, Section 3) with the leading domain experts and by introducing the multiple-service-provider model. Making such an industry lab use case, and its evolution, available to the research community is the foremost contribution of this research artifact. The following are the major contributions towards the evolution of the existing gaming use case to its current form, based on the brainstorming sessions and workshops. 1. A multi-stakeholder view that goes beyond the current perspective of one communication service provider and one gaming company providing all the requirements of the dedicated IoT network for the gaming use case. 2. A gaming user should not be bound to only one communication service provider or only one communication medium for playing the games. 3. There can be multiple communication service providers in a region or country, and the gaming user should be able to play the game (at high speed) irrespective of the communication service provider, since the gaming user has a contract with the gaming company, not the provider, for the game. 4. The gaming company should have a separate contract with the different communication providers for a network slice with high bandwidth for its games. 5. 
Besides, a gaming company may require several other stakeholders to bring their content, sensors, and devices into the gaming ecosystem. 6. There could be various other services, such as billing-as-a-service, which could be provided by one of the providers in the system. 7. Autonomic network management would be a significant requirement for the quick assimilation of all the different stakeholders so that they interwork together, as well as for automatic network changes based on the different contexts of the gaming user and other stakeholders in the system. 8. Efficient monitoring of a large number of different stakeholders should also be a requirement, given the evolving nature of the stakeholders and the frequent context changes taking place in the system. As can be seen in Figure 2a, the provider side comprises the Gaming Inc. application provider and the sensor, device, and content providers. They keep sensitive customer data in their own cloud application and deploy only the core gaming application over the edge application platform provided by the service provider, owing to security and confidentiality reasons. There can be more than one such component (service provider), as depicted in Figure 2. Gaming Inc. buys a private network slice from the service provider(s), identified by S-NSSAI (single network slice selection assistance information), for a guaranteed high-speed gaming experience, as depicted in Figure 2a. The corresponding distributed architecture view in Figure 2b depicts a similar multi-stakeholder view on the right. The middle of Figure 2b depicts the multi-stakeholder view from the communication service provider side and the different parts of the network, such that the multiple "AAA agents", executing in a distributed manner on the edge of the communication network, take care of the requirements dynamically in the complete dedicated IoT network. This distributed "AAA agent" architecture enables a smart and adaptive environment that brings in features such as the following: 1. Assimilation of the different stakeholders in the evolving system. 2. Adaptations based on run-time properties, such as customer usage and heart rate. This multi-"AAA agent" architecture also represents an autonomic network management architecture that intelligently adapts to the contexts and stakeholders in the network and smartly adapts the processing and its location based on the requirements. Efficient monitoring is also an important aspect of this network. Billing-as-a-service and billing-on-behalf can be services provided by one of the service providers in the network, catering to the charging and billing of Gaming Inc. Although the customer gets the bill with the Gaming Inc branding and pays the bill to Gaming Inc, the system utilizes the services of the centralized billing-as-a-service and billing-on-behalf provider for its billing requirements. Therefore, application providers can focus on their expertise and offering, while charging and billing are conducted by the service provider. The different sensors and devices may also be provided to Gaming Inc. by a different provider. The customer at the top of Figure 2a has one contract with the communication service provider for communication services such as data, voice, and messaging. The customer has a separate contract with Gaming Inc. for the gaming app and pays them for the premium gaming services, devices, sensors, and experience. 
Key high-level requirements for such a dedicated IoT network, identified from the whitepapers and refined based on interviews and discussions with the domain and IoT experts, are listed below: 1. The dedicated IoT network should provide a seamless gaming experience across different partners (communication service providers), channels (cellular network, Wi-Fi, wired LAN), and access methodologies (such as 5G-NR, 4G LTE-EPC, Wi-Fi bands) in the enterprise IoT network ecosystem. 2. Secure connectivity across the IoT network (the same security policy across different partners, access channels, and methodologies). 3. A convergent and holistic view of the ecosystem for the different stakeholders in the dedicated IoT network. 4. The game is free of charge to the customer. Gaming Inc charges the customer for features (high-speed gaming over a dedicated network slice), devices, sensors, and characters (avatars). 5. Gaming customer activity-/inactivity-based behavior for security and session management. 6. Customer usage pattern-based dynamic and enhanced authentication, authorization, and accounting (on Gaming Inc., edge, or device) for catering to the different requirements in the dedicated IoT network, as described earlier using the Zachman framework. 7. The content provider provides premium media content, including famous proprietary profiles, avatars, and their related videos for the game. 8. The content provider charges Gaming Inc for the premium content accessed by its subscribers. 9. Seamless integration of new stakeholders and enterprises in the dedicated IoT network. 10. Gaming Inc retains customer-sensitive data on its own server and not on the edge cloud provided by the service provider. 11. Sensitive information is to be passed as a range or state, as required for edge computing, rather than the sensitive value itself. 12. Gaming Inc has network slices with multiple communication/internet service providers, and agreements for the gaming experience as well as for charging and billing. 13. Convergent billing and billing-as-a-service for the different stakeholders in the system. Eight selected scenarios and change scenarios, as discussed and developed with the various domain and IoT experts, are presented in Table 2. These scenarios have been written in terms of the Zachman framework. C2-Architectures As a baseline for the existing systems, a multi-access edge computing (MEC) [43][44][45] and software-defined network architecture was studied and discussed. Further, based on the requirements of the use case and its scenarios, two candidate architectures were prepared, viz. the "smart AAA agent" and the "static AAA agent". Both architectures support a distributed, multi-agent design that can intelligently take care of the requirements of the dedicated IoT network, with computing performed at the edge or at the server based on the requirement, policy, or context of execution, as depicted in Figure 2. Both systems and architectures derive inspiration from our previous research [8] and utilize the Zachman framework for converting the requirements into a set of authentication, authorization, and accounting statements (a small illustrative sketch of such a statement is given below). They also support a distributed data store and caching among the various agents to handle sensitive data and to meet the specific requirements on whether raw data may be present at a node, as well as various other policy requirements. Both architectures are quite similar except for the top two layers. 
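As a rough, machine-readable illustration of what one such Zachman-style AAA statement could look like, the sketch below encodes a scenario similar to S4 (home-location-based security level) using the six primitive interrogatives. The Python representation, field names, and values are assumptions made for illustration only and are not the authors' actual data model.

```python
# Illustrative only: one Table 2-style scenario expressed as the six Zachman
# interrogatives; this is not the authors' actual data model.
from dataclasses import dataclass, asdict

@dataclass
class AAAStatement:
    what: str   # the data or action the statement governs
    how: str    # the interface or mechanism used
    where: str  # the location of processing or storage
    who: str    # the stakeholder(s) involved
    when: str   # the triggering event or condition
    why: str    # the rationale or policy driver

# Roughly corresponds to scenario S4 (lower security level at the home location).
s4 = AAAStatement(
    what="lower level of configured security for gaming app access",
    how="secure REST interface",
    where="consumer's pre-registered home location",
    who="gaming app consumer and their device",
    when="consumer attempts to access the game from the home location",
    why="security combined with ease of access",
)

print(asdict(s4))
```

A processing engine such as the one described next could match such fields against stakeholder policies to decide the applicable authentication, authorization, and accounting behavior.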
The "knowledge processing layer" in both the architectures is the seat of intelligence and has an engine which governs all other components in a layer above and below. Following is a brief explanation of the two proposed architectures: Smart AAA Agent Architecture The smart AAA agent architecture as depicted in Figure 3a has its smart and adaptive intelligence in the top two layers viz. "Knowledge Base and Presentation Layer" and the "Knowledge Processing Layer". For the sake of a proof of concept and exemplification, the architecture python knowledge engine (PyKE) has been used for the artificial intelligence in the system [46,47]. PyKE uses fact-bases and rule-bases as part of the knowledge base in the system with the expert system engine processes for bringing in the intelligence in the system. This system employs a decision tree as part of the logical component to resolve all the existing requirements and policies of the system as introduced by the different stakeholders in the dedicated IoT network. The "IoT network requirement knowledge base" is responsible for maintaining the complete knowledge base comprising of the evolving requirements of the dedicated IoT network. The "stakeholder knowledge base" is responsible for maintaining the evolving stakeholder ecosystem and their individual policies. "Presentation knowledge base" is responsible for maintaining the knowledge base for the different presentations required in the dedicated IoT network. The "knowledge processing engine" is the heart of the system and it interacts with all other components in an engine and is also responsible for the coordination between the different agents. It is this component which brings about the smart and adaptive environment by intelligently allowing the assimilation of stakeholder and requirements on the go. It also supports the learning-and context-based execution and processing and supports autonomic network management. This smart and adaptive engine has the functionality to automatically assimilate the evolution and many stakeholders in the system so that they can work together. The introduction or removal of any stakeholder is processed keeping the complete system in perspective, thus smartly adapting the environment for it. Based on the context of the gaming user as well as other different stakeholders in the network, autonomic network management is accomplished in the system by smart and automatic distribution of the processing, AAA, data storage, and caching in the system based on the knowledge base defined for efficient management of the system. For example, suppose there is a policy as part of the system to store sensitive data only on the secure central server. In that case, a cache-based category is propagated on the edge automatically for quicker processing. On the other hand, without such restriction, such data could be processed at the edge for the most optimum management of resources and providing low latency. The knowledge base in this component is also responsible for the efficient monitoring of the different gaming users and the large number of stakeholders in the system in an efficient manner. Based on the policies and rules, the monitoring could be conducted on the edge or the server in the most efficient manner. Within this component, the fact bases from the different stakeholders are assimilated into the "master knowledge base" using a backward chaining mechanism keeping into perspective the overall requirements from the dedicated IoT network. 
Forward chaining is then employed over the master knowledge base to derive the dynamic AAA entries that take care of the different requirements in the dedicated IoT network. Furthermore, to introduce learning into the system, a clustering-based anomaly detection component ("learning system") is introduced that learns from the usage patterns in the network and clusters access times into safe and unsafe periods for different levels of security, as per the requirements. This learning system only takes care of the requirements identified during this research. However, it can be enhanced with new algorithms for any other learning required in the system, such as semi-supervised learning of the context parameters of a user based on the clustered users, or reinforcement learning based on positive and negative feedback to the system. The "distributed data cache and sync" is responsible for maintaining the cache of data between the relevant agents in the different parts of the dedicated IoT network. The remaining components of the system are responsible for providing service assurance to the different stakeholders via the network and device adapters. The sequence diagram in Figure 4 represents the series of steps performed in an example scenario depicting autonomic network management. In the case of a new requirement to use the heart rate of the gaming user as a contextual parameter for the game, the IoT network knowledge base is updated. A new stakeholder for a heart rate monitoring device/sensor and its knowledge base is added to the "stakeholder knowledge base". The perspective-based presentation of this heart rate to other stakeholders, and the view of the system for the stakeholder bringing in the heart rate measurement, are added to the "presentation knowledge base". The "knowledge processing engine" has a subscription for any change to the knowledge bases and gets the corresponding update. This heart rate may be sensitive data, being a health parameter of the gaming user, so it needs to be saved only on a central and secure server, with no unauthorized access. Thus, the knowledge processing engine updates the "master knowledge base" so that categorization is performed on the heart rate values in the secure central server and only the category is passed to the edge server near the gaming user, for quick processing of the game and reduced latency in the system. Therefore, the policy of encrypted storage of sensitive parameters only on the central and secure server is honored, yet low-latency gaming is also facilitated in the smart and adaptive environment. The knowledge processing engine, further based on these updates, generates the dynamic AAA using the forward chaining mechanism and updates the "Dynamic AAA" component, which it additionally sends to the adaptation and other layers for its realization over the network and the devices. Based on the context, the "distributed data cache and sync" is also updated and the data propagated. Similarly, efficient monitoring of the different stakeholders and their contextual parameters during run time is optimized based on the different policies and requirements in the system. Figure 5 depicts the interaction view of the main components in the top two layers of the smart AAA agent. The leftmost "Interface" segment covers the three different requirements of interfacing with the admin of the system for the knowledge bases. The same three knowledge bases are the ones that construct the domain knowledge base. 
It starts with the first component, the "IoT network requirement knowledge base", where all the requirements of the dedicated IoT network are acquired, defined, and put together. The second component, the "Stakeholder knowledge base", is responsible for bringing together the requirements of all the stakeholders catering to the dedicated IoT network, along with their own knowledge bases of requirements and policies. This component holds both the knowledge base corresponding to the ontology of the different stakeholders and their own policies and requirements. The third component, the "Presentation knowledge base", takes in the knowledge bases corresponding to the different presentation views in the system for the different stakeholders. This has to be defined in such a way that each stakeholder gets to see exactly the information that they are allowed to see in a specific context. The second vertical segment, "Domain", comprises all the requirements, the stakeholders fulfilling them, and their corresponding contextual views of the system. It constitutes the realization of the system. The third vertical segment, "Knowledge base", is responsible for maintaining the complete repository of knowledge bases in the system in a format that is accessible to the "Knowledge processing engine". There is a subscribe-publish pattern, which the "Knowledge processing engine" employs on the different knowledge bases to fetch, process, and receive any updates. This segment also contains a "Master knowledge base", which is the post-processing knowledge base created for the system by the "Knowledge processing engine". The fourth vertical segment, "Processing", has already been mentioned when we discussed the "Knowledge processing engine", which is the focal point of the system. It is this component that brings the intelligence to the system. It also has a "Learning system" connected to the "Knowledge processing engine", which caters to the various learning requirements in the system based on the inputs received from the different stakeholders and their context in the domain. The fifth vertical segment, "Data", has the two data stores. The "Dynamic AAA" contains the master knowledge base requirements translated into an enhanced AAA model based on the Zachman framework, as explained earlier in the paper and exemplified in Tables 1 and 2. The "Distributed data cache & sync" component is updated by the "Knowledge processing engine" with the distributed data that needs to be on this agent and any cache to be maintained on the agent in the system. Static AAA Agent Architecture The static AAA agent architecture, as depicted in Figure 3b, contains static rules for the various requirements in the dedicated IoT network. To bring the smart and adaptive element into such an environment, it is integrated with a learning system to enable decisions based on the different contexts. The static AAA agent architecture, as depicted in Figure 3b, also has its smart and adaptive intelligence in the top two layers, viz. the "Knowledge Base and Presentation Layer" and the "Knowledge Processing Layer". For the sake of a proof of concept, exemplification of the architecture, and comparison, the Python knowledge engine (PyKE) has also been used in this system and architecture. This system also employs a decision tree as part of the logical component to resolve all the existing requirements and policies of the system as introduced by the different stakeholders in the dedicated IoT network. 
However, this architecture does not employ the backward and forward chaining mechanisms for the intelligent assimilation of the requirements and stakeholders and the generation of the master knowledge base. Instead, it requires manual assimilation and creation of all the requirements and stakeholders in the system by experts. With a large number of stakeholders and requirements, this may be a herculean manual task. Furthermore, it would require redrawing the complete knowledge base with each change in the stakeholders or the requirements. Therefore, this system has a very restricted autonomic network management aspect, altering only the processing and the data storage and retrieval based on predefined rules. It does not support the automatic assimilation of a new stakeholder into the system. Instead, any addition or removal of a stakeholder needs to be handled with costly manual reprogramming of the master list of complete requirements. However, this system and architecture also supports a distributed architecture and can take care of the requirements of the dedicated IoT network by employing the Zachman framework's enhanced model of AAA, as depicted in the model of the "smart AAA agent". The "master list of complete requirements" is responsible for maintaining the knowledge fact and rule base as created manually by the experts, taking all the requirements and stakeholders into perspective. The "presentation knowledge base" is responsible for maintaining the knowledge fact and rule base for the different presentations required in the dedicated IoT network in this system as well. The "static rule-base engine" is analogous to the "knowledge processing engine" in the other architecture; however, it lacks the intelligence for automatic assimilation of the requirements and stakeholders in the system. It still supports autonomic network management, albeit with static rule and fact bases, and this system needs an integration with an external learning and intelligence system for the learning requirements. The "rule base for requirements to AAA" is utilized by the "static rule-based engine" to generate the AAA for the complete system, analogously to the other architecture. The rest of the components behave in the same manner as in the other architecture. Although this system does not have intelligence of its own, it is lightweight and can be integrated with any other intelligent system. C3-Evaluation Results As part of the scenario-based software architecture analysis, the eight scenarios in Table 2 were classified into six groups and the evaluation was assimilated from all six experts participating in this study. The two proposed candidate architectures and systems, the "smart AAA agent" and the "static rule-based AAA agent", were evaluated against the change scenario groups along with the existing network architecture comprising multi-access edge computing (MEC) [48] and network slicing [49] technologies as defined in the 3rd generation partnership project (3GPP) standards. The participants are well versed in the existing network architecture. Therefore, an emphasis was given to explaining and discussing the proposed candidate architectures. Feedback from each participant was recorded, and a discussion was held to reach consensus among them in terms of the scenario-based software architecture analysis for all three systems. This section presents the majority consensus feedback from all participants involved in this study. Table 4 summarizes the evaluation results. 
As part of the evaluation, the experts also mentioned that although a painstaking effort is required for the creation of the exhaustive knowledge base for the smart AAA agent, it helps tremendously in making the system intelligent, agile, and adaptive to any changes. In addition, a distributed system of agents is recommended, with a caching mechanism in the agents at the edge to reduce any latency issues of communication in the network. The existing network architecture does not comply with the requirements. Therefore, the further analysis takes into consideration only the two proposed candidate architectures. The following is a high-level outline of the work breakdown structure (WBS) used for effort estimation: • WBS for the Smart AAA agent (initial effort): definition of the various policies and relevant requirements for the system and its stakeholders, a task that involves Business Analysts (BA), Product Managers (PM), System Managers (SM), and System Engineers (SE). It entails: (i) BAs capturing the business opportunity through workshops and reconciling the requirements for the complete IoT network, (ii) PMs, in consultation with BAs, creating requirements based on the business opportunity, (iii) SMs and SEs defining and creating the knowledge fact bases and knowledge rule base for the system with its forward and backward chaining mechanisms, and translating them into technical requirements for the system, and (iv) SEs configuring the system for the corresponding knowledge bases and using test automation to secure future changes. • WBS for the Static rule-based AAA agent (initial effort): understanding the various policies and other relevant requirements for the system and its stakeholders. This involves similar activities as in the case of the smart AAA agent; however, for PMs, the requirements design may change, as the system now requires more elaborate parameters which need to be specified in the requirements. This activity can be confined to the current set of requirements, and not all the policies need to be modelled into the system. It entails: (i) the same tasks for BAs as in the case of the smart AAA agent, (ii) PMs, in consultation with BAs, creating requirements based on the business opportunity, (iii) PMs and SMs deliberating in detail on the various AAA requirements for the system in the context of the above analysis; they need to understand the policies and requirements from each stakeholder and how they fit into the larger ecosystem, and create one large set of AAA requirements for the system based on the immediate needs of each of the system stakeholders, and (iv) SEs configuring the AAA in the system based on the analysis and inputs from the PMs and SMs as mentioned in the step above, integrating with a learning system for the classification and clustering of usage patterns, and ensuring project integrations with the product base for maintainability and upgrades. To estimate the maintenance cost of the candidate architectures, the following two indicative change scenarios were chosen: 1. A new sensitive piece of information, such as the heart rate value, is now required to be acquired from the gaming user for a new feature. 2. A new service provider is introduced into the dedicated IoT network ecosystem. WBS for the Smart AAA agent (change scenario effort): 1. BAs and PMs to understand the policies and requirements just for the new stakeholder or policy and enlist them. 
It entails for Scenario 1: BAs and PMs introduce the heart rate value in the system, and for Scenario 2: BAs and PMs reconcile the change with the IoT ecosystem. 2. System managers and engineers to create/update the knowledge base for the delta/change requirement. For Scenario 1: SMs incorporate the heart rate fetching feature and SEs implement the necessary configuration and automation, and for Scenario 2: SMs introduce the new service provider and SEs implement the necessary changes in the ecosystem and automation. 3. Addition of the new/updated knowledge base to the system. For Scenario 1: system enrichment of the knowledge base, and for Scenario 2: system enrichment of the knowledge base. WBS for the Static rule-based AAA agent (change scenario effort): 1. BAs and PMs to understand the policies and requirements for the new stakeholder or policy, look at the context of the whole system, and remodel the whole system. It entails for Scenario 1: BAs and PMs elaborate the heart rate value and identify the sensitive categorization, and for Scenario 2: BAs and PMs reconcile the requirements with the ecosystem into which it is introduced. 2. SMs and SEs remodel the AAA for the whole system. For Scenario 1: SMs reconcile with GDPR compliance and translate the sensitive data handling into configuration requirements, and SEs introduce the necessary configuration and automation; for Scenario 2: SMs and PMs complete the technical translation of the new service provider requirements, and SEs implement the necessary configuration and automation. 3. Reconfigure the AAA for the whole system. For Scenario 1: the change is introduced as a new system configuration, and for Scenario 2: reconfiguration with the whole system. 4. Integration and reconfiguration of the learning systems for both scenarios. Table 3 presents the results of the effort estimation analysis, in man-days, for the initial setup and the two change scenarios of both systems, viz. the "smart AAA agent" and the "static AAA agent". Discussion and Updated Architecture The scenario-based software architecture analysis results make it evident that the diverse requirements of the different stakeholders in a dedicated IoT network, and its dynamically evolving nature, cannot be supported by the existing telecommunications systems out of the box. The two proposed candidate systems, i.e., the smart AAA agent and the static AAA agent, fulfill the requirements to different degrees, with the smart AAA agent taking care of almost all the change scenarios discussed and analyzed in the study. As is evident in Figure 6, as well as in the detailed results of the effort analysis, the effort required for the initial setup is similar for both proposed candidate systems and architectures. The smart AAA agent requires considerable effort for the one-time creation of knowledge bases for all possible policies and requirements of the dedicated IoT network; however, the artificial intelligence in the system helps to assimilate them. The static AAA agent, in contrast, only needs to consider the current requirements of the system, but it needs elaborate manual work to plan the AAA for the complete system. With each change scenario, however, the effort for assimilating the change into the ecosystem is small for the smart AAA agent in comparison to the static AAA agent, as is evident in the difference in effort for change scenarios 1 and 2 shown in Figure 6. 
Therefore, for a simple system with low complexity, few stakeholders involved, and a low likelihood of change scenarios, a static AAA agent is as good as the smart AAA agent, and it is much simpler in its architecture. However, for larger systems with multiple involved stakeholders and more dynamic requirements, the smart AAA agent is a clear winner. The proposed candidate architecture for the smart AAA agent has been revised and updated based on the scenario-based software architecture analysis and the feedback from the industry experts. The different scenarios and the distributed architecture for the smart AAA agent bring along a requirement of maintaining a cache of information at the different agents operating in the multi-agent ecosystem. For security purposes, it is deemed necessary not to store sensitive information at the edge. However, this information may be required in a specialized format, such as a range or state rather than the actual sensitive value, for execution in the dedicated IoT network. Another suggestion was containerization of the data for segregating the different stakeholders based on their access rights. Therefore, a corresponding addition to the architecture was made and is presented in Figure 7 for future reference and use. Conclusions This research study has introduced a novel reference architecture and system, the "smart AAA agent", for taking care of the dedicated IoT network requirements in a smart and adaptive environment. The study began with interviews, analysis, evolution, and presentation of an industry use case for online gaming as a representative use case for the enhanced mobile broadband spectrum. We identified three new major requirements: the need to support multiple service providers, billing-as-a-service, and billing-on-behalf for the use case. The candidate reference architecture, along with an alternative architecture and system, was created, presented, and evaluated with leading industry IoT and telecommunications domain experts. We found the Zachman framework very useful for describing the enterprise-level requirements of a system. Furthermore, we found that the general model by Hofmeister et al. [28] provides a systematic and streamlined approach for architectural design in industrial settings. Several relevant scenarios were discussed and a scenario-based software architecture analysis was performed, evaluating the new smart AAA agent alongside an alternative static AAA agent and the existing telecommunication systems, thus identifying the smart AAA agent, with its adaptive and intelligent capabilities, as the most suitable architecture for the given use case and its scenarios. A detailed analysis was performed with the experts for the two proposed candidate architectures and an evaluation was performed of their usability under different circumstances. The smart AAA agent stands out as a better fit for the scenarios in contention, and where there are a large number of stakeholders involved and the requirements and relationships change dynamically. The static AAA agent provides a lightweight system, which is good for smaller systems with more clearly defined initial requirements and fewer change scenarios later; however, it requires far more effort for any change scenario and for introducing dynamicity into the system. In the future, the smart AAA agent can be enhanced with a reinforcement learning model, which can train the system quickly to derive logical decisions on its own. 
Furthermore, the two proposed candidate systems also need to be studied from a performance aspect while executing at different locations. This can help evolve the architecture and also provide a reference system for industry and academia for future developments. Author Contributions: S.P.S.: Conceptualization, methodology, investigation, formal analysis, data curation, writing-original draft preparation, writing-review and editing, visualization, project administration; N.B.A.: Conceptualization, methodology, formal analysis, supervision, writing-original draft preparation, writing-review and editing; L.L.: Conceptualization, methodology, formal analysis, supervision, writing-review and editing. All authors have read and agreed to the published version of the manuscript. Funding: This work has been supported by a research grant for the PLENG project (20170213) and the VITS project (20180127) by the Knowledge Foundation (KKS) in Sweden. Nauman has received additional support from the VITS project (20180127) funded by KKS and by ELLIIT, a strategic area within IT and Mobile Communications, funded by the Swedish Government. Conflicts of Interest: The first author Shailesh Pratap Singh is employed at Ericsson. However, the authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Abbreviations The following abbreviations are used in this manuscript:
13,531
2022-04-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
SHIFTING FROM AUTOGRAPH TO SELFIE: CONSUMER SELF IN THE FACEBOOK WORLD The recent revolution of information technology has diffused Facebook widely among consumers, and it has a considerable impact on the consumer self and modern marketing communication. The purpose of this study is to investigate the acceptance and impact of dematerialized Facebook possessions on the consumer extended self. The theoretical foundations of the Technology Acceptance Model (TAM) and the extended self were used to develop the hypotheses of the study. The current study is based on primary data collected through a self-administered questionnaire among a sample of 327 Sri Lankan undergraduates. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to estimate the path coefficients and test the hypotheses developed in the study. Findings reconfirmed the original TAM relationships, enabling the use of TAM in identifying the predictors for accepting dematerialized Facebook possessions. Moreover, findings revealed that dematerialized Facebook possession usage extends the consumer self. Theoretical and practical implications of the study and directions for further research are discussed. Introduction Facebook has become a part of consumers' everyday life. For instance, worldwide active Internet users number 4.338 billion (Kemp, 2019), and daily active Facebook users number 1.79 billion (Facebook, 2020). This wave creates ample opportunities for the business community. However, "digital technologies have not only created potent new social networks but also dramatically altered how culture works" (Holt, 2016, p. 42). Therefore, it is essential to understand the consumer culture in the realm of Facebook to get a conducive outcome from Facebook-based marketing campaigns. Consumers have different self-images such as the actual self (how consumers see themselves), the ideal self (how consumers would like to see themselves) and the extended self. Sivadas and Machleit (1994, p. 143) defined the extended self as the "contribution of possessions to individual identity". According to Belk (1988, p. 139), "knowingly or unknowingly, intentionally or unintentionally, we regard our possessions as parts of our selves". However, the current digital revolution is altering consumer behavior, and it has considerable implications for the development of the consumer extended self (Belk, 2013). When Belk (1988) presented the extended-self concept, people were using personal computers, and there were no other digital products such as web pages, online games, search engines, social media etc. (Belk, 2013). Belk (2016) discussed some digital modifications of the extended self, and one such modification was dematerialization, which means possessions are no longer material. In the digital age, tangible things such as written communications, recorded music, photos and videos are disappearing in front of our eyes (Belk, 2013, 2016), and it is a sensitive and complex issue (Magaudda, 2011). In the pre-digital world, consumers used material possessions to reflect their identities. Since possessions have become dematerialized in the Facebook world, there is a timely need to clarify whether these dematerialized possessions can extend the consumer self. Consumer intrinsic values differ in predicting general Facebook usage and specific feature usage (timeline, wall, number of friends etc.) (Wijesundara & Xixiang, 2017; Smock, Ellison, Lampe, & Wohn, 2011). Further, specific features are possessions to their users (Belk, 2013; Watkins & Molesworth, 2012). 
As such, the study considers specific Facebook feature usage as dematerialized possession usage. Since Facebook is a new technology, acceptance of dematerialized Facebook possessions can be understood from a technology acceptance perspective. Thus, the current study utilizes the theoretical perspective of the Technology Acceptance Model (TAM) in finding answers to the first research question, while extended-self theory was used as the theoretical foundation to support the second research question. Findings of this study contribute to both academia and industry by improving understanding of the consumer self-concept in the Facebook world. First, past studies have so far paid little attention to the extended-self concept in the digital world (Belk, 2013; Lehdonvirta, 2012). As such, the current study contributes to filling the gap in the existing consumer behavior literature by identifying predictors of the use of dematerialized Facebook possessions and the impact of dematerialized Facebook possession usage on the consumer extended self. Second, from the industry perspective, the findings encourage both digital marketers and social networking site companies to consider the importance of dematerialized possessions in creating consumer identities. Further, investors will benefit since the study provides useful insights into consumer behavior from the perspective of an emerging economy. The remainder of the paper is organized as follows. Section two is devoted to hypotheses development by focusing on the relationships among constructs underpinned by the literature. Section three explains the methods used in the study, followed by results with answers to the research questions. Validity, reliability and structural model evaluation are discussed in this section. Section five articulates a discussion, and finally, section six elaborates the conclusion. Technology acceptance model (TAM) TAM is the most widely used theoretical model for explaining the acceptance of new technologies (Venkatesh, 2000), and was developed by Davis (1986) based on the Theory of Reasoned Action (TRA). According to TAM, Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) are the two basic antecedents of Attitudes toward Technology (A), Intention to Use Technology (IU) and, finally, Actual Usage (AU) of Technology (Choi & Chung, 2013). Over time, the TAM model has been modified twice, as TAM2 and TAM3 (Venkatesh & Bala, 2008). Rauniar, Rawski, Yang, and Johnson (2014) have developed a revised TAM for social media to explain social media acceptance by incorporating new constructs into the model, such as critical mass, capability, perceived playfulness and trustworthiness. Acceptance of dematerialized Facebook possessions The first three hypotheses were developed based on TAM in order to identify the antecedents of the use of dematerialized Facebook possessions. PEOU is the foremost factor in predicting PU in accepting personal computers (Igbaria, Zinatelli, Cragg, & Cavaye, 1997). A number of scholars have provided empirical evidence to support a positive relationship between PEOU and PU of technologies within different cultural contexts (Choi & Chung, 2013; Liu, 2010; Pinho & Soares, 2011; Qin, Kim, Hsu & Tan, 2011; Rauniar et al., 2014). Based on the associations established in TAM, coupled with the above empirical evidence, this study develops the following three hypotheses for Dematerialized Facebook Possessions Usage (DFPU). 
Extended self William James (1910), one of the first authors to write about the self (Epstein, 1973), held that we are the sum of our possessions. According to James (1910), an individual's body, psychic powers and other belongings such as family, clothes, land and house are part of his self (as cited in Belk, 1988). After that, a number of scholars contributed to the body of knowledge (Dixon & Street, 1975; Rochberg-Halton, 1984). However, it was Belk (1988) who applied the extended self in the field of consumer behavior (Cushing, 2012). Individuals acquire possessions in order to support a fragile sense of self, since they are mainly what they have and possess (Tuan, 1980). This indicates that possessions play a vital part in a consumer's life by creating identities and extending their selves. According to Solomon (2012), the extended self considers external objects as a part of individuals, and four levels of extended self are described: the individual level (you are what you wear), the family level (includes your house and furniture), the community level (includes your neighborhood and hometown) and the group level (includes your religion, flag, sports team, etc.). Dematerialized Facebook possessions usage and the extended self Technological development significantly modifies the extended-self concept, which was originally presented in 1988 by Belk, and allows consumers to extend the self as science fiction writers imagined 25 years ago (Sheth & Solomon, 2014). One of those modifications is the dematerialization of possessions in the digital world. Collections, pictures, letters, music, and greeting cards are transforming into dematerialized goods in the digital world (Siddiqui & Turley, 2006). Digital goods are playing a substantial part in consumers' everyday lives, not just supplanting material equivalents (e.g., eBooks, digital music) but also presenting new forms of possession (e.g., social networking profiles, virtual possessions within videogames) (Watkins & Molesworth, 2012). Lehdonvirta (2012) suggests that owners of digital goods consider them very real. Denegri-Knott, Watkins, and Wood (2012) highlighted that individuals ritually convert virtual goods into meaningful properties. Further, Lehdonvirta, Wilska, and Johnson (2009) pointed out that digital goods play social roles similar to their material counterparts. Facebook is a collection of dematerialized possessions (profile, timeline, friends etc.). These possessions have the features of material self-extending possessions, such as attachment, identity and fear of loss. Attachment means "caring about, being fond of and being miserable if the object of our affect is absent" (Turner & Turner, 2013, p. 1). Regardless of their immaterial nature and lack of legal ownership, individuals possess and form emotional attachments to digital goods (Watkins & Molesworth, 2012). A user becomes emotionally attached to social media, and hence, it becomes a part of the user's self-definition and representation in the digital world (Wang, Yeh, Yen, & Sandoya, 2016). Dalisay, Kushin, Yamamoto, Liu, and Buente (2016) found that minority college students demonstrated moderate levels of emotional attachment to social media sites. Facebook group members request "likes" and "shares" of things happening in those groups, and some users send requests to join groups they are interested in. In addition, users comment on and like their own photos on Facebook. 
All these behaviors illustrate the nature of the attachment to dematerialized Facebook possessions. In psychology, self-identity means the cognitive construct of the self that answers the question of who am I (Hogg, 2001). Identity comprises the unique characteristics communicated by a specific individual's presence (Dennen & Burner, 2017, p. 1). Social categories such as groups, relationships, and personal characteristics act as a part of self-identity, which supports individuals in defining themselves (Carter & Grover, 2015). In order to develop an online identity, one needs to be an active online user. Individuals who maintain a profile on online platforms and share words, images, and preferences through these profiles are forming online identities (Dennen & Burner, 2017). Communication of online identity through social media helps the user to create both social presence and emotional presence (Bozkurt & Tu, 2016). Undergraduates independently develop a social media presence and communicate their identity to interact with peers (Dennen & Burner, 2017). Individuals have now moved from "you are what you wear" to "you are what you post" (Sheth & Solomon, 2014, p. 126). This indicates that dematerialized Facebook possessions have the ability to create identities for their users. Fear of loss is also attached to dematerialized Facebook possessions. Facebook users mainly use password protection to protect their Facebook account from unauthorized access. From the company side, Facebook uses various security methods to protect its users from hackers. These pieces of evidence suggest that dematerialized Facebook possessions have self-extending features. Further, there is a tendency to treat people as possessions and extensions of self, similar to the way tools are used (Belk, 1988). On Facebook, users have thousands of friends; some friends are second-order friends (friends of friends) or further removed, whom they have sometimes never met and have no plans to meet (Clemons, 2009). This indicates that the people resource on Facebook helps users to extend their self. Additionally, Facebook is replacing many facets of materialized possessions with numerous features day by day. The trend of the selfie provides strong evidence for this transition. A decade ago, when people met favorite celebrities, they used an autograph book or other material to get the signature of the particular celebrity and showed it to others. Now autographs are being gradually replaced by selfies with celebrities, which are posted and commented on Facebook with likes and reacts. Tangible photo albums are also being saved as intangible albums on Facebook. Creative people once wrote short poems on pieces of paper to express themselves and their feelings (sadness, happiness), and now these appear in Facebook statuses. The timeline of Facebook plays the role of a diary for its users. All this evidence reveals that dematerialized Facebook possessions can extend the consumer self positively. Thus, this study proposes the following hypothesis for dematerialized Facebook possessions usage and extended self. Data collection and participants The sample comprised undergraduates from seven universities in Sri Lanka. Internet penetration was 47%, and social media penetration was 30%, in January 2020 (Digital 2020: Sri Lanka). According to the Networked Readiness Index 2020, Sri Lanka is the top leading country in the South Asian region (rank 83), indicating a feasible location for social media studies. 
Shaouf, Lü, and Li (2016) suggest that university students might be the most suitable sample for e-commerce related studies. In addition, the majority of Sri Lankan Facebook users are in the age group of 18 to 34 (Amarasinghe, 2011), which can be represented by undergraduates. A convenience sampling technique was used to select the sample. A self-administered questionnaire with closed-ended questions was used to collect the data. Content validity was ensured by a sound literature review and expert opinion (academics, industry researchers). A pilot survey was conducted with 10 students to test the questionnaire, and all the items were suitable for further use. The first section of the questionnaire includes questions adapted from previous studies to measure the main constructs, including PEOU (Rauniar et al., 2014), PU (Choi & Chung, 2013; Rauniar et al., 2014), IU (Choi & Chung, 2013; Rauniar et al., 2014), Facebook Usage (Smock et al., 2011) and Extended Self (Sivadas & Machleit, 1994). Respondents indicated their level of agreement on a scale from strongly agree to strongly disagree. The next section of the questionnaire consisted of demographic factors and information relating to Facebook usage. A total of 500 questionnaires were distributed among the students. Of those, 368 were received. However, 41 questionnaires were eliminated from the final analysis due to incomplete answers. Finally, 327 questionnaires were used in the analysis, indicating a 65.4% response rate, which is at a satisfactory level (Baruch, 1999). Sample description The sample consisted of 45.25% male and 54.75% female undergraduates. The majority (48.9%) of them have more than three years of experience with Facebook (Table 1). Facebook usage is about six hours per week among most students (39.1%). The largest group of students (30.88%) have 251-500 Facebook friends. Reliability and validity PLS-SEM was used with Smart-PLS 3.2.7 software to test the hypotheses and estimate path models involving latent variables, which were observed through multiple indicators. PLS-SEM was selected since the goal of the study was to predict key target constructs, and the study was an extension of an existing structural theory (Hair, Ringle, & Sarstedt, 2011). There was no critical issue with normality since all skewness and kurtosis values were between -1 and +1 (Sarstedt, Ringle, & Hair, 2017). As recommended by Hair, Hult, Ringle, and Sarstedt (2014), certain items were dropped from the analysis due to low outer loadings. Table 2 illustrates the outer loadings and indicator reliability values for all selected items. Composite Reliability (CR) and Cronbach's alpha can be used to assess internal consistency reliability. Both CR and Cronbach's alpha lie between 0 and 1, and higher values represent higher levels of internal consistency reliability (Hair et al., 2014; Gliem & Gliem, 2003). All the reliability indicators are at a satisfactory level (Table 3). Convergent validity of a study can be ensured by the average variance extracted (AVE) (Fornell & Larcker, 1981). AVE values greater than 0.5 can be considered an adequate level of convergent validity (Bagozzi & Yi, 1988). There is no critical issue with convergent validity (Table 3). The Fornell-Larcker criterion (1981) "compares the square root of the AVE values with the latent variable correlations" (Hair et al., 2014, p. 105) and is used to assess discriminant validity. 
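Because the reliability and validity criteria just described are simple closed-form quantities, a brief illustration may be useful. The following Python sketch uses made-up outer loadings (not the study's data or code) to compute indicator reliability, composite reliability, AVE, and the square root of AVE that enters the Fornell-Larcker comparison.

```python
# Illustrative sketch with hypothetical outer loadings of one reflective construct;
# not the study's data or code.
import numpy as np

loadings = np.array([0.78, 0.81, 0.74, 0.69])          # hypothetical standardized loadings

indicator_reliability = loadings ** 2                   # rule of thumb: > 0.5 (loading > 0.708)
ave = float(np.mean(loadings ** 2))                     # convergent validity: AVE > 0.5
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1.0 - loadings ** 2).sum())  # composite reliability

print("indicator reliability:", np.round(indicator_reliability, 3))
print(f"AVE = {ave:.3f}, sqrt(AVE) = {ave ** 0.5:.3f}, CR = {cr:.3f}")
# Fornell-Larcker: sqrt(AVE) of the construct should exceed its correlations with all other constructs.
```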
Table 4 illustrates that discriminant validity is met in this research, since the square root of the AVE of each latent variable is greater than the latent variable correlations. Structural model evaluation and hypotheses testing The first step of the structural model evaluation is to test for multicollinearity among the independent variables. Sarstedt et al. (2017) suggest that VIF values above five indicate collinearity among the predictor variables. According to the Smart-PLS report, all VIF values were below five, indicating that there is no multicollinearity issue in the current study. The next step is to assess the significance and relevance of structural model relationships. Table 5 illustrates the path coefficients of the structural model and significance testing results. All paths were statistically significant, supporting all four hypotheses. The R² value expresses the explained variance of an endogenous construct and lies between 0 and 1, with higher values indicating greater predictive accuracy (Sarstedt et al., 2017). R² values of 0.75, 0.50, or 0.25 for endogenous latent variables can be considered substantial, moderate, or weak, respectively. However, an R² of 0.20 is considered high in some study disciplines, for instance in consumer behavior (Hair et al., 2011, p. 145). Given that the current study is about user behavior in a Facebook context, the results indicate that the model explains a substantial part of the variance in the endogenous variables PU, IU and ES, with R² values of 0.244, 0.139 and 0.170, respectively; the R² for DFPU (0.139) was comparatively low (Table 6). f² values of 0.02, 0.15 and 0.35 are considered small, medium, and large effects, respectively (Sarstedt et al., 2017). Table 6 illustrates that all f² values are at a medium level. Q² values above zero indicate the model's predictive accuracy (Sarstedt et al., 2017). As per Table 6, the Q² values provide sufficient evidence of the model's predictive accuracy. Discussion This study examines antecedents of the use of dematerialized Facebook possessions and the impact of dematerialized Facebook possessions on the consumer extended self. Table 7 summarizes the hypotheses testing results and Figure 1 depicts the final structural model. First, the study assumed that PEOU of Facebook was positively related to PU of Facebook, which was supported by the empirical evidence. This finding is consistent with previous studies (Liu, 2010; Qin et al., 2011; Rauniar et al., 2014). The second hypothesis was that PU of Facebook is positively related to IU of Facebook. The empirical evidence provides sufficient support for this relationship, in line with the previous literature (Choi & Chung, 2013; Davis, 1989; Liu, 2010; Rauniar et al., 2014). Then, the study assumed that IU of Facebook was positively related to DFPU, which was accepted at the 95% confidence level. This finding is in agreement with previous studies related to actual technology usage (Liu, 2010; Rauniar et al., 2014). The fourth hypothesis, the original contribution of the current study, was accepted with sufficient evidence indicating that DFPU can extend the consumer self positively. The current study contributes to the social media and consumer behavior literature. This is the first study which considers dematerialized Facebook possessions as a predictor of the consumer extended self. The adapted indicators were validated through proper validity and reliability techniques, providing an opportunity to replicate them in future studies. This is a significant contribution to social media and consumer behavior theory and to future studies. 
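Since the f² effect-size criterion mentioned above is a simple closed-form expression, a short illustration may help; the numbers below are hypothetical and do not come from the study's tables.

```python
# Illustrative sketch (hypothetical values, not the study's results): the f-squared effect
# size used in PLS-SEM, f2 = (R2_included - R2_excluded) / (1 - R2_included),
# interpreted against the 0.02 / 0.15 / 0.35 thresholds quoted above.
def f_squared(r2_included: float, r2_excluded: float) -> float:
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Example: suppose R2 of PU is 0.244 with PEOU in the model and 0.10 without it (assumed value).
effect = f_squared(0.244, 0.10)
label = "large" if effect >= 0.35 else "medium" if effect >= 0.15 else "small" if effect >= 0.02 else "negligible"
print(f"f^2 = {effect:.3f} ({label})")
```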
The first three hypotheses were related to TAM, and the study reconfirmed the original TAM relationships, enabling the use of TAM in the Facebook context. With the digital revolution, most material possessions (greeting cards, gifts, and picture memories) are transforming into digital goods. However, the question that arose was whether consumers accept these dematerialized possessions as self-extending goods. This study contributes to the existing literature by providing an answer to this question. Findings suggest that dematerialized possessions can extend the consumer self positively. Digital marketers, SNS companies and investors will benefit from the current findings. According to Cushing (2012), compared to older consumers, younger consumers consider digital possessions a part of their extended self. The current study confirms, with a sample of undergraduates, that young consumers consider dematerialized Facebook possessions to have self-extending features. These young consumers will become mass consumers in the future, and the coming generation will be even closer to digital culture. As such, digital platforms are essential for communicating with customers in the future. Digital marketers can use the findings of this study to develop marketing strategies for their companies. In the offline world, consumers need to purchase certain products to reflect their identities. However, in the digital world, they have many options to show their commitment to brands, such as online brand communities, fan pages etc. Even though consumers cannot buy some brands (self-extending brands), they can show others that they have an association with brands by joining these brand communities or simply liking a page. Further, marketers can arrange more online gathering forums using Social Networking Sites (SNS). These online forums create a good platform for two-way communication. Moreover, marketers can introduce online brand symbols (gold color symbol, silver color symbol) based on consumer loyalty. Users can then use these symbols to show their identities in online forums. On Facebook, consumers can tag brands in their profile pictures. Since selfies are becoming a popular trend, brand ambassadors (celebrities) can upload selfies with tagged brands. Hence, their fans will see those brands frequently and will follow the same trend. Advertising income is one of the main revenue sources of many SNS companies, and it depends on the number of members. Since dematerialized possessions can extend the consumer self, SNS companies can introduce more attractive features (dematerialized possessions) in their SNS to attract more users. The study was conducted in an emerging economy. Thus, the findings will help investors to make investment decisions in such economies. For instance, they can invest more in digital goods to be sold in emerging economies, since consumers consider dematerialized possessions as self-extending goods. Conclusion This study was designed to achieve two main research objectives. The first was to identify antecedents of the use of dematerialized Facebook possessions. The theoretical foundation of TAM was used to accomplish this objective, and PEOU, PU and IU were found to be the predictors of DFPU. As such, the original TAM relationships were reconfirmed in the Facebook context. Second, the study endeavored to examine how dematerialized Facebook possessions influence the extended self. Findings confirm that dematerialized Facebook possessions extend the consumer self positively. 
These findings contribute to the consumer behavior and social media literature and allow practitioners to see consumer behavior patterns in the Facebook world from a different perspective. Although this study provides useful insights into consumer behavior and Facebook usage, it is still subject to some limitations. Though Facebook is a global phenomenon, it is constrained by local conditions such as culture (Wijesundara, 2014). As such, future researchers can use this model to understand the consumer extended self in other cultural settings. Further, there are many SNSs, such as LinkedIn, Twitter, WeChat and Facebook; however, this study focused only on Facebook. In order to gain a deeper understanding, it would be better to study this model with other SNSs. In this study, only young adults were recruited as respondents, although other demographic groups are also growing fast on Facebook. Future studies should take more age groups into account, since age can serve as an important factor in Facebook usage and the extended self.
5,217.8
2020-06-30T00:00:00.000
[ "Business", "Computer Science" ]
Coherent Ising machines -- Quantum optics and neural network perspectives A coherent Ising machine (CIM) is a network of optical parametric oscillators (OPOs), in which the strongest collective mode of oscillation at well above threshold corresponds to an optimum solution of a given Ising problem. When a pump rate or network coupling rate is increased from below to above threshold, however, the smallest eigenvectors of the Ising coupling matrix [J_ij] appear near threshold and impede the machine from relaxing to true ground states. Two complementary approaches to attack this problem are described here. One approach is to utilize the squeezed/anti-squeezed vacuum noise of OPOs below threshold to produce coherent spreading over numerous local minima via quantum noise correlation, which could enable the machine to access very good solutions above threshold. The other approach is to implement a real-time error correction feedback loop so that the machine migrates from one local minimum to another during an explorative search for ground states. Finally, a set of qualitative analogies connecting the CIM and traditional computer science techniques are pointed out. In particular, belief propagation and survey propagation used in combinatorial optimization are touched upon. Introduction Recently, various heuristics and hardware platforms have been proposed and demonstrated to solve hard combinatorial or continuous optimization problems, in which the cost function to be minimized, such as an Ising or XY Hamiltonian, is mapped to the energy landscape of classical spins, [1] [2] [3] quantum spins, [4] [5] solid state devices [6][7] [8] or neural networks. [9] [10] Convergence to a ground state is assured for a slow enough decrease of the temperature. [11] An alternative approach based on networks of optical parametric oscillators (OPOs) [12][13] [14][15] [16][17] [18] and Bose-Einstein condensates [19] [20] has also been actively pursued, in which the target function is mapped to a loss landscape. Intuitively, by increasing the gain of such an open-dissipative network slowly enough by ramping an external pump source, a lowest-loss ground state is expected to emerge as a single oscillation/condensation mode. [13] [21] In practice, ramping the gain of such a system results in a complex series of bifurcations that may guide or divert evolution towards optimal solution states. One of the unique theoretical advantages of the second approach, for instance in a coherent Ising machine (CIM), [12][13] [14][15] [16] is that quantum noise correlation formed among OPOs below oscillation threshold could in principle facilitate a quantum parallel search across multiple regions of phase space. [22] Another unique advantage is that following the oscillation-threshold transition, exponential amplification of the amplitude of a selected ground state is realized on a relatively short time scale of the order of a photon lifetime. In a non-dissipative degenerate parametric oscillator, the two stable states above the bifurcation point co-exist as a linear superposition state. [23] [24] On the other hand, the network of dissipative OPOs [13] [14][15] [16] [17] changes its character from a quantum analog device below threshold to a classical digital device above threshold. 
Such quantum-to-classical crossover behavior of the CIM guarantees a robust classical output as a computational result, which is in sharp contrast to a standard quantum computer based on the linear amplitude amplification realized by the Grover algorithm and projective measurement. [25] A CIM based on coupled OPOs, however, has one serious drawback as an engine for solving combinatorial optimization problems: mapping of a cost function to the network loss landscape often fails due to the fundamentally analog nature of the constituent spins, i.e., the possibility for constituent OPOs to oscillate with unequal amplitudes. This problem is particularly serious for a frustrated spin model. The network may spontaneously find an excited state of the target Hamiltonian with lower effective loss than a true ground state by self-adjusting oscillator amplitudes. [13] An oscillator configuration with frustration, and thus higher loss, may retain only a small probability amplitude, while an oscillator configuration with no frustration, and thus smaller loss, acquires a large probability amplitude. In this way, an excited state can achieve a smaller overall loss than a ground state. Recently, the use of an error detection and correction feedback loop has been proposed to suppress this amplitude heterogeneity problem. [26] The proposed system has a recurrent neural network configuration with asymmetric weights ($J_{ij} \neq J_{ji}$), so that it is no longer a simple gradient-descent system. The new machine can escape from a local minimum via a diverging error correction field and migrate from one local minimum to another. The ground state can be identified during such a random exploration by the machine. In this letter, we present several complementary perspectives on this novel computing machine, based on diverse, interdisciplinary viewpoints spanning quantum optics, neural networks and message passing. Along the way we will touch upon connections between the CIM and foundational concepts spanning the fields of statistical physics, mathematics, and computer science, including dynamical systems theory, bifurcation theory, chaos, spin glasses, belief propagation and survey propagation. We hope the bridges we build in this article between such diverse fields will provide the inspiration for new directions of interdisciplinary research that can benefit from the cross-pollination of ideas across multifaceted classical, quantum and neural approaches to combinatorial optimization. Optimization dynamics in continuous variable space CIM studies today could well be characterized as experimentally-driven computer science, much like contemporary deep learning research and in contrast to the current scenario of mainstream quantum computing. Large-scale measurement-feedback-coupled coherent Ising machine (MFB-CIM) prototypes constructed by NTT Basic Research Laboratories [15] are reaching intriguing levels of computational performance that, in a fundamental theoretical sense, we do not really understand. While we can thoroughly analyze some quantum-optical aspects of CIM component device behavior in the small size regime, [27] [28] [29] we lack a crisp understanding of how the physical dynamics of large CIMs relate to the computational complexity of combinatorial optimization. Promising experimental benchmarking results [30] are thus driving theoretical studies aimed at better elucidating fundamental operating principles of the CIM architecture and at enabling confident predictions of future scaling potential. 
We thus face complementary obstacles to those of mainstream quantum computing, in which we have long had theoretical analyses pointing to exponential speedups while even small-scale implementations have required sustained laboratory efforts over several decades. What is the effective search mechanism of a large-scale CIM? Are quantum effects decisive for the performance of current and near-term MFB-CIM prototypes, and if not, could existing architectures and algorithms be generalized to realize quantum performance enhancements? Can we relate exponential gain (as understood from a quantum optics perspective) to features of the phase portraits of CIMs viewed as dynamical systems, and thereby rationalize its role in facilitating rapid evolution towards states with low Ising energy? Can we rationally design better strategies for varying the pump strength? Generally speaking, the CIM may be viewed as an approach to mapping combinatorial (discrete variable) optimization problems into physical dynamics on a continuous variable space, in which the dynamics can furthermore be modulated to evolve/bifurcate the phase portrait during an individual optimization trajectory. The overarching problem of CIM algorithm design could thus be posed as choosing initial conditions for the phase-space variables together with a modulation scheme for the dynamics, such that we maximize the probability and minimize the time required to converge to states from which we can infer very good solutions to a combinatorial optimization problem instance encoded in parameters of the dynamics. While our initialization and modulation scheme obviously cannot require prior knowledge of what these very good solutions are, it should be admissible to consider strategies that depend upon inexpensive structural analyses of a given problem instance and/or real-time feedback during dynamic optimization. The structure of near-term-feasible CIM hardware places constraints on the practicable set of algorithms, while limits on our capacity to prove theorems about such complex dynamical scenarios generally restrict us to the development of heuristics rather than algorithms with performance guarantees. We may note in passing that in addition to lifting combinatorial problems into continuous variable spaces, analog physics-based engines such as CIMs generally also embed them in larger model spaces that can be traversed in real time. The canonical CIM algorithm implicitly transitions from a linear solver to a soft-spin Ising model, and a recently-developed generalized CIM algorithm with feedback control can access a regime of fixed-amplitude Ising dynamics as well. [26] Given the central role of the optical parametric amplifier (OPA) in the CIM architecture, it stands to reason that it could be possible to transition smoothly between XY-type and Ising-type models by adjusting hardware parameters that tune the OPA between non-degenerate and degenerate operation. [31] Analog physics-based engines thus motivate a broader study of relationships among the landscapes of Ising-type optimization problems with fixed coupling coefficients but different variable types, which could further help to inform the development of generalized CIM algorithms. 
The dynamics of a classical, noiseless CIM can be modeled using coupled ordinary differential equations (ODEs): $$\frac{dx_i}{dt} = a\,x_i - x_i^3 - \epsilon \sum_{j} J_{ij}\,x_j,$$ where $x_i$ is the (quadrature) amplitude of the $i$-th OPO mode (spin), $J_{ij}$ are the coupling coefficients defining an Ising optimization problem of interest (here we will assume $J_{ii} = 0$), $\epsilon$ sets the overall coupling strength, and $a$ is a gain-loss parameter corresponding to the difference between the CIM's parametric (OPA) gain and its round-trip (passive linear) optical losses. We note that similar equations appear in the neuroscience literature for modeling neural networks (e.g., [32]). In the absence of couplings among the spins ($\epsilon \to 0$) each OPO mode independently exhibits a pitchfork bifurcation as the gain-loss parameter crosses through zero (increasing from negative to positive values), corresponding to the usual OPO "lasing" transition. With non-zero couplings, however, the bifurcation set of the model is much more complicated. In the standard CIM algorithm the matrix $J$ is chosen to be (real) symmetric, although current hardware architectures would easily permit asymmetric implementations. With symmetric $J$ it is possible to view the overall CIM dynamics as gradient descent in a landscape determined jointly by the individual OPO terms and the Ising potential energy. Following recent practice in related fields, [32] [33] we may assess generic behavior of the above model for large problem size (large number of spins, $N$) by treating $J$ as a random matrix whose elements are drawn i.i.d. from a zero-mean distribution, as in the Ising spin glass model. [34] The origin $x = 0$ is clearly a fixed point of the dynamics for all parameter values, and in the loss-dominated regime ($a$ negative and below the smallest eigenvalue of the scaled coupling matrix $\epsilon J$) it is the unique stable fixed point. Assuming $J$ is symmetric as implemented, the first bifurcation as $a$ is increased (pump power is increased) necessarily occurs as $a$ crosses the smallest eigenvalue of $\epsilon J$ and results in destabilization of the origin, with a pair of new local minima emerging along positive and negative directions aligned with the eigenvector of $J$ corresponding to this lowest eigenvalue. If we assume that the CIM is initialized at the origin (all OPO modes in vacuum) and the pump is increased gradually from zero, we may expect the spin-amplitudes to adiabatically follow this bifurcation and thus take values such that the $x_i$ are proportional to the smallest eigenvector of $J$ just after $a$ crosses the smallest eigenvalue. The sign structure of this eigenvector is known to be a simple (although not necessarily very good) heuristic for a low-energy solution of the corresponding Ising optimization problem. For example, for the SK model, the spin configuration obtained from rounding the smallest eigenvector of $J$ is thought to have a 16% higher energy density (energy per spin) than that of the ground state spin configuration. [35] In the opposite regime of high pump amplitude, $a \gg \epsilon \lVert J \rVert$, we can infer the existence of a set of fixed points determined by the independent OPO dynamics (ignoring the coupling terms), with each of the $x_i$ assuming one of three possible values $\{0, \pm\sqrt{a}\}$. 
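To make the behavior of this model concrete, the short numerical sketch below integrates the ODE as written above for a random SK-type instance while slowly ramping the gain-loss parameter. It is illustrative only: the pump schedule, coupling strength $\epsilon$, and problem instance are assumptions rather than values from the text, and the Ising energy is taken as $H(\sigma) = \tfrac{1}{2}\sigma^{\mathsf T} J \sigma$, consistent with the sign convention of the equation above.

```python
# Minimal numerical sketch (not the authors' code) of the classical, noiseless CIM model:
#   dx_i/dt = a*x_i - x_i**3 - eps * sum_j J_ij x_j,
# with the gain-loss parameter `a` ramped slowly from below to above threshold.
# The Ising cost used here is H(s) = 0.5 * s^T J s with s = sign(x).
import numpy as np

rng = np.random.default_rng(0)
N = 100
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
J = (J + J.T) / 2                      # symmetric SK-like couplings (assumption)
np.fill_diagonal(J, 0.0)

eps, dt, steps = 0.1, 0.01, 20000      # assumed coupling strength and pump ramp
x = 1e-3 * rng.normal(size=N)          # start near the origin (all OPOs near vacuum)

for t in range(steps):
    a = -1.0 + 2.0 * t / steps         # slow ramp of the gain-loss parameter from -1 to +1
    x += dt * (a * x - x**3 - eps * (J @ x))

s = np.sign(x)
H = 0.5 * s @ J @ s                    # Ising energy of the rounded spin configuration
print(f"Ising energy per spin: {H / N:.3f}")
```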
The leading-order effect of the coupling terms can then be considered perturbatively, leading to the conclusion [36] that the subset of fixed points without any zero values among the $x_i$ are local minima whose squared-radius (squared distance from the origin) is larger the lower the Ising energy of the corresponding sign configuration. It follows that the global minimum spin configuration for the Ising problem instance encoded by $J$ can be inferred from the sign structure of the local minimum lying at greatest distance from the origin, and that very good solutions can similarly be inferred from local minima at large squared-radius. We may see in this some validation of the foundational physical intuition that in a network of OPOs coupled according to a set of coefficients $J_{ij}$, the "strongest" collective mode of oscillation should correspond somehow with an optimum solution of an Ising problem defined by these $J_{ij}$. A big picture thus emerges in which initialization at the origin (all OPOs in vacuum) and adiabatic increase of the pump amplitude induces a transition between a low-pump regime in which the spin-amplitudes assume a sign structure determined by the minimum eigenvector of $J$, and a high-pump regime in which good Ising solutions are encoded in the sign structures of minima sitting at greatest distance from the origin. Apparently, complex things happen in the intermediate regime. Qualitatively speaking, the gradual increase of $a$ in the above equations of motion induces a sequence of bifurcations that modify the phase portrait in which the CIM state evolves. In simple cases, the state variables could follow an "adiabatic trajectory" that connects the origin (at zero pump amplitude) to a fixed point in the high-pump regime (asymptotic in large $a$) whose sign structure yields a heuristic solution to the Ising optimization. In general, one observes that such adiabatic trajectories include sign flips relative to the first-bifurcated state proportional to the smallest eigenvector of $J$. In a non-negligible fraction of cases, as revealed by numerical characterization of the bifurcation set for randomly-generated $J$ with $N \sim 10^2$, the adiabatic trajectory starting from the origin is at some point interrupted by a subcritical bifurcation that destabilizes the local minimum being followed without creating any new local minima in the immediate neighborhood. (Indeed, some period of evolution along an unstable manifold would seem to be required for the observation of a lasing transition with exponential gain.) For such problem instances, a fiduciary evolution of the CIM state cannot be directly inferred from computation of fixed-point trajectories as a function of $a$. Generally speaking, in the "near-threshold" regime with $a \sim 0$ we may expect the CIM to exhibit "glassy" dynamics with pervasive marginally-stable local minima, and as a consequence the actual solution trajectory followed in a real experimental run could depend strongly on exogenous factors such as technical noise and instabilities. Hence it is not clear whether we should expect the type of adiabatic trajectory described above to occur commonly, in practice. Indeed, fluctuations could potentially induce accidental asymmetries in the implementation of the coupling term, which could in turn induce chaotic transients that significantly affect the optimization dynamics. We note that the existence of a chaotic phase has been predicted [32] on the basis of mean-field theory (in the sense of statistical mechanics) for a model similar to the CIM model considered here, but with a fully random coupling matrix without symmetry constraint. 
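The relation between squared-radius and Ising energy stated above can be made explicit with a short perturbative estimate. The following derivation is a sketch consistent with the ODE written earlier and the convention $H(\sigma) = \tfrac{1}{2}\sigma^{\mathsf T} J \sigma$; it is not reproduced from ref. [36]. Writing $x_i = \sigma_i\sqrt{a} + \delta_i$ with $|\delta_i| \ll \sqrt{a}$ and imposing the fixed-point condition,
\[
0 = a x_i - x_i^3 - \epsilon\sum_j J_{ij}x_j
\;\Rightarrow\;
\delta_i \approx -\frac{\epsilon}{2\sqrt{a}}\sum_j J_{ij}\,\sigma_j ,
\]
\[
\sum_i x_i^2 \approx N a + 2\sqrt{a}\sum_i \sigma_i\delta_i
= N a - \epsilon\sum_{i,j} J_{ij}\,\sigma_i\sigma_j
= N a - 2\,\epsilon\, H(\sigma),
\]
so, to leading order, sign configurations with lower Ising energy correspond to fixed points lying at larger squared-radius.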
Characterization of the phase diagram for near-symmetric $J$ (nominally symmetric but with small asymmetric perturbations) seems feasible and is currently being studied. [37] It is tempting to ask whether a glassy phase portrait for the classical ODE model in the near-threshold regime could correspond in some way with the non-classical behavior observed in full quantum simulations of ODL-CIM models near threshold, as reviewed in the next section. It seems natural to conjecture that quantum uncertainties associated with anti-squeezing below threshold could induce coherent spreading over a glassy landscape with numerous marginal minima, with an associated buildup of quantum correlation among spin-amplitudes. The above picture calls attention to a need to understand the topological nature of the phase portrait and its evolution as the pump amplitude $a$ is varied. Indeed, we may restate in some sense the abstract formulation of the CIM algorithm design problem: Can we find a strategy for modulating the CIM dynamics in a way that enables us to predict (without prior knowledge of actual solutions) how to initialize the spin-amplitudes such that they are guided into the basin of attraction of the largest-radius minimum in the high-pump regime? Or into one of the basins of attraction of a class of acceptably large-radius minima (corresponding to very good solutions)? Of course, an additional auxiliary design goal will be to guide the CIM state evolution in such a way that the asymptotic sign structure is reached quickly. In the near/below-threshold regime, we may anticipate at least two general features of the phase portrait that could present obstacles to rapid equilibration. One would be the afore-mentioned prevalence of marginal local minima (having eigenvalues with very small or vanishing real part), but another would be a prevalence of low-index saddle points. Trajectories within either type of phase portrait could display intermittent dynamics that impede gradient descent towards states of lower energy. Focusing on the below-threshold regime in which the Ising-interaction energy term may still dominate the phase portrait topology, we may infer from works such as [38] that for large $N$ with $J$ symmetric random Gaussian, fixed points lying well above the minimum energy should dominantly be saddles, and there should be a strong correlation between the energy of a fixed point and its index (fraction of unstable eigenvalues). As a gradient-descent trajectory approaches phase space regions of lower and lower energy, results from [33] [38] suggest that the rate of descent could become limited by escape times from low-index saddles whose eigenvalues are not necessarily small, but whose local unstable manifold may have dimension small relative to $N$. One wonders whether there might be CIM dynamical regimes in which the gradient-descent trajectory takes on the character of an "instanton cascade" that visits (neighborhoods of) a sequence of saddle points with decreasing index, [39] leading finally to a local minimum at low energy. If such dynamics actually occurs in relevant operating regimes for the CIM, we may speculate as to whether the overall gradient descent process including stochastic driving terms (caused by classical-technical or quantum noise) could reasonably be abstracted as probability (or quantum probability-amplitude) flow on a graph. Here the nodes of the graph would represent fixed points and the edges would represent heteroclinic orbits, with the precise structure of the graph of course determined by $J$ and $a$. 
If the graph for a given problem instance exhibits loops, we could ask whether interference effects might lead to different transport rates for quantum versus classical flows (as in quantum random walks [40]). Such effects, if they exist, would merit careful study. Below threshold, each OPO pulse is in an anti-squeezed vacuum state, which can be interpreted as a linear superposition (not a statistical mixture) of generalized coordinate eigenstates, $\sum_x c_x\,|x\rangle$, if the decoherence effect of linear cavity loss is neglected. In fact, quantum coherence between different $|x\rangle$ eigenstates is very robust against small linear loss. [23] Figure 1(b) shows the quantum noise trajectory in the $\langle \Delta\hat{X}^2\rangle$-$\langle \Delta\hat{P}^2\rangle$ phase space. The uncertainty product stays close to the Heisenberg limit, with a very small excess factor of less than 30%, during the entire computation process, which suggests the purity of an OPO state is well maintained. [41] Therefore, the above-mentioned positive/negative noise correlation between two OPO pulses, depending on ferromagnetic/anti-ferromagnetic coupling, implements a sort of quantum parallel search. That is, if the two OPO pulses couple ferromagnetically, the formed positive quantum noise correlation prefers the in-phase states $|0\rangle_i|0\rangle_j$ and $|\pi\rangle_i|\pi\rangle_j$, while if two OPO pulses couple anti-ferromagnetically, the formed negative quantum noise correlation prefers the anti-phase states $|0\rangle_i|\pi\rangle_j$ and $|\pi\rangle_i|0\rangle_j$. Entanglement and quantum discord between two OPO pulses can be computed to demonstrate such quantum noise correlations. [27][28] [29] Figure 1(c) and (d) show the degrees of entanglement and quantum discord versus normalized pump rate p for an optical delay line coupled coherent Ising machine (ODL-CIM) with N = 2 pulses. [29] In Fig. 1(c), it is shown that the Duan-Giedke-Cirac-Zoller entanglement criterion [42] is satisfied at all pump rates. In Fig. 1(d), it is shown that the Adesso-Datta quantum discord criterion [43] is also satisfied at all pump rates. [29] Both results on entanglement and quantum discord demonstrate maximal quantum noise correlation formed at the threshold pump rate p = 1. On the other hand, if a (fictitious) mean field without quantum noise is assumed to couple two OPO pulses, there exists no quantum correlation below or above threshold, as shown by the open circles in Fig. 1(d). FIG. 1. (a) An optical delay line couples two OPO pulses in the ODL-CIM. [14] (b) Variances $\langle \Delta\hat{X}^2\rangle$ and $\langle \Delta\hat{P}^2\rangle$ in an MFB-CIM; the uncertainty product deviates from the Heisenberg limit by less than 30%. [41] (c) Duan-Giedke-Cirac-Zoller inseparability criterion vs. normalized pump rate p; numerical simulations are performed with the positive-P, truncated-Wigner and truncated-Husimi stochastic differential equations (SDE), and the dashed line represents an analytical solution. [29] (d) Adesso-Datta quantum discord criterion (D > 0) vs. normalized pump rate p; the above three SDEs and the analytical result predict identical quantum discord, while the mean-field coupling approximation (MF-A) predicts no quantum discord. [29] Note that vacuum noise incident from the open port of XBS (see Fig. 1(a)) creates an opposite noise correlation between the internal and external OPO pulses, so that it always degrades the preferred quantum noise correlation among the two OPO pulses after IBS. Thus, squeezing the vacuum noise at the open port of XBS is expected to improve the quantum search performance of an ODL-CIM, which is indeed confirmed in numerical simulation. 
[28] The second generation of CIM demonstrated in 2016 employs a measurement-feedback circuit to couple all-to-all the N OPO pulses (see Fig. 1 of [16]). The (quadrature) amplitude of a reflected OPO pulse j after XBS is measured by an optical homodyne detector, and the measurement result (inferred amplitude) $\tilde{x}_j$ is multiplied by the Ising coupling coefficient $J_{ij}$ and summed over all j pulses in electronic digital circuitry, which produces an overall feedback signal $\sum_j J_{ij}\tilde{x}_j$ for the i-th internal OPO pulse. This analog electrical signal is imposed on the amplitude of a coherent optical feedback signal, which is injected into the target OPO pulse by IBS. In this MFB-CIM operating below threshold, if a homodyne measurement result $\tilde{x}_j$ is positive and the incident vacuum noise from the open port of XBS is negligible, the average amplitude of the internal OPO pulse j is shifted (jumped) in the positive direction by the projection property of such an indirect quantum measurement, [44] as shown in Fig. 2. Depending on the value of the feedback signal $\sum_j J_{ij}\tilde{x}_j$, we can introduce either a positive or a negative displacement of the center position of the target OPO pulse i. In this way, depending on the sign of $J_{ij}$, we can implement either positive correlation or negative correlation between the two average amplitudes $\langle x_i\rangle$ and $\langle x_j\rangle$ for ferromagnetic or anti-ferromagnetic coupling, respectively. Note that an MFB-CIM does not produce entanglement among OPO pulses but generates quantum discord if the density operator is defined as an ensemble over many measurement records. [45] A normalized correlation function $C = \langle \Delta\hat{X}_1 \Delta\hat{X}_2\rangle \big/ \sqrt{\langle \Delta\hat{X}_1^2\rangle\langle \Delta\hat{X}_2^2\rangle}$ is an appropriate metric for quantifying such measurement-feedback induced search performance, the degree of which is shown to govern the final success probability of the MFB-CIM more directly than the quantum discord. In general, an MFB-CIM has a larger normalized correlation function and higher success probability than an ODL-CIM. [45] FIG. 2. Formation of a ferromagnetic correlation between two OPO pulses in the MFB-CIM. [15] [16] This example illustrates the noise distributions of the two OPO pulses when the Ising coupling is ferromagnetic ($J_{ij} > 0$) and the measurement result for the j-th pulse is $\tilde{x}_j > 0$. In both the ODL-CIM and the MFB-CIM, anti-squeezed noise below threshold makes it possible to search for a lowest-loss ground state as well as low-loss excited states before the OPO network reaches threshold. The numerical simulation result shown in Fig. 3 demonstrates the three-step computation of the CIM. [28] We study an N = 16 one-dimensional lattice with nearest-neighbor anti-ferromagnetic coupling and a periodic boundary condition ($\sigma_1 = \sigma_{17}$), for which the two degenerate ground states are $|0\rangle_1 |\pi\rangle_2 \cdots |0\rangle_{15} |\pi\rangle_{16}$ and $|\pi\rangle_1 |0\rangle_2 \cdots |\pi\rangle_{15} |0\rangle_{16}$. We assume that the vacuum noise incident from the open port of XBS is squeezed by 10 dB in the ODL-CIM. When the external pump rate is linearly increased from below to above threshold, the probability of finding the two degenerate ground states is increased by two orders of magnitude above the initial success probability of a random guess, which is $1/2^{16} \sim 10^{-5}$. This enhanced success probability stems from the formation of quantum noise correlation among the 16 OPO pulses below threshold. The probability of finding high-loss excited states, which are not shown in Fig. 3, is decreased to below the initial value. This "quantum preparation" is rewarded at the threshold bifurcation point. 
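The geometry of the measurement-feedback loop described above (measure, compute the weighted sum electronically, inject) can be caricatured classically. The sketch below is purely illustrative: it ignores the quantum state, squeezing and projection dynamics, and the noise level, gains and the ring-coupling instance are assumptions rather than experimental parameters.

```python
# Highly simplified classical sketch (illustrative only) of the MFB-CIM feedback loop:
# homodyne-like measurement with additive noise, FPGA-style feedback sum_j J_ij * x_tilde_j,
# and injection into each internal pulse.
import numpy as np

rng = np.random.default_rng(1)
N = 16
# Nearest-neighbour ring; in this sketch's convention J > 0 is ferromagnetic,
# so the antiferromagnetic ring discussed above is encoded with J = -1 on neighbours.
J = np.zeros((N, N))
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = -1.0

x = np.zeros(N)                       # internal pulse amplitudes (mean fields)
p, dt, fb_gain, noise = 1.1, 0.05, 0.1, 0.1   # assumed pump, step, feedback gain, meas. noise

for _ in range(2000):                 # round trips
    x_tilde = x + noise * rng.normal(size=N)      # measured (inferred) amplitudes
    feedback = fb_gain * (J @ x_tilde)            # electronically computed feedback signal
    x += dt * ((p - 1.0) * x - x**3 + feedback)   # OPO gain/saturation + injected feedback

print("spin configuration:", np.sign(x).astype(int))   # expect an alternating pattern
```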
When the pump rate reaches threshold, one of the ground states ($|0\rangle_1 |\pi\rangle_2 \cdots |\pi\rangle_{16}$ in the case of Fig. 3) is selected as a single oscillation mode, while the other ground state ($|\pi\rangle_1 |0\rangle_2 \cdots |0\rangle_{16}$) as well as all excited states are not selected. This is not a standard single-oscillator bifurcation but a collective phenomenon among the N = 16 OPO pulses due to the existence of anti-ferromagnetic noise correlation. Above threshold, the probability of finding the selected ground state is exponentially increased, while the probabilities of finding the unselected ground state as well as all excited states are exponentially suppressed on a time scale of the order of the signal photon lifetime. Such exponential amplification and attenuation of the probabilities is a unique advantage of a gain-dissipative computing machine, which is absent in a standard quantum computing system. For example, the Grover search algorithm utilizes a unitary rotation of state vectors and can amplify the target state amplitude only linearly. [25] Note that if we stop increasing the pump rate just above threshold, the probability of finding either one of the ground states is less than 1%. Pitchfork bifurcation followed by exponential amplitude amplification plays a crucial role in realizing a high success probability in a short time. For hard instances of combinatorial optimization problems, in which excited states form numerous local minima, the above quantum search alone is not sufficient to guarantee a high success probability. [30] In the next section, a new CIM with error correction feedback is introduced to cope with such hard instances. [26] An alternative approach has been recently proposed. [41] If the pump rate is held just below threshold (corresponding to ~ 60 in Fig. 3), the lowest-loss ground states and low-loss excited states (fine solutions) have enhanced probabilities while high-loss excited states have suppressed probabilities. By using an MFB-CIM, the optimum as well as good sub-optimal solutions are selectively sampled through an indirect measurement in each round trip of the OPO pulses. This latter approach is particularly attractive if the computational goal is to sample not only optimum solutions but also semi-optimum solutions. Destabilization of local minima The measurement-feedback coherent Ising machine has previously been described as a quantum analog device that finishes computation as a classical digital device, in which the amplitude of a selected low-energy spin configuration is exponentially amplified. [22][23] During computation, the sign of the measured in-phase component, noted $\tilde{x}_i$ with $\tilde{x}_i \in \mathbb{R}$, is associated with the Boolean variable of an Ising problem (whereas the quadrature-phase component decays to zero). A detailed model of the system's dynamics is given by the master equation of the density operator ρ conditioned on measurement results, [46] [47] which describes the processes of parametric amplification (exchange of one pump photon into two signal photons), saturation (signal photons are converted back into pump photons), wavepacket reduction due to measurement, and feedback injection that is used for implementing the Ising coupling. 
For the sake of computational tractability, the truncated Wigner [28] or the positive-P representation [48] can be used with Itô calculus for approximating the quantum state. Although gain saturation and dissipation can, in principle, induce squeezing and non-Gaussian states [49] that would justify describing the time-evolution of the higher moments of the probability distribution P, it is insightful to limit our description to its first moment (the average $\langle x_i\rangle$) in order to explain the computation achieved by the machine in the classical regime. This approximation is justified when the state of each OPO remains sufficiently close to a coherent state during the whole computation process. In this case, the effect of gain saturation and dissipation on the average $\langle x_i\rangle$ can be modeled as a nonlinear function $x \mapsto f(x)$, and the feedback injection is given as $\beta \sum_j J_{ij}\, g(\langle x_j\rangle + n_j)$, where $f$ and $g$ are sigmoid functions, $J_{ij}$ the Ising couplings, $n_j$ the measurement noise, and $\beta$ represents the amplitude of the coupling. When the amplitudes $|\langle x_i\rangle|$ of the OPO signals are much larger than the noise amplitude, the system can be described by simple differential equations whose potential function is related to the Ising Hamiltonian in real space with $\sigma_i = g(\langle x_i\rangle)$. [21] [50] The connection between such nonlinear differential equations and the Ising Hamiltonian has been used in various models, such as in the "soft" spin description of frustrated spin systems [51] or the Hopfield-Tank neural networks [50] for solving NP-hard combinatorial optimization problems. Moreover, an analogy with the mean-field theory of spin glasses can be made by recognizing that the steady-states of these nonlinear equations correspond to the solutions of the "naive" Thouless-Anderson-Palmer (TAP) equations, [52] which arise from the mean-field description of Sherrington-Kirkpatrick spin glasses in the limit of a large number of spins and are given as $\langle\sigma_i\rangle = \tanh\big((1/T)\sum_j J_{ij}\langle\sigma_j\rangle\big)$, with $\langle\sigma_i\rangle$ the thermal average of the Ising spin at temperature $T$ (by setting $f(x) = \mathrm{atanh}(x)$ and $g(x) = x$). This analogy suggests that the parameter $\beta$ can be interpreted as an inverse temperature in the thermodynamic limit when the Onsager reaction term is discarded. [52] At $\beta = 0$ ($T \to \infty$), the only stable state of the CIM is $\langle x_i\rangle = 0$, for which any spin configuration is equiprobable, whereas as $\beta \to \infty$ ($T = 0$) the state remains trapped for an infinite time in local minima. We will discuss in much more detail analogies between CIM dynamics and TAP equations, and also belief and survey propagation, in the special case of the SK model in the next section. In the case of spin glasses, statistical analysis of the TAP equations suggests that the free energy landscape has an exponentially large number of solutions near zero temperature, [53] and we can expect similar statistics for the potential $V$ when $\beta \to \infty$. In order to reduce the probability of the CIM getting trapped in one of the local minima of $V$, it has been proposed to gradually increase $\beta$, the coupling strength, during computation. [16] This heuristic, which we call the open-loop CIM in the following, is similar to mean-field annealing [54] and consists in letting the system seek out minima of a potential function that is gradually transformed from monostable to multi-stable (see Fig. 4(a) and (b1)). Contrary to the quantum adiabatic theorem [55] or the convergence theorem of simulated annealing, [56] there is however no guarantee that a sufficiently slow deformation of $V$ will ensure convergence to the configuration of lowest Ising Hamiltonian. 
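The analogy with mean-field annealing can be made concrete with a few lines of code. The following sketch (illustrative only, with an assumed annealing schedule and a random SK instance) iterates the naive mean-field equation $m_i = \tanh(\beta \sum_j J_{ij} m_j)$ while $\beta$ is slowly increased, mirroring the gradual increase of the coupling strength in the open-loop CIM; the energy convention $H(s) = -\tfrac{1}{2}s^{\mathsf T}Js$ is the one for which this is the mean-field equation.

```python
# Illustrative sketch of the "naive" TAP / mean-field annealing iteration referenced above:
# m_i = tanh(beta * sum_j J_ij m_j), with the inverse temperature beta gradually increased.
import numpy as np

rng = np.random.default_rng(2)
N = 64
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
J = (J + J.T) / 2                              # symmetric SK-like couplings (assumption)
np.fill_diagonal(J, 0.0)

m = 1e-3 * rng.normal(size=N)                  # soft spins (thermal averages), near zero initially
for beta in np.linspace(0.1, 3.0, 300):        # annealing schedule (assumption)
    for _ in range(20):                        # damped fixed-point iterations at each beta
        m = 0.5 * m + 0.5 * np.tanh(beta * (J @ m))

s = np.sign(m)
print("Ising energy per spin:", -0.5 * s @ J @ s / N)   # H(s) = -1/2 s^T J s in this convention
```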
In fact, linear stability analysis suggests on the contrary that the first state other than vacuum state (〈 〉 = 0, ∀ ) to become stable as is increased does not correspond to the ground-state. Moreover, added noise may not be sufficient for ensuring convergence: [57] it is possible to seek for global convergence to the minima of the potential by reducing gradually the amplitude of the noise (with ( ) 2~/ log(2 + ) and real constant sufficiently large, [58] but the global minima of the potential ( ) do not generally correspond to that of the Ising Hamiltonians ( ) at a fixed . [13] [21] This discrepancy between the minima of the potential and Ising Hamiltonian H can be understood by noting that the field amplitudes 〈 〉 are not all equal (or homogeneous) at the steady-state, that is 〈 〉 = √ + where is the variation of the i-th OPO amplitude with ≠ and √ a reference amplitude defined such that = 0. Because of the heterogeneity in amplitude, the minima of (〈 〉) = ( √ + ) do not correspond to that of ( ) in general. Consequently, it is necessary in practice to run the open-loop Because the benefits of using an analog state for finding the ground-state spin configurations of the Ising Hamiltonian is offset by the negative impact of its improper mapping to the potential function , we have proposed to utilize supplementary dynamics that are not related to the gradient descent of a potential function but ensure that the global minima of are reached rapidly. In Ref [26], an error correction feedback loop has been proposed whose role is to reduce the amplitude heterogeneity by forcing squared amplitudes 〈 〉 2 to become all equal to a target value , thus forcing the measurement-feedback coupling { (〈 〉)} to be colinear with the Ising internal field with ℎ = . This can notably be achieved by introducing error signals, noted with ∈ ℝ, that modulate the coupling strength (or "effective" inverse temperature) of the i-th OPO such that = ( ) and the time-evolution of given as where is the rate of change of error variables with respect to the signal field. This mode of operation is called closed-loop CIM and can be realized experimentally by simulating the dynamics of the error variables using the FPGA used in the measurement-feedback CIM for calculation of the Ising coupling [16] (see Fig. 4(a)). Note that the concept of amplitude heterogeneity error correction has also been recently extended to other systems such as the XY model. [59] [60] In the case of the closed-loop CIM, the system exhibits steady-states only at the local minima of . [26] The stability of each local minima can be controlled by setting the target amplitude a as follows: the dimension of the unstable manifold (where is the number of unstable directions) at fixed points corresponding to local minima of the Ising Hamiltonian is equal to the number of eigenvalues ( ) that are such that ( ) > ( ) where ( ) are the eigenvalues of the matrix { /|ℎ |} (with internal field ℎ ) and a function shown in Fig. 5(a). The parameter can be set such that all local minima (including the ground-state) are unstable such that the dynamics cannot become trapped in any fixed point attractors. The system then exhibits chaotic dynamics that explores successively local minima. Note that the use of chaotic dynamics for solving Ising problems has been discussed previously, [24] [61] notably in the context of neural networks, and it has been argued that chaotic fluctuations may possess better properties than Brownian noise for escaping from local minima traps. 
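For comparison, the amplitude-heterogeneity error correction described above can be sketched in the same mean-field picture: each amplitude x_i carries an error variable e_i that modulates its effective coupling and drives x_i² toward a common target value a. The equations follow the general closed-loop form discussed in Ref. [26], but the parameter values and readout below are illustrative assumptions.

```python
import numpy as np

def closed_loop_cim(J, p=0.9, eps=0.5, a=1.0, beta=0.2, steps=20000, dt=0.01, seed=0):
    """Mean-field sketch of the closed-loop (error-corrected) CIM.

    x[i] : in-phase amplitude of OPO i
    e[i] : error variable modulating the coupling (effective inverse temperature) of OPO i
    The error dynamics push x[i]**2 toward the target amplitude a, keeping the
    feedback term approximately colinear with the Ising internal field.
    """
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    x = 1e-3 * rng.standard_normal(N)
    e = np.ones(N)
    best_energy, best_s = np.inf, None
    for _ in range(steps):
        h = J @ x                                  # internal field
        dx = (-1.0 + p - x**2) * x + eps * e * h   # error-modulated injection
        de = -beta * (x**2 - a) * e                # amplitude-heterogeneity correction
        x += dt * dx
        e += dt * de
        s = np.sign(x)
        energy = -0.5 * s @ J @ s
        if energy < best_energy:                   # record the best configuration visited
            best_energy, best_s = energy, s.copy()
    return best_energy, best_s
```

Because the fixed points can all be destabilized, the natural readout is the lowest-energy sign configuration visited along the trajectory, as recorded here, rather than a steady state.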
In the case of the closed-loop CIM, the chaotic dynamics is not merely used as a replacement to noise. Rather, the interaction between nonlinear gain saturation and error-correction allows a greater reduction of the unstable manifold dimension of states associated with lower Ising Hamiltonian (see Fig. 5(b)). Comparison between Fig. 5(c1,d1,e1) and (c2,d2,e2) indeed shows that the dynamics of closed-loop CIM samples more efficiently from lower-energy states when the gain saturation is nonlinear compared to the case without nonlinear saturation, respectively. Generally, the asymmetric coupling between in-phase components and error signals possibly results in the creation of limit cycles or chaotic attractors that can trap the dynamics in a region that does not include the global minima of the Ising Hamiltonian. A possible approach to prevent the system from getting trapped in such non-trivial attractors is to dynamically modulate the target amplitude such that the rate of divergence of the velocity vector field remains positive. [26] This implies that volumes along the flow never contract which, in turn, prevents the existence of any attractor. Fig. 6(b)). Because there is no theoretical guarantee that the system will find configuration with Ising Hamiltonian at a ratio of the ground-state after a given computational time and the closed-loop CIM is thus classified as a heuristic method. In order to compare it with other state-of-the-art heuristics, the proposed scheme has been applied to solving instances of standard benchmarks (such as the G-set) by comparing time-to-solutions for reaching a predefined target such as the ground-state energy, if it is known, or the smallest energy known (i.e., published), otherwise. The amplitude heterogeneity error correction scheme can in particular find lower energy configurations of MAXCUT problems from the G-set of similar quality as the state-of-the-art solver, called BLS [62] (see the supplementary material of ref [26] for details). Moreover, the averaged time-to-solution obtained using the proposed scheme are similar to the ones obtained using BLS when simulated on a desktop computer, but are expected to be 100-1000 times smaller in the case of an implementation on the coherent Ising machine. Qualitative parallels between the CIM, belief propagation and survey propagation As we have noted above, the CIM approach to solving combinatorial optimization problems over binary valued spin variables = ±1 can be understood in terms of two key steps. First, in the classical limit of the CIM, the binary valued spin variables are promoted to analog variables reflecting the (quadrature) amplitude of the ℎ OPO mode and the classical CIM dynamics over the variables can be described by a nonlinear differential equation (Eq. 1). Second, in a more quantum regime, the CIM implements a quantum parallel search over this space that focuses quantum amplitudes on the ground state. A qualitatively similar two step approach of state augmentation and then parallel search has also been pursued in statistics and computer science based approaches to combinatorial optimization, specifically in the forms of algorithms known as belief propagation (BP) [63] and survey propagation (SP). [64] Here we outline similarities and differences between CIM, BP and SP. Forming a bridge between these fields can help progress through the cross-pollination of ideas in two distinct ways. 
First, our theoretical understanding of BP and SP may provide further tools, beyond the dynamical systems theory approaches described above, to develop a theoretical understanding of CIM dynamics. Second, differences between CIM dynamics and BP and SP dynamics may provide further inspiration for the rational engineering design of modified CIM dynamics that could lead to improved performance. Indeed there is a rich literature connecting BP and SP to other ideas in statistical physics, such as the Bethe approximation, the replica method, the cavity method, and TAP equations. [65][66] [67][68] [69] It may also be interesting to explore connections between these ideas and the theory of CIM dynamics. (3) below can be visualized as a factor graph, with circular nodes denoting the variables and square factor nodes denoting the interactions ( Fig. 7(a)). A variable node is connected to a factor node if and only if variable belongs to the subset , or equivalently if the interaction term depends on . BP can then be viewed as an iterative dynamical algorithm for computing a marginal ( ) by passing messages along the factor graph. In the case of combinatorial optimization, we can focus on the zero temperature → ∞ limit. We will first describe the BP algorithm intuitively, and later give justification for it. BP employs two types of messages: one from variables to factors and another from factors to variables. Each message is a probability distribution over a single variable. We denote by For a general factor graph, there is no guarantee that the BP update equations will converge in finite time, and even if they do, there is no guarantee the converged messages will yield accurate marginal distributions. However, if the factor graph is a tree, then it can be proven that the BP update equations do indeed converge, and moreover they converge to the correct marginals. [63] Moreover, even in graphs with loops, the fixed points of the BP update equations were shown to be in one to one correspondence with extrema of a certain Bethe free energy approximation to the true free energy associated with the factor graph distribution. [70] This observation yielded a seminal connection between BP in computer science, and the Bethe approximation in statistical physics. The exactness of BP on tree graphs, as well as the variational connection between BP and Bethe free energy on graphs with loops, motivated the further study of BP updates in sparsely connected random factor graphs in which loops are of size O(log N). In many such settings BP updates converge and yield good approximate marginals. [65] In particular, if correlations between variables ∈ adjacent to a factor are weak upon removal of that factor, then BP is thought to work well. Fig. 7(c)). Thus we can write the BP update equations for Ising systems solely in terms of one of the messages, which we rename to be → ≡ → . Thus for each connection in the Ising system, there are now two magnetizations: → and → corresponding to messages flowing along the two directions of the connection. Intuitively, → is the magnetization of spin in a cavity system where the coupling has been removed. Similarly, → is the magnetization of spin in the same cavity system with coupling removed. Some algebra reveals [65][67] that the BP equations in terms of the cavity magnetizations → are given by Here the sum over ∈ / denotes a sum over all neighbors of spin other than spin . 
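For reference, the cavity update just described takes the standard form m_{i→j} = tanh( Σ_{k∈∂i∖j} atanh( tanh(β J_{ik}) m_{k→i} ) ) at inverse temperature β. Below is a minimal sketch of these updates for a sparse symmetric coupling matrix; the damping factor and convergence test are implementation choices, not part of the BP equations themselves.

```python
import numpy as np

def bp_ising(J, beta=1.0, sweeps=200, tol=1e-8, damping=0.5, seed=0):
    """Belief-propagation (cavity) updates for an Ising model with symmetric couplings J.

    m[(i, j)] stores the cavity magnetization m_{i->j}: the magnetization of spin i
    in the system where the coupling J[i, j] has been removed.
    Returns the estimated full magnetizations m_i.
    """
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    edges = [(i, j) for i in range(N) for j in range(N) if J[i, j] != 0.0]
    m = {(i, j): rng.uniform(-0.1, 0.1) for (i, j) in edges}

    def cavity_field(i, exclude):
        # sum of messages flowing into spin i from all neighbours except `exclude`
        return sum(np.arctanh(np.tanh(beta * J[i, k]) * m[(k, i)])
                   for k in range(N) if J[i, k] != 0.0 and k != exclude)

    for _ in range(sweeps):
        max_change = 0.0
        for (i, j) in edges:
            new = np.tanh(cavity_field(i, j))
            max_change = max(max_change, abs(new - m[(i, j)]))
            m[(i, j)] = damping * m[(i, j)] + (1.0 - damping) * new
        if max_change < tol:
            break
    return np.array([np.tanh(cavity_field(i, exclude=None)) for i in range(N)])
```

On a tree this iteration converges to the exact marginals; on loopy graphs its fixed points correspond to extrema of the Bethe free energy, as noted above.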
See The BP equations for Ising systems can also be used to derive the famous TAP equations [71] for the Sherrington Kirkpatrick (SK) model, [34] which is an Ising spin glass with a dense all-to-all mean field connectivity where each coupling constant is chosen i.i.d from a zero mean Gaussian for the case of dense mean field connectivity solely in terms of the variables +1 (see [67] for a derivation of the TAP equations from this BP perspective): This achieves a dramatic simplification in the dynamics of Eq. 3 from tracking 2 2 variables to only tracking N variables, and as such is more similar to the CIM dynamics in Eq. 1. Again there are still several differences: the dynamics in Eq. 4 is discrete time, uses a different nonlinearity, and has an interesting structured history dependence extending over two time steps. Remarkably, although BP was derived with the setting of sparse random graphs in mind, the particular form of the approximate BP equations for the dense mean field SK model can be proven to converge to the correct magnetizations as long as the SK model is outside of the spin glass phase. [72] So far, we have seen a set of analog approaches to solving Ising systems in specialized cases (sparse random and dense mean field connectivities). However, these local update rules do not work well when such connectivities exhibit spin glass behavior. It is thought that the key impediment to local algorithms working well in the spin glass regime is the existence of multiple minima in the free energy landscape over spin configurations. [65] This multiplicity yields a high reactivity of the spin system to the addition or flip of a single spin. For example, if a configuration is within a valley with low free energy, and one forces a single spin flip, this external force might slightly raise the energy of the current valley and lower the energy of another valley that is far away in spin configuration space but nearby in energy levels, thereby making these distant spin configurations preferable from an optimization perspective. In such a highly reactive situation, flipping one spin at a time will not enable one to jump from valleys that were optimal (lower energy) before the spin flip, to a far away valley that is now more optimal (even lower energy) after the spin flip. This physical picture of multiple valleys that are well separated in spin configuration space, but whose energies are near each other, and can therefore reshuffle their energy orders upon the flips of individual spins, motivated the invention of new algorithms that extend belief propagation to survey propagation. The key idea, in the context of an Ising system, is that the magnetizations → of BP now correspond to the magnetizations of spin configurations in a single free energy valley (still in a cavity system with the coupling removed). SP goes beyond this to keep track of the distribution of BP messages across all the free energy valleys. We denote this distribution at iteration by ( → ). The distribution over BP beliefs is called a survey. SP propagates these surveys, or distributions over the BP messages across different valleys, taking into account changes in the free energy of the various valleys before and after the addition of a coupling . This more nonlocal SP algorithm can find solutions to hard constraint satisfaction problems in situations where the local BP algorithm fails. 
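Returning to the dense mean-field case discussed above, the reduction from cavity messages to N site magnetizations can be written out directly. A minimal sketch of a TAP/AMP-style iteration for the SK model is shown below; the Onsager correction is applied to the magnetization from the previous time step, which is the two-step history dependence noted in the text. The initialization and the Bolthausen-type overlap term are assumptions of this sketch rather than a quotation of Eq. (4).

```python
import numpy as np

def tap_sk(J, beta=0.5, iters=500, seed=0):
    """TAP/AMP-style iteration for the SK model with dense mean-field couplings.

    J : (N, N) symmetric matrix, i.i.d. Gaussian entries of variance 1/N, zero diagonal.
    Tracks only N magnetizations per step (instead of all cavity messages).
    """
    N = J.shape[0]
    rng = np.random.default_rng(seed)
    m_prev = np.zeros(N)
    m = 1e-2 * rng.standard_normal(N)
    for _ in range(iters):
        q = np.mean(m**2)                                     # overlap entering the Onsager term
        m_next = np.tanh(beta * (J @ m) - beta**2 * (1.0 - q) * m_prev)
        m_prev, m = m, m_next
    return m

# Hypothetical usage: convergence to the correct magnetizations is expected
# only outside the spin-glass phase (sufficiently small beta).
N = 200
rng = np.random.default_rng(1)
A = rng.standard_normal((N, N)) / np.sqrt(N)
J = np.triu(A, 1); J = J + J.T
print(np.mean(tap_sk(J, beta=0.5)**2))
```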
Furthermore, recent work going beyond SP [64], but specialized to the SK model, yields message passing equations that can provably find near-ground-state spin configurations of the SK model (under certain widely believed assumptions about the geometry of the SK model's free energy landscape), but with a running time that grows as the energy gap between the found solution and the ground state is made smaller. [35] Interestingly, the promotion of the analog magnetizations m_{i→j} of BP to distributions P(m_{i→j}) over these magnetizations is qualitatively reminiscent of the promotion of the classical analog variables of the CIM to quantum wavefunctions over these variables. However, this is merely an analogy to be used as a potential inspiration for both understanding and augmenting current quantum CIM dynamics. Moreover, the SP picture cannot account for quantum correlations. Overall, much further theoretical and empirical work needs to be done to obtain a quantitative understanding of the behavior of the CIM in the quantum regime, of the behavior of SP for diverse combinatorial Ising spin systems beyond the SK model, and of potential relations between the two approaches. An intriguing possibility is that the quantum CIM dynamics enables a nonlocal parallel search over multiple free energy valleys in a manner that may be more powerful than the SP dynamics due to the quantum nature of the CIM.
Future Outlook
While current MFB-CIM hardware implementations would not seem capable of sustaining even limited transient entanglement, because of their continual projection of each spin amplitude on each round trip, it is possible that near-term prototypes could probe quantum-perturbed CIM dynamics at least in a small-scale regime. A recent analysis [73] of a modified MFB-CIM architecture utilizing entanglement-swapping-type measurements shows that it should be possible to populate entangled states of the spin amplitudes (of specific structure determined by the measurement configuration) if the round-trip optical losses can be made sufficiently small. This type of setup could be used to enable certain entanglement structures to be created by transient non-local flow of quantum states through phase space, or to create specific entangled initial states for future CIM algorithms that exploit quantum interference in some more directed way. One may speculate that the impact of quantum phenomena could become more pronounced in CIMs with extremely low pump threshold, for which quantum uncertainties could potentially be larger relative to the scale of topological structures in the mean-field (in a quantum-optical sense) phase space in the critical near-threshold regime. Prospects for realizing such low-threshold CIM hardware have recently been boosted by progress towards the construction of optical parametric oscillators using dispersion-engineered nanophotonic lithium niobate waveguides and ultra-fast pump pulses. [74] For methods that rely on the relaxation of a potential function, either a Lyapunov function for dynamical systems or a free energy landscape for Monte Carlo simulations, it is generally believed that the exponential increase in the number of local minima is responsible for the difficulty in finding the ground states. It has been suggested that the presence of an even greater number of critical points may prevent the dynamics from descending rapidly to lower energy states. [75]
On the other hand, several recently proposed methods that rely on chaotic dynamics instead of a potential function have achieved good performance in solving hard combinatorial problems, [78] but a theoretical description of the number of non-trivial traps (limit cycles or chaotic attractors) in their dynamics is lacking. It is of great interest to extend the study of complexity [75] (that is, the enumeration of local minima and critical points) to the case of chaotic dynamics, both to identify the mechanisms that prevent these novel heuristics from finding optimal solutions of combinatorial optimization problems and to derive convergence theorems and guarantees of returning solutions within a bounded ratio of the ground-state energy. The closed-loop CIM has been proposed to improve the mapping of the Ising Hamiltonian when the time evolution of the system is approximated by the first moment of the in-phase component distribution. Because the CIM has the potential of quantum parallel search [22] if dissipation can be reduced experimentally, it is important to extend the description of the closed-loop CIM to higher moments in order to identify possible computational benefits of squeezed or non-Gaussian states. In order to investigate this possibility while avoiding the difficulties of reaching a sufficiently low dissipation experimentally, simulation of the CIM in digital hardware is necessary. Another interesting prospect of the CIM is its extension to neuroscience research. One possibility concerns a merged quantum and neural computing concept. In the quantum theory of the CIM, we start with a density operator master equation which takes into account parametric gain, linear loss, gain saturation (or back-conversion loss) and dissipative mutual coupling. By expanding the density operator with either a positive P-function (off-diagonal coherent state expansion), a truncated Wigner function or a Husimi function, we can obtain the quantum mechanical Fokker-Planck equations. Using the Itô rule in the Fokker-Planck equations, we finally derive the c-number stochastic differential equations (c-SDE). We can use them for numerical simulation of the CIM on classical digital computers. This phase-space method of quantum optics can be readily modified for numerical simulation of an open-dissipative classical neural network embedded in thermal reservoirs, where vacuum noise is replaced by thermal noise. We note that an ensemble average over many identical classical neural networks driven by independent thermal noise can reproduce the analog of quantum dynamics (entanglement and quantum discord) across the bifurcation point. This scenario suggests that a form of "quantum inspired computation" might already be implemented in the brain. Using the c-SDE of the CIM as a heuristic algorithm on a classical neural network platform, we can perform a virtual quantum parallel search in cyber space. In order to compute the dynamic evolution of the density operator, we have to generate numerous trajectories by c-SDE. This can be done by ensemble averaging or time averaging. However, what we need in the end is only the CIM final state, which is one of the degenerate ground states, and in such a case producing just one trajectory by c-SDE is enough. This is the unique advantage of the CIM approach, and it is provided by the fact that this system starts computation as a quantum analog device and finishes it as a classical digital device.
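A minimal sketch of the c-SDE route described above is given below: truncated-Wigner-style equations for the in-phase and quadrature amplitudes are integrated with the Euler-Maruyama scheme, with vacuum noise entering at a strength that scales as the inverse square root of a saturation photon number. The normalization, parameter values and the simple form of the drift terms are illustrative assumptions, not the exact c-SDE derived from the master equation.

```python
import numpy as np

def cim_csde(J, p=1.1, eps=0.05, n_sat=1e4, T=40.0, dt=0.01, seed=0):
    """Euler-Maruyama integration of a truncated-Wigner-style c-SDE sketch of the CIM.

    x, y  : normalized in-phase and quadrature-phase amplitudes of the OPO pulses
    n_sat : saturation photon number; vacuum noise enters with strength ~ 1/sqrt(n_sat)
    Returns the Ising energy and sign configuration of a single trajectory.
    """
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    x = np.zeros(N)
    y = np.zeros(N)
    g = 1.0 / np.sqrt(n_sat)
    for _ in range(int(T / dt)):
        inj = eps * (J @ x)                              # dissipative Ising coupling
        dx = (-1.0 + p - (x**2 + y**2)) * x + inj        # amplified quadrature
        dy = (-1.0 - p - (x**2 + y**2)) * y              # de-amplified quadrature
        x += dt * dx + g * np.sqrt(dt) * rng.standard_normal(N)
        y += dt * dy + g * np.sqrt(dt) * rng.standard_normal(N)
    s = np.sign(x)
    return -0.5 * s @ J @ s, s
```

Reconstructing the full density-operator dynamics would require averaging many such trajectories, but, as noted above, a single trajectory already returns one candidate ground-state configuration.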
It is an interesting open question whether the classical neural network in the brain implements such c-SDE dynamics driven by thermal reservoir noise. One of the important challenges in theoretical neuroscience is to explain how a large number of neurons collectively interact to produce macroscopic, emergent order such as decision making, cognition and consciousness via noise injected from thermal reservoirs and critical phenomena at a phase transition point. [79][80][81][82] The quantum theory of the CIM may shed new light on this interesting frontier at the interface of physics and neuroscience. Above we also reviewed a set of qualitative analogies connecting the CIM approach to combinatorial optimization with other approaches in computer science. In particular, we noted that just as the CIM dynamics involves a promotion of the original binary spin variables to classical analog variables and then to quantum wave functions associated with these classical variables, computer science based approaches to combinatorial optimization also involve a promotion of the spin variables to analog variables (cavity magnetizations in BP for sparse random connectivities and magnetizations in TAP for dense mean field connectivities), and then to distributions over magnetizations in SP. These analogies form a bridge between two previously separate strands of intellectual inquiry, and the cross-pollination of ideas between these strands could yield potential new insights in both fields. In particular, such cross-pollination may advance both the scientific understanding of and engineering improvements upon CIM dynamics. More generally, we hope this article provides a sense of the rich possibilities for future interdisciplinary research focused around a multifaceted theoretical and experimental approach to combinatorial optimization, uniting perspectives from statistics, computer science, statistical physics, and quantum optics, and making contact with diverse topics like dynamical systems theory, chaos, spin glasses, and belief and survey propagation.
Studies of a New-style Resonator on Electro-mechanical Coupling Bandgap Control of a Locally Resonant Piezoelectric/elastic Phononic Crystal Double-layer Nonlocal Nanobeam
This paper proposes a model of a locally resonant (LR) piezoelectric/elastic phononic crystal (PC) nanobeam with periodically attached "spring-mass" resonators, an additional spring between the upper and lower nanobeams, and a horizontal spring between the mass and the foundation. Euler beam theory and nonlocal piezoelectricity theory are coupled and introduced into the plane wave expansion (PWE) method to calculate the band structures of such a model with different parameters. Numerical results and further analysis demonstrate that all the bands of the double-layer nanobeam can be divided into symmetric and antisymmetric ones. Adding the additional and horizontal springs controls the symmetric and antisymmetric bands, respectively, which opens wider band gaps than in the corresponding single-layer nanobeam. Moreover, changing the parameters of the electro-mechanical coupling fields and of the resonator can effectively control the starting frequencies and widths of the band gaps, which provides a theoretical basis for active vibration control. Effects of geometric and non-dimensional nonlocal parameters on the band gaps are also discussed. All of these studies are expected to be applied to actively controlling vibration propagation in nano-electro-mechanical systems (NEMS). Design ideas from these are coupled and introduced to double-layer structures. So far, piezoelectric PC nanostructures have received comparatively little direct study. By introducing nonlocal piezoelectricity theory into the transfer matrix (TM) method, the plane [6], symmetric [7] and anti-plane transverse wave modes [8] of one-dimensional (1D) layered piezoelectric PC nanostructures were studied by Yan, Chen and Wang et al. Based on the PWE method, the band structures of two-dimensional (2D) piezoelectric PC nanostructures with different types of scatterers and lattices were calculated and the corresponding bandgap properties were studied by Miranda Jr and Dos Santos [9]. By applying the PWE method, the effects of nonlocal effects [10], surface effects [11], nonlinearity [12] and local resonance [13] on the electro-mechanical coupling wave propagation characteristics of piezoelectric PC nanobeams were investigated by Qian et al. Consequently, deeper research is needed on piezoelectric PC nanostructures. In recent years, the dynamic characteristics of piezoelectric nanostructures and the wave propagation properties of PC structures have been widely studied. In order to overcome the size dependence existing in nanostructures, classical continuum elasticity theory has been revised to form several higher-order continuum elasticity theories [6,11,14,15]. Among these, nonlocal elasticity theory has been further developed into nonlocal piezoelectricity theory to study the mechanical characteristics of piezoelectric nanostructures [16,17]. PC structures at the macroscopic scale, proposed about thirty years ago, have been widely applied to basic elastic structures in engineering such as beams and plates [18,19]. In order to overcome the disadvantage that the frequency ranges of the band gaps are fixed once the PC structures are manufactured, multi-physics coupling PC structures have been proposed, such as piezoelectric, piezomagnetic and magneto-electro-elastic PCs [20][21][22].
Particularly, band gaps can be effectively controlled to piezoelectric PCs by converting the electric and mechanical fields. Moreover, with the rapid development of nanotechnology in different fields, PC structures at nanoscale have been proposed and studied, which can make the order of magnitude of bandgap frequency range sharply increase to be superhigh (gigahertz (GHz) even terahertz (THz)) [23,24] . On the basis of existing researches on different PC structures and piezoelectric nanostructures, PWE method is extended to calculate the band structure of a proposed LR piezoelectric/elastic PC Euler double-layer nonlocal nanobeam with horizontal and additional springs periodically attached in this paper. Moreover, in order to reveal the unique tunability of such a new-style resonator on band gaps, the formation mechanisms and influence rules on band gaps of first two band gaps are studied in detail. II. MODEL AND METHOD As shown in Fig. 1, the proposed LR piezoelectric/elastic PC double-layer nanobeam is composed of two parts: base double-layer nanobeam and resonator. The base double-layer nanobeam is formed by periodically repeating a piezoelectric material PZT-4 and an elastic material epoxy in the axial direction. The resonator is formed by four parts: mass R , vertical spring , horizontal spring and additional spring A . In a resonator, the mass is connected by two vertical springs and two horizontal springs. Generally, the foundation hardly vibrated is occurred in a mechanical system. By periodically attaching the vertical springs and additional springs onto the midpoint of each PZT-4 part of double-layer nanobeam and the horizontal springs on to the foundation, such a model is formed. The Cartesian coordinate system is set up as shown in Fig. 1(b). Here, all the horizontal springs are located at y-direction when the model is balanced. The original length of each horizontal spring is and the length between mass and foundation is . If < , each horizontal spring is pre-compressed in the equilibrium position. Each PZT-4 is applied by an external electrical voltage and each nanobeam is applied by an external axial force 0 . The lengths of each PZT-4 and epoxy are 1 and 2 , respectively. The lattice constant of such a PC double-layer nanobeam is = 1 + 2 . Besides, the width and thickness of each nanobeam with rectangular cross section are and ℎ, respectively. As shown in the figure, the position of each resonator can be expressed as: = 1 /2 + ̅ , where ̅ = 0, ±1, ±2, ⋯. Moreover, Table I gives all the material parameters of PZT-4 and epoxy used in the following calculations. In Table I, denotes the mass density, which can be further divided into 1 and 2 to expressed PZT-4 and epoxy, respectively. represents the elastic modulus of epoxy. 11 , 31 and 33 are the elastic, piezoelectric and dielectric constants of PZT-4, respectively. The time-harmonic flexural vibration governing equations of proposed LR piezoelectric/elastic PC Euler double-layer nonlocal nanobeam can be written as [10,11] : where Here, l ( ) and u ( ) are used to represent the flexural displacements of lower-layer and upper-layer nanobeams along z-direction, respectively. l ( ) and u ( ) are the flexural displacements of lower-layer and upper-layer nanobeams at X, respectively. R ( ) is the displacement of mass along z-direction at X. l ( ) and u ( ) are the forces applied to the lower-layer and upper-layer nanobeams by vertical and additional springs at X, respectively. 
R ( ) is the force applied to the mass by vertical and horizontal springs at X. Moreover, ( − ) is the one-dimensional (1D) delta function and is the vibration frequency. is a nonlocal coefficient used to represent the nonlocal effects based on nonlocal piezoelectricity theory, and the derived parameter = / can be further applied to represent the ratio between nonlocal size and lattice constant [10] . ( ), ( ) and ( ) are used to uniformly express parameters ( 1 , 2 ), ( 1 , 2 ) and ( 1 , 2 ) in such a PC double-layer nanobeam, the details are as follows: (2), the second item of R ( ) is nonlinear, which can be expanded by Fourier series and ignored second and higher order terms. Finally, Eq. (2) can be linearized as: where non-dimensional parameter = / is used to represent the degree of pre-compression. Based on the periodicity of nanobeam in x-direction, ( ) ( = , , ) can be expressed in spatial Fourier series as: where is the 1D reciprocal-lattice vector. Here, ( ) can be expressed as: where = 1 / represents the filling ratio of PZT-4, and ( ) = sin( 1 /2)/( 1 /2). According to the Bloch theory and periodicity of model, l ( ) and u ( ) can be expressed as: where is the Bloch wave vector limited in the irreducible first Brillouin zone (1BZ), and ′ is also the 1D reciprocal-lattice vector. Moreover, the Bloch theory and periodicity of model imply that: The delta function ( − ) suggests the following relations: By substituting = 0 to Eq. (12), it obtains that: By substituting Eqs. (9), (10), (12)-(15) to (1), it gives: If the number of reciprocal-lattice vectors is picked as , Eq. (16) can be rewritten by a matrix formulation as: where Eq. (17) is a typical generalized eigenvalue problem for 2 . By solving the equation for each Bloch wave vector limited in 1BZ, the band structure of the proposed LR piezoelectric/elastic PC Euler double-layer nonlocal nanobeam can be obtained finally. III. NUMERICAL RESULTS AND ANALYSES A. Complete, symmetric and antisymmetric band structures horizontal and additional springs attached, which the corresponding band structure is shown in Fig. 2. If the upper plate is removed, the LR piezoelectric/elastic PC single-layer nonlocal nanobeam can be obtained, which the corresponding band structure is also shown in Fig. 2 for comparison. During the calculations, 0 = 1 × 10 −8 N, = 1V, R = 1 × 10 −18 kg, = 1 × 10 2 N/m, = 0.1, 1 = 2 = 50nm, = ℎ = 10nm. As shown in the figure, the addition of upper plate makes the original second, third and fourth bands in single-layer nonlocal nanobeam be divided into two bands. Here, the band gaps of first two orders are researched. For the first band gap, the single-layer nonlocal nanobeam has the lower starting frequency and wider bandgap width. For the second one, the single-layer nonlocal nanobeam has the higher ending frequency and wider bandgap width. Hence, by comparing LR piezoelectric/elastic PC single-layer nonlocal nanobeam with the double-layer one, it seems that single-layer nanobeam always occupies an absolute dominance on opening wider band gaps. Assuming that the double-layer nanobeam is vibrated in symmetric mode, the constraint conditions l ( ) = − u ( ) and R ( ) = 0 should be introduced to Eq. (1), then the computational formula of symmetric band structure can be obtained by simplifying Eq. (17) as: (25) Fig. 
3(a) and (b) give the band structures of LR piezoelectric/elastic PC Euler double-layer nonlocal nanobeam with no horizontal and additional springs attached vibrated in symmetric and antisymmetric modes, respectively. As a comparison, the original complete band structure is also displayed. During the calculations, all the parameters are same to those in Fig. 2. As shown, all the bands can be divided into two parts: symmetric and antisymmetric bands. For the bands vibrated in symmetric mode, they cannot be affected by mass because the mass keeps still in such a vibration mode. If the additional springs A are added in Fig. 1, the symmetric bands can be adjusted but with the antisymmetric ones unaffected. Besides, if the horizontal springs attached onto foundation are added, the antisymmetric bands can be adjusted but with the symmetric ones unaffected. B. Influences of additional and horizontal springs on band structures The band structure of LR piezoelectric/elastic PC Euler double-layer nonlocal nanobeam with additional springs A attached is shown in Fig. 4(a). As a comparison, the band structure of the same nanobeam but with no additional springs attached is also displayed, which is divided to symmetric and antisymmetric bands. Here, all the parameters except for A = 10N/m are same to those in Fig. 2. As shown, adding additional springs has no influence on antisymmetric bands. However, the symmetric bands are moved upwards to higher frequency region, which can be understood that adding additional springs between upper and lower nanobeams can restrain the symmetric vibration mode and strengthen the equivalent stiffness. As a result, adding additional springs widens the second band gap by increasing the ending frequency. Fig. 4(b) gives the influences of A on starting frequencies s , widths w and total width of first two band gaps. Here, all the parameters except for A are same to those in Fig. 2, the range of A is from 0N/m to 20N/m. The starting frequency and width of first band gap keep unchanged with the increase of A because the starting and ending frequencies are located at the antisymmetric bands as shown in Fig. 4(a). The starting frequency of second band gap also keeps unchanged with the increase of A because it is located at the antisymmetric band as shown in Fig. 4(a). Besides, the width of second band gap keeps increasing firstly and then unchanged with the increase of A , which can be regarded that the ending frequency of such a band gap is located at the symmetric band if A ≤ 10N/m, but the ending frequency is located at the antisymmetric band if A > 10N/m as shown in Fig. 4(a). Hence, the total bandgap width also keeps increasing firstly and then unchanged. and non-dimensional parameter = 2 are same to those in Fig. 2. As shown, adding horizontal springs has no effect on the symmetric bands because the masses are static in such a vibrational mode. In addition, adding horizontal springs makes the antisymmetric bands move down, which can be understood that the impact of adding pre-compressed horizontal springs is decreasing the original stiffness of vertical springs . By comparing Figs. 5(a) and (b), only the first band can be affected but with the influence of other antisymmetric bands not obviously if the value of is small. Only if the value of is big enough, the effect of on other antisymmetric bands cannot be ignored. 
The phenomenon can be attributed to that the value of equivalent stiffness between vertical spring and nanobeam increases with the increase of band order, then larger value of is needed to decrease the value of equivalent stiffness. Fig. 5(c) gives the influence of and on starting frequency s of first band gap. Here, all the parameters except for and are same to those in Fig. 2, the range of is from 0N/m to 1.5N/m, and the range of is from 1 to 2.5. As shown in the figure, lager value of and can obtain lower frequency of first band gap, which can be regarded that with the increase of and , the degree of pre-compression increases, the equivalent stiffness between springs and double-layer nanobeam decreases. FIG. 6. Band structures of LR piezoelectric/elastic PC Euler single-layer and double-layer nonlocal nanobeams with both horizontal springs and additional springs A attached. Fig. 6 displays the band structures of LR piezoelectric/elastic PC Euler single-layer and double-layer nonlocal nanobeams with both horizontal springs and additional springs A attached. Here, all the parameters except for = 2, = 1.5N/m and A = 10N/m are same to those in Fig. 2. As shown, for the first band gap, the double-layer nonlocal nanobeam has the lower starting frequency and wider bandgap width than single-layer one. For the second one, the double-layer nonlocal nanobeam has the higher ending frequency and wider bandgap width than single-layer one. By comparing Figs. 6 with 2, adding horizontal and additional springs can widen the band gaps and make the band gaps be controlled, which can be regarded as the advantage that single-layer one doesn't have. Moreover, with the change of resonator parameters, the first symmetric band and second antisymmetric band of double-layer nanobeam are always coincident, which are also coincident with the second band of single-layer nanobeam. The phenomenon can be understood that the position attached resonators in the nanobeams is non-vibrating in such vibration modes corresponding to the bands, which can be further reduced to a single-layer nanobeam with resonators removed. C. Influences of parameters on band structures The influences of electrical voltage and axial force 0 on starting frequencies s , widths w and total width of first two band gaps are shown in Figs. 7(a) and (b), respectively. Here, all the parameters except for and 0 are same to those in Fig. 6. The range of is from −1V to 1V, and the range of 0 is from 1 × 10 −8 N to 5 × 10 −8 N. With the increase of , both the starting frequencies of first two band gaps keep decreasing and both the widths of first two band gaps keep increasing. Finally, the total width of first two band gaps keeps increasing by increasing . With the increases of 0 , both the starting frequencies of first two band gaps keep increasing, the width of first band gap keeps increasing and the second one keeps decreasing. Finally, the total width of first two band gaps keeps decreasing by increasing 0 . on starting frequencies s , widths w and total width of first two band gaps, respectively. The influences of mass R and vertical spring on starting frequencies s , widths w and total width of first two band gaps are shown in Figs. 8(a) and (b), respectively. Here, all the parameters except for R and are same to those in Fig. 6. The range of R is from 0kg to 3 × 10 −18 kg, and the range of is from 0N/m to 3 × 10 2 N/m. As shown in Fig. 
8(a), with the increase of R , the starting frequency of first band gap keeps decreasing, which can be understood that the equivalent mass of resonators increases by increasing R . The ending frequency is located at the symmetric band as shown in Fig. 3(a) that unaffected by mass, which leads the width of first band gap to keep increasing. With the increase of R , the starting frequency of second band gap keeps unchanged because it is also located at the symmetric band as shown in Fig. 3(a). The width of second band gap keeps static firstly and then decreasing by increasing R , which can be regarded that the ending frequency of second band gap is also located at the symmetric band if R ≤ 1 × 10 −18 kg, and it is also located at the antisymmetric band if R > 1 × 10 −18 kg. Finally, the total width of first two band gaps keeps increasing firstly and then decreasing with the increase of R . Moreover, as shown in Fig. 8(b), with the increase of , the ending frequency of first band gap and starting frequency of second band gap keep unchanged because they are located at the bands with resonators ineffective, which is revealed in the above section. Besides, with the increase of , the starting frequency of first band gap and sending frequency of second band gap keep increasing because the equivalent stiffness of resonators increases by increasing . Finally, with the increase of , the width of first band gap keeps decreasing, the second and total ones keep increasing. The influences of non-dimensional parameters 1 / 2 and ℎ/ on starting frequencies s , widths w and total width of first two band gaps are shown in Figs. 9(a) and (b), respectively. Here, all the parameters except for 1 and ℎ are same to those in Fig. 6. The range of 1 / 2 is from 0 to 4, and the range of ℎ/ is from 0.5 to 2. As shown in Fig. 9(a), with the increase of (a) (b) FIG. 9. Influences of non-dimensional parameters (a) 1 / 2 and (b) ℎ/ on starting frequencies s , widths w and total width of first two band gaps, respectively. The influences of non-dimensional parameter on starting frequencies s , widths w and total width of first two band gaps is shown in Figs. 10. Here, all the parameters except for are same to those in Fig. 6. The range of is from 0 to 0.5. As shown, with the increase of , the starting frequency of first band gap is zero firstly, then keeps increasing and finally decreasing to zero, the width of first band gap keeps decreasing, the starting frequency of second band gap keeps decreasing, and the width of second band gap keeps increasing firstly and then decreasing. Finally, the total width of first two band gaps keeps increasing firstly and then decreasing by increasing . FIG. 10. Influences of non-dimensional parameter on starting frequencies s , widths w and total width of first two band gaps. IV. CONCLUSIONS In this paper, the band structure of a proposed LR piezoelectric/elastic PC Euler double-layer nonlocal nanobeam with horizontal and additional springs periodically attached is calculated based on PWE method. The main properties of band gaps are reveled as follows: 1. If horizontal and additional springs are not attached, the corresponding single-layer nanobeam can open wider band gaps than double-layer one. However, if horizontal and additional springs are attached, double-layer nanobeam is better with the whole quality of nanobeam not increased. 2. All the bands of double-layer nanobeam can be divided into symmetric and antisymmetric ones. 
Adding additional springs can effectively control the symmetric bands, and adding horizontal springs can effectively adjust the antisymmetric bands. Increasing the stiffness of the additional spring increases the width of the second band gap. Increasing the stiffness of the horizontal spring or the non-dimensional pre-compression parameter increases the width of the first band gap. 3. The starting frequencies and widths of the band gaps can be effectively controlled by the electrical voltage, the axial force and the parameters of the resonator, which can be further applied to realize active control of vibration. Moreover, the influence of the geometric parameters and the non-dimensional nonlocal parameter on the band gaps is also revealed. Availability of data and materials: Not applicable. Competing interests: The authors declare that they have no competing interests.
Propagation-based Phase-Contrast X-ray Imaging at a Compact Light Source
We demonstrate the applicability of propagation-based X-ray phase-contrast imaging at a laser-assisted compact light source with known phantoms and the lungs and airways of a mouse. The Munich Compact Light Source provides a quasi-monochromatic beam with partial spatial coherence, and high flux relative to other non-synchrotron sources (up to 10^10 ph/s). In our study we observe significant edge-enhancement, and quantitative phase retrieval is successfully performed on the known phantom. Furthermore, the images of a small animal show the potential for live bio-imaging research studies that capture biological function using short exposures. This medium-sized FOV allows for imaging samples of several millimeters to centimeters in width. In particular, a whole mouse lung fits within this field of view. A fundamental step for a significant contrast enhancement in X-ray imaging was achieved by the transfer of the phase-contrast imaging principle from visible light to X-rays in 1965 by Bonse and Hart 5. Particularly with the advent of synchrotrons, several phase-sensitive methods have arisen, including X-ray grating interferometry [6][7][8], analyser-based refraction contrast 9 and propagation-based X-ray phase-contrast imaging 10,11. These phase-contrast mechanisms are particularly useful when imaging objects that deliver insufficient contrast in absorption imaging. They have been applied in biomedical research (e.g. imaging soft tissue, airways), materials science (e.g. imaging low-Z materials) and show potential as a new clinical diagnostic tool 2. In order to investigate the potential of the MuCLS for time-sequence imaging of respiratory processes, we focus here on propagation-based imaging (PBI). PBI is particularly useful in this context because only a single exposure is required to produce an image, whereas other techniques, for example grating-based imaging, often require several exposures to reconstruct a phase-contrast image. It would also be possible to conduct single-exposure techniques on this setup that utilise a single grid 12,13 or speckle tracking 14,15, but we leave those topics for future studies. While PBI experiments do not require a strictly monochromatic beam, and may thus also be performed at conventional laboratory sources 16, an X-ray beam with a certain amount of spatial coherence is necessary, hence such laboratory sources must have a small effective source size. In addition, many research applications require high flux to avoid motion blur and/or capture changes in biomedical structure or function. In biological specimens, one may want to capture motion (e.g. of the lungs 17,18 or of inhaled particles moving along airways 19) or a time-resolved response to a treatment (e.g. in the depth of liquid lining the airways 20). A small focal spot in a conventional X-ray source will often mean the flux is limited by the heat load on the target, although this can be circumvented to an extent by using, for example, a liquid metal jet target 21,22, which also produces high-brilliance X-rays 23. Magnification of the sample when using a divergent beam (e.g. from an X-ray target) typically means larger detector pixels are used than at the synchrotron, and in the case of an X-ray/optical scintillator system, a thicker scintillator. This results in a more efficient detector system, enabling high speed imaging.
However, magnification can also affect the ability to capture phase fringes in a flux-efficient dynamic imaging setup, as explored in the discussion of this paper. Dynamic PBI experiments have so far taken place at synchrotrons, but the development of high-flux compact X-ray sources like the MuCLS and the liquid metal jet source suggests that these experiments may now be possible in the laboratory. In this report we show that the flux and coherence of the quasi-monochromatic X-ray beam provided by the MuCLS allow phase-contrast imaging by simply increasing the sample-to-detector distance. We demonstrate that short exposure times (∼50 ms) are possible at a cm-sized FOV in a laboratory environment with a resolution of around 50 μm. In the following section, the first results of propagation-based phase-contrast X-ray imaging obtained at the MuCLS are presented on well-known phantoms (nylon fibers and perspex spheres) and a biomedical sample (a mouse). Results. By extending the free-space propagation between the sample and the detector up to 2 m, the X-ray wavefield diffracts and self-interferes to produce characteristic edge-enhanced images. This effect is clearly visible with the nylon fiber shown in Fig. 2 and the perspex spheres shown in Fig. 3, good examples of low-density materials with X-ray properties comparable to soft tissue. We also observe phase effects from the biomedical sample in Figs. 4 and 5, the lungs and respiratory tract of a mouse. The images are obtained with two different detector systems, which allow us to choose between different resolutions in the micron range and different fields of view. The systems are described in detail in the methods section. The beam divergence of 4 mrad leads to a medium-sized beam. Nylon thread. Nylon fibers with a diameter of 350 μm were imaged with the high-resolution setup. Figure 2 shows the results for propagation distances between contact and 150 cm. At an energy of 25 keV the fibers can be considered essentially pure phase objects, as their absorption contrast is very weak (e.g. Fig. 2A). However, the increasing phase contrast is clearly visible in Fig. 2(B-D), primarily at the edges where the phase gradient is large. In Fig. 2(E) the line profiles for the different distances are shown. The fringes increase in width and intensity with increasing sample-detector distance. We can adjust the sample-detector distance to balance the increase in fringe width and visibility against the smoothing effect of the finite source size. For this 0.65 μm pixel system, propagation distances in the cm range (below 50 cm) deliver the best edge contrast. Due to source blurring at larger distances the fringes are smeared out, but these wider fringes are useful in the case of a larger pixel size. PMMA spheres. Here we show that the propagation-based phase-contrast images of phantoms on this setup can provide quantitative sample thickness values. PMMA spheres with a diameter of 1.5 mm are imaged with a detector pixel size of 6.5 μm, a sample-detector distance of 1 m and an exposure time of 120 s. Figure 3(A) shows the edge-enhanced image. In Fig. 3(B) the projected thickness of the spheres is reconstructed with the single-distance phase-retrieval algorithm developed by Paganin et al. 24. Using this algorithm the thickness of a sample of a single known material can be reconstructed.
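For reference, a minimal sketch of this single-distance, single-material retrieval is given below, assuming parallel-beam geometry and a flat-field-corrected image; for the divergent MuCLS beam the propagation distance and pixel size would additionally need to be rescaled by the geometric magnification, which is omitted here.

```python
import numpy as np

def paganin_thickness(img, pixel_size, dist, energy_keV, delta, beta):
    """Single-material, single-distance phase retrieval in the style of Paganin et al.

    img        : flat-field-corrected intensity I/I0 recorded at propagation distance dist (m)
    pixel_size : effective pixel size in the sample plane (m)
    delta,beta : refractive index decrements, n = 1 - delta + i*beta
    Returns the projected sample thickness in metres.
    """
    wavelength = 12.398e-10 / energy_keV                # keV to metres (approx.)
    mu = 4.0 * np.pi * beta / wavelength                # linear attenuation coefficient
    ny, nx = img.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    k2 = kx[np.newaxis, :]**2 + ky[:, np.newaxis]**2
    filt = 1.0 + dist * delta * k2 / mu                 # low-pass filter undoing the edge enhancement
    filtered = np.real(np.fft.ifft2(np.fft.fft2(img) / filt))
    return -np.log(np.clip(filtered, 1e-12, None)) / mu

# Hypothetical usage with the values quoted below for PMMA at 25 keV and 1 m propagation:
# thickness = paganin_thickness(img, 6.5e-6, 1.0, 25.0, 4.228e-7, 1.796e-10)
```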
For the PMMA spheres (C5H8O2) the complex refractive index values δ = 4.228e-07 and β = 1.796e-10 are used for the 25 keV image. The intensity profiles along the lines shown in Fig. 3(B) are plotted in Fig. 3(C), alongside the theoretical thickness for a sphere with a diameter of 1.5 mm (red). The thickness values from the phase retrieval agree well with the theoretical values, differing only at the edges, where the reconstruction produces the 'softer' edges that are typical of this algorithm 25. Lung and Airways in a small animal model. Figure 4(E) shows an ex-vivo mouse imaged using several projections. The green letters mark the particularly interesting areas in lung and airway imaging, which are the nasal airways (Fig. 4A), the trachea region (Fig. 4B) and the lungs (Fig. 4C). These features are normally barely discernible in conventional absorption imaging, but their visibility can be enhanced using phase-contrast methods. Figure 5 shows the effect of increasing the sample-detector distance, with particularly good edge-enhancement from air/tissue interfaces. The contact images were obtained with sample-detector distances of less than 2 cm. The visibility of the lungs and airways increases with increasing propagation distance up to 1.6 m, with three different detector setups used here (see methods). Figure 5(A and B) were obtained with the smallest detector pixel size used in this report, 0.65 μm; consequently the contrast of the nasal airways increases remarkably when the propagation distance is changed to 30 cm. The edges of the nasal airways are clearly enhanced, as well as the hairs on the nose of the mouse. The visibility of the beads in Fig. 5(C and D) shows that this setup allows us to image the clearance of inhaled debris (modelled by these beads) along the airway away from the lungs, a clearance mechanism which is impaired by cystic fibrosis 19. Imaging moving samples, like the lungs, also requires short exposure times. Figure 6 shows the lung captured with a propagation distance of 1.5 m and exposure times of (A) 10 s, (B) 1 s and (C) 0.05 s. All images are only flat-field and dark-current corrected. There is no remarkable increase in the quality of the image when increasing the exposure time from 1 s to 10 s. By decreasing the exposure time to 0.05 s, the image gets noisier, but the quality of the image is still sufficient to perform lung motion measurements. Discussion. This report shows that a compact synchrotron X-ray source can be used to increase the contrast of low-density materials and tissues by simply increasing the sample-detector distance. For a single-material sample we are able to reconstruct the quantitative thickness with Paganin's single-distance algorithm. Furthermore, the edge enhancement we obtain is sufficient to render the lungs and airways visible in X-ray imaging. We can adjust the sample-detector distance to achieve a useful fringe width and visibility, taking into account the source blurring that occurs at large sample-to-detector distances relative to the source-to-sample distance 26. For the detector systems with the larger pixel sizes of 6.5 μm and 13 μm we obtained the best contrast at a distance of about 1.5 m. For the smaller pixel size of 0.65 μm, the best contrast was found around 30 cm for the samples we tested.
This is consistent with theoretical expectations in that the width of the first phase-contrast fringe 26 is comparable to the relevant point spread function of each detector at these distances, and the source blurring is matched to the detector blurring 27. We also see an advantage in the low divergence of the MuCLS. In propagation-based phase-contrast imaging, divergent sources require a greater sample-to-detector propagation distance to achieve an image displaying the same relative phase effects as captured at a synchrotron. If an experiment is to be transferred from a synchrotron (close to plane-wave illumination) to a laboratory source (point-source illumination), and the images are to be equivalent (i.e. any Fresnel fringes are the same fraction of the sample size in the image), the sample-to-detector propagation distance required at the divergent source is z_div = Mz, according to the Fresnel scaling theorem 37. Here, M = (z_div + R)/R is the magnification when the sample is a distance R from the source, and z is the sample-to-detector propagation distance when using plane-wave illumination (well approximated at the synchrotron, where M is typically within a few percent of 1.0). Since the magnification and propagation distance are co-dependent when considering a point source, inserting the definition of M into the propagation length scaling shows that this requires z_div = Rz/(R − z). The entire divergent setup will then have a source-to-detector distance of at least four times the desired synchrotron propagation distance z to achieve an equivalent image. For small animal lung imaging, a propagation distance of z > 1 m is typical at the synchrotron 35, requiring that an equivalent laboratory setup is at least 4 metres from source to detector. Therefore, if phase effects are sought that are equivalent to those at synchrotron propagation distances of >1 m (typical of many samples >1 cm), it is desirable that the laboratory source is low divergence (like the MuCLS) in order to illuminate both the sample and detector with as much of the available flux as possible in a >4 m long setup. This allows the best use of the available flux for fast propagation-based imaging. Note that if reduced phase effects are sufficient, the setup could be more compact. A low-divergence source like the MuCLS enables a source-to-sample distance of several metres, which also contributes to the coherence of the beam, all without significant loss of flux. Another advantage of the MuCLS is the relatively large field of view, which allows imaging of a mouse lung with a single shot. This study shows that we are able to use the MuCLS for dynamic respiratory imaging. Previous laboratory-based respiratory studies have shown the ability to detect lung structure differences in live mice using X-ray images with exposure times extending over several breaths 28,29. Increased diagnostic power should be available via improved spatial resolution if the lungs can be imaged without motion blur. Since mice breathe naturally at around 100 times/min, exposure times of less than 200 ms are desirable 30. To reduce motion blurring it is also possible to capture images over several breaths and average images from the same point in every breath cycle, as the motion is repeated with every breath 31, therefore permitting shorter exposures without reducing SNR (provided there is not a time-dependent treatment response occurring over this timespan).
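The geometric argument above can be checked with a short calculation, taking the source-to-sample distance R = 3.5 m quoted in the Methods as an assumed value:

```python
# Fresnel scaling: propagation distance needed at a divergent source to reproduce
# a plane-wave (synchrotron) image recorded at propagation distance z.
R = 3.5                      # source-to-sample distance (m)
z = 1.0                      # plane-wave-equivalent propagation distance (m)

z_div = R * z / (R - z)      # required sample-to-detector distance: 1.4 m
M = (z_div + R) / R          # resulting magnification: 1.4
total = R + z_div            # total source-to-detector length: 4.9 m (>= 4*z)

print(z_div, M, total, z_div / M)   # the last value recovers z, as required
```

The total length R + z_div = R²/(R − z) is smallest when R = 2z, where it equals exactly 4z, which is where the factor of four quoted above comes from.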
This kind of high-speed imaging also enables spatially-resolved measurements of functional lung health, for example via X-ray particle image velocimetry (PIV) 17 based tracking of lung motion. To capture particles moving along the airway surface, frame intervals of less than 1 s are necessary, as particles move via normal clearance mechanisms at 2-3 mm/minute. This method can be used to study the effects of a treatment on airway clearance mechanisms 19 . With the MuCLS we are able to capture projections of the mouse lung with exposure times of about 50 ms (at 25 keV). This gives access to the dynamic respiratory imaging variants discussed above. Capturing these kinds of biological events can help research in better understanding physiology (e.g. aeration at first breath of life 32 ) and in better treating disease (e.g. observing the effects of a treatment 19 ). We conclude that propagation-based phase-contrast imaging is feasible with the Munich Compact Light Source and that the source fulfils the requirements for future studies of the airways and lungs of in vivo small animal models, which have previously been primarily limited to large synchrotron radiation facilities. These studies will enable important pre-clinical lung and airway research. Methods Imaging Setup. Imaging was performed at the Munich Compact Light Source. Detailed descriptions of the working principles of the MuCLS are published by the manufacturer Lyncean Technologies, Inc. and in several previous publications 3,4,33,34 . Specimens were located 3.5 metres from the source. The distance between source and sample can be adjusted between 3.2 m and 4.7 m by a motorized linear stage. For all experiments, quasi-monochromatic X-rays at 25 keV were chosen, with a flux of up to 2.4 · 10⁹ ph/s and a source size of 39 μm × 45 μm. Propagation distances (sample-to-detector distances) between zero and 1.6 m were used to visualize phase effects. The distances were adjusted by a motor-driven translation stage. Three different detector systems were used to capture the images: (1) A 20 μm thick Gadox scintillator (Gd2O2S:Tb) deposited on a fiber optic plate coupled to an Andor Zyla 5.5 sCMOS camera, with a resulting detector pixel size of 6.5 μm; (2) A 20 μm thick Gadox screen (Gd2O2S:Tb) deposited on a 2:1 fiber optic taper coupled to the Andor Zyla 5.5 sCMOS camera, resulting in a detector pixel size of 13 μm; (3) Conversion of X-rays into visible light by a 10 μm thick LSO scintillator in front of an optical system (Hamamatsu AA50) with 10× optics (Nikon) coupled to an Andor Neo 5.5 sCMOS camera, which leads to a detector pixel size of 0.65 μm. For the lens-coupled system longer exposure times are necessary, as its efficiency is lower than that of a scintillator fiber-optic coupled detector and the FOV is just 1.66 mm × 1.4 mm, so only a small part of the available flux can be used. The FWHM of the point spread functions of the detectors was measured to be around 50 μm (Zyla camera setup with 1:1 taper), 90 μm (Zyla camera setup with 2:1 taper) and 7 μm (Neo camera setup). Various exposure times (0.05 s-240 s) and propagation distances were chosen in this experiment for useful propagation-based X-ray phase-contrast imaging at the MuCLS. The surface entrance dose was estimated at 4 mGy for a 1 s X-ray exposure of the mouse lung, consistent with synchrotron lung imaging 35 .
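As a rough companion to the exposure times and entrance dose quoted above, the following sketch estimates how many breath cycles occur during each exposure and the corresponding dose, assuming roughly 100 breaths per minute and a linear scaling of the quoted 4 mGy per 1 s exposure; both assumptions are for illustration only.

```python
# Sketch: breaths per exposure and estimated entrance dose for the exposure times
# used in this report. Assumes ~100 breaths/min and that the quoted 4 mGy for a
# 1 s exposure scales linearly with exposure time (an illustrative assumption).

BREATHS_PER_MIN = 100.0
DOSE_MGY_PER_SECOND = 4.0

for exposure_s in (0.05, 1.0, 10.0, 240.0):
    breaths = BREATHS_PER_MIN / 60.0 * exposure_s
    dose_mgy = DOSE_MGY_PER_SECOND * exposure_s
    print(f"exposure {exposure_s:7.2f} s -> {breaths:7.2f} breath cycles, "
          f"~{dose_mgy:8.1f} mGy entrance dose")
```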
Image Reconstruction. The projections obtained in this study were all flat-field and dark-current corrected. Propagation-based phase-contrast imaging uses the free-space propagation of a coherent X-ray beam to create contrast. After passing through the sample the wavefront is distorted as a consequence of the phase shift imposed by the sample. The propagation of the distorted X-ray wavefront gives rise to Fresnel diffraction fringes in the image, which enhance edges and interfaces present in the sample 10,36 . This edge enhancement is especially useful for weakly-attenuating samples, which are otherwise barely seen in the image. Additionally, propagation-based images can provide quantitative thickness maps via a single-distance phase-retrieval algorithm. Here the non-iterative phase-retrieval algorithm developed by Paganin et al. has been used 24 . For a single-material sample, the thickness can be reconstructed. All images are cropped from the full detector area to the region of interest. Samples. To demonstrate the edge enhancement effect, weakly-attenuating samples were chosen. The samples for the proof-of-principle experiments were nylon threads and perspex spheres with known diameters and well-known material composition. Eight-week-old pathogen-free female C57BL/6N mice (Charles River Laboratories) were used for this analysis. Mice had free access to water and rodent laboratory chow. Mouse experiments were carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the ethical committee of the regional governmental commission of animal protection (Munich). Images were taken between 1 hour and 48 hours after the death of the mouse, and some hair was removed from the neck area.
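Relating to the Image Reconstruction paragraph above, a minimal sketch of a Paganin-type single-distance, single-material phase retrieval is given below. The function, parameter names and example values (25 keV, the PMMA delta and beta quoted earlier, a 1.5 m propagation distance and a 6.5 μm pixel) are illustrative assumptions and not the exact implementation used for this study.

```python
import numpy as np

def paganin_thickness(intensity, pixel_size, dist, delta, beta, wavelength):
    """Single-material thickness map from one propagation-based image,
    following a Paganin-type single-distance filter (a sketch, not the
    authors' exact code). intensity: flat-field-corrected image, I/I0."""
    ny, nx = intensity.shape
    ky = np.fft.fftfreq(ny, d=pixel_size) * 2.0 * np.pi  # angular spatial frequencies
    kx = np.fft.fftfreq(nx, d=pixel_size) * 2.0 * np.pi
    k2 = kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2

    mu = 4.0 * np.pi * beta / wavelength          # linear attenuation coefficient
    filt = 1.0 / (1.0 + dist * delta * k2 / mu)   # low-pass filter in Fourier space

    filtered = np.fft.ifft2(np.fft.fft2(intensity) * filt).real
    filtered = np.clip(filtered, 1e-9, None)      # guard against log of non-positive values
    return -np.log(filtered) / mu                 # projected thickness in metres

if __name__ == "__main__":
    # Illustrative parameters only: 25 keV, PMMA-like delta/beta, 1.5 m propagation.
    energy_keV = 25.0
    wavelength = 1.2398e-9 / energy_keV           # metres; 1.2398e-9 m*keV is hc
    delta, beta = 4.228e-7, 1.796e-10             # PMMA values quoted above at 25 keV
    img = np.ones((256, 256))                     # placeholder for a measured projection
    t = paganin_thickness(img, pixel_size=6.5e-6, dist=1.5,
                          delta=delta, beta=beta, wavelength=wavelength)
    print(t.shape, float(t.max()))
```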
4,260.2
2017-07-07T00:00:00.000
[ "Physics" ]
Design of Ka band substrate integrated waveguide slot traveling wave array antenna To meet the requirements of a missile radio fuze antenna for low sidelobes, a narrow beam and a specific directional pattern, a design method for a Ka band substrate integrated waveguide (SIW) slot traveling wave array antenna is proposed. Based on the slot theory of the SIW, the slot parameters and the excitation amplitudes of a Taylor linear array are first calculated in MATLAB, and the slot parameters and element excitation amplitudes are then optimized with the Ansoft HFSS electromagnetic simulation software to complete the design of the Ka band SIW slot traveling wave array antenna. The simulation results show that the Ka band SIW slot quasi traveling wave array antenna has a working frequency of 34.5 GHz, a gain of 14.2 dB, an E-plane beam width of 8° and a beam inclination of 20°, which meets the requirements of a missile radio fuze. Introduction Due to its special operation mode, a missile radio fuze often requires an antenna pattern with a wide E-plane beam and a low-sidelobe, narrow H-plane beam pointing in a specific direction [1][2]. The characteristics of the rectangular metal waveguide slot array antenna meet the requirements of a radio fuze, but its application range is limited by its high processing cost, large volume and the difficulty of making it conformal to the projectile surface. The SIW slot array antenna, as an alternative to the traditional rectangular metal waveguide slot array antenna, combines the advantages of the microstrip line and the waveguide: high efficiency, compact structure, light weight, small volume, and ease of integration and processing. Moreover, the substrate integrated waveguide is well suited to surface-mounted microwave and millimeter wave active devices that require a coplanar circuit installation structure, which gives it important practical value and broad application prospects [3][4]. Structure of SIW The substrate integrated waveguide (SIW) is a relatively new waveguide structure with propagation characteristics similar to those of the traditional rectangular waveguide. Its geometric structure is shown in Fig. 1. The upper and lower surfaces of the dielectric substrate are metal layers, and two rows of metallized through vias are set in the substrate. In this way, a structure similar to a dielectric-filled rectangular waveguide is formed between the upper and lower metal surfaces and the two rows of metallized vias, which is called the substrate integrated waveguide. In the figure, a represents the distance between the two rows of metal through vias, b is the width of the substrate integrated waveguide, h is the thickness of the dielectric substrate, d represents the diameter of the metal through vias, and p is the distance between the centers of adjacent vias in the metallized via array. The main mode transmitted in the SIW is the TE10 mode, so for electromagnetic analysis the SIW can be replaced by an equivalent dielectric-filled rectangular waveguide. The equivalent broad-wall width of the substrate integrated waveguide is given by Eq. (1) [5], and the remaining relations of the equivalent model are given by Eqs. (2)-(5). Fed slot of SIW The SIW can also form a linear array antenna fed by a traveling wave or a standing wave in the waveguide. The amplitude of the slot excitation can be controlled by adjusting the slot offset. When the slot interrupts the surface current on the waveguide wall, the electromagnetic field in the waveguide excites the slot, so that energy in the waveguide is coupled into free space and radiated.
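As a numerical companion to the equivalent-waveguide model referenced in Eq. (1), the sketch below uses the empirical relation a_eff ≈ a − d²/(0.95·p), which is widely used in the SIW literature, to estimate the equivalent broad-wall width, the TE10 cutoff and the guide wavelength. Treating this relation as the paper's Eq. (1) is an assumption, and the numerical values are taken from the design quoted later in the text.

```python
import math

# Sketch of an SIW "equivalent rectangular waveguide" calculation.
# The empirical relation a_eff = a - d**2 / (0.95 * p) is a commonly used
# approximation from the SIW literature; treating it as this paper's Eq. (1)
# is an assumption. Values mirror the design quoted in the text
# (dielectric constant 2.2, via diameter 0.8 mm, via pitch 1.2 mm, a = 6.5 mm).

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def siw_equivalent_width(a: float, d: float, p: float) -> float:
    """Equivalent broad-wall width of the dielectric-filled rectangular waveguide."""
    return a - d ** 2 / (0.95 * p)

def te10_cutoff_hz(a_eff: float, eps_r: float) -> float:
    """TE10 cutoff frequency of the equivalent dielectric-filled waveguide."""
    return C0 / (2.0 * a_eff * math.sqrt(eps_r))

def guide_wavelength(freq_hz: float, a_eff: float, eps_r: float) -> float:
    """Guide wavelength of the TE10 mode at the operating frequency."""
    lam = C0 / (freq_hz * math.sqrt(eps_r))  # wavelength inside the dielectric
    fc = te10_cutoff_hz(a_eff, eps_r)
    return lam / math.sqrt(1.0 - (fc / freq_hz) ** 2)

if __name__ == "__main__":
    a, d, p, eps_r = 6.5e-3, 0.8e-3, 1.2e-3, 2.2
    a_eff = siw_equivalent_width(a, d, p)
    print(f"a_eff = {a_eff * 1e3:.2f} mm")
    print(f"TE10 cutoff = {te10_cutoff_hz(a_eff, eps_r) / 1e9:.2f} GHz")
    print(f"guide wavelength at 34.5 GHz = {guide_wavelength(34.5e9, a_eff, eps_r) * 1e3:.2f} mm")
```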
The slot in the substrate integrated waveguide does not cut the longitudinal current but is coupled only to the transverse current. It can be regarded as an impedance or admittance element connected to the waveguide, so it can be represented by a two-terminal shunt element [6]. The slot form and equivalent circuit are shown in Fig. 2. The normalized slot admittance depends on the free-space wavelength, the guide wavelength and the offset of the slot from the center line of the waveguide. Simulation of SIW slot In order to obtain an SIW slot array antenna with good electrical characteristics and pattern, its electrical parameters must be determined very accurately, including the slot width, slot length, slot spacing, slot offset and the excitation amplitude distribution of the slot array. The choice of slot width needs to take the processing accuracy of the circuit board into account. The slot resonance length is approximately equal to half a wavelength. The slot spacing is determined by the required beam direction, and the broad-wall offset of each slot is determined by the amplitude weight of the corresponding array element. The slot spacing is then obtained from Eq. (8) [9]. In this paper, a Ka band substrate integrated waveguide slot array antenna is designed. The working frequency is 34.5 GHz, the dielectric substrate is RT220F, the dielectric constant is 2.2 and the thickness is 0.508 mm. In the design, the dielectric-filled rectangular waveguide is used to calculate initial values, and the substrate integrated waveguide parameters a = 6.5 mm, d = 0.8 mm and p = 1.2 mm are then determined from Eq. (1). For a beam tilt angle of 19°, Eq. (8) gives a slot spacing of 3.92 mm. From the required antenna gain of 14 dB, the number of slots is determined to be 16. The slot width is 0.4 mm, the slot length is 4.3 mm, and the overall SIW slot array length is 62.78 mm. The slot array distribution adopts a Taylor weighted distribution optimized by the improved genetic algorithm [10], and the excitation amplitudes are shown in Table 1. The simulated pattern of the antenna is shown in Fig. 5 and Fig. 6, and the S-parameter curve is shown in Fig. 7. The simulation results show that the working frequency of the antenna is 34.5 GHz, the working bandwidth is 1 GHz with reflection less than -15 dB, the gain is 14.4 dB, the sidelobe level is -14.9 dB, the E-plane beam width is 8° and the inclination angle is 20°, which meets the design requirements. Moreover, since Ansoft HFSS is based on the finite element method, the simulation of the designed slot array antenna takes the coupling between the slots into account, so the result is close to the actual value. In engineering implementation, it is necessary to control the machining error and so reduce the deterioration of antenna performance caused by limited machining accuracy. Conclusions In this paper, a simulation-based optimization design method for a Ka band SIW slot traveling wave array antenna is proposed. Firstly, initial values are calculated from the working frequency using the dielectric-filled rectangular waveguide model, and the parameters of the SIW are determined. The amplitude distribution, conductance and slot offsets of the Taylor linear array, optimized by the improved genetic algorithm, are then calculated in MATLAB. Finally, the simulation is carried out in Ansoft HFSS.
The simulation results show that applying the improved genetic algorithm to the optimal design of the Ka band substrate integrated waveguide slot array effectively controls the antenna sidelobe level and achieves a working bandwidth of 1 GHz with reflection below -15 dB, a gain of 14.2 dB, an E-plane beam width of 8° and a beam inclination of 20°, which meets both the design requirements and the application requirements of a missile radio fuze.
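The Taylor amplitude weighting that serves as the starting point for the genetic-algorithm optimization can be generated directly in Python, as sketched below. The sidelobe target of -20 dB and nbar = 4 are assumptions for illustration, and the computed array factor is for a broadside, uniform-phase array; the sketch does not reproduce the weights of Table 1 or the optimized traveling-wave design.

```python
import numpy as np
from scipy.signal.windows import taylor

# Sketch: Taylor amplitude taper for a 16-slot linear array and its array factor.
# The -20 dB sidelobe target and nbar = 4 are assumptions for illustration; the
# traveling-wave feed of the actual design adds a progressive phase (beam tilt)
# that is omitted here for simplicity.

N_SLOTS = 16
weights = taylor(N_SLOTS, nbar=4, sll=20, norm=True)
print("Taylor weights:", np.round(weights, 3))

spacing_m = 3.92e-3                      # slot spacing quoted in the design
k0 = 2 * np.pi * 34.5e9 / 299_792_458.0  # free-space wavenumber at 34.5 GHz

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
n = np.arange(N_SLOTS)
af = np.abs(weights @ np.exp(1j * k0 * spacing_m * np.outer(n, np.sin(theta))))
af_db = 20 * np.log10(af / af.max())

# Walk outward from the main-beam peak to the first nulls, then report the
# highest remaining level as the peak sidelobe.
peak = int(np.argmax(af_db))
left, right = peak, peak
while left > 0 and af_db[left - 1] < af_db[left]:
    left -= 1
while right < af_db.size - 1 and af_db[right + 1] < af_db[right]:
    right += 1
sidelobe = max(np.max(af_db[:left], initial=-200.0),
               np.max(af_db[right + 1:], initial=-200.0))
print(f"peak sidelobe of the array factor: {sidelobe:.1f} dB")
```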
1,622.6
2022-06-13T00:00:00.000
[ "Engineering", "Physics" ]
Open AI and its Impact on Fraud Detection in Financial Industry As per the Nilson report, fraudulent activities targeting cards amounted to a loss of $32.34 billion globally in 2021, a 14 % increase from the previous year. Such practices can be combated by harnessing OpenAI’s powerful machine learning and automation capabilities. Such advanced technologies help financial companies avoid any potential fraud and protect their esteemed clients' interests. Through the adoption and utilization of such innovative technologies., financial institutions will be better placed to protect their customers and entities from financial losses. Digital fraudsters are skilful in identifying loopholes and have developed cunning techniques like phishing for unsuspecting victims and wittingly swindling money off them. They are also updated in using OpenAI to develop deceitful information to scam people. This has seen the emergence of names like WormGPT and FraudGPT, reliant on generative AI models used by tech corporations with fraud intents. As a result, fraud detection techniques have to evolve with time as fraudsters progressively devise new techniques that bypass old and rigid banking security protocols and learn how to convince unsuspecting individuals to dispatch their money to them. Introduction Due to its vast implications, fraud is one of man's biggest problems.It refers to intentionally using false details to swindle an organization, individual property or money.The COVID-19 pandemic saw an upsurge in fraudulent malpractices due to the economic recession that most unemployed individuals faced (G.Colvin 2020).The pandemic had rendered thousands of individuals jobless, while those privileged to be working faced cuts in their salaries.Fraud experts say fraud springs from three elements: opportunity, pressure and rationalism (A.Littman 2011).Fraud also occurs when a person develops an unshakeable urge or motive to commit fraud. The likely fraud perpetrator needs to feed an unmet urge with limited resources.These unmet needs continue to grow as they are endless and different among people.They may include gambling debts, reduced household income or burgeoning medical bills.Once the person encounters such unmet needs and has limited resources to meet them, they may opt for fraud.Opportunities are expressed as a lack of internal controls or reckless management within a person that may present fraud as an easy activity.Lastly, the person rationalizes their intent to engage in fraudulent activities by assuring themselves that they urgently needed the money or would eventually pay it back.With harsh economic spells, there is increased motives and pressure to engage in fraud, pushing the fraudsters to rationalize their actions. Fraud is common in finance and comprises money laundering, financial statements fraud, email phishing, cyber fraud, and credit card fraud.The advent of digital banking exposed financial institutions to digital fraud.It therefore becomes necessary to acknowledge that fraud management is essential in the finance docket, though it is an excruciating venture.Digital fraudsters are skilful in identifying loopholes and have developed cunning techniques like phishing for unsuspecting victims and wittingly swindling money off them.As a result, fraud detection techniques have to evolve with time as fraudsters progressively devise new techniques that bypass old and rigid banking security protocols and learn how to convince unsuspecting individuals to dispatch their money to them. 
Traditional fraud detection approaches within the finance docket are rule-based, meaning that humans make the rules.Most financial institutions use such approaches.As more people opt for emerging digital technologies, fraud scenarios are projected to increase, rendering the existing rule-based approaches unsustainable and unscalable.Additionally, false positives (non-fraudulent practices termed as fraudulent) impose substantial financial losses in terms of customer complaints and transactions in the finance sector. Ciabanu's 2020 study on 1000 adult customers discovered that almost 25 % of the customers whose transactions had been falsely declined turned to competitors for the same financial services (M.Ciobanu 2020).The switch of competitors increased to 36 % for the customers between 18 and 24 years and a further 31 % for those in the 25-34 years old bracket (M.Ciobanu 2020).Such results show the profitability needed for modern fraud detection structures. Financial, banking and fintech industries encounter various scams annually.The scams can be categorized into these classes: digital fraud, physical attacks, internal collusion and violation of the Four Eyes Rule.The last two items involve employee-based schemes or traditional malpractices.Digital fraud, however, involves a range of online fraud activities.Machine learning and automation are needed to combat digital fraud as they have evolved into crucial business tools as fraudsters develop increasingly intricate practices.Such advanced technologies help financial companies avoid any potential fraud and protect their esteemed clients' interests.Through the adoption and utilization of such innovative technologies., financial institutions will be better placed to protect their customers and entities from financial losses. To add to the challenges encountered by the traditional rule-based system, fraudsters need specific patterns and keep changing their hacking techniques.This renders the system cumbersome and quickly 265 Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (Online), Vol. 2, Issue 3, December 2023 obsolete.There arises the need to change this traditional approach in any financial institution.As per the Nilson report, fraudulent activities targeting cards amounted to losses of $32.34 billion globally in 2021, a 14 % increase from the previous year (C.Mullen 2023).With the increase in technological processes in the banking sector and due to the diverse payment channels, such as debit and credit cards and smartphones, the number of digital transactions has increased since 2020.Due to such occurrences, there is a dire need to create more robust and rigid fraud detection solutions in the financing world.The onset of AI has opened a myriad of approaches to adopting and curbing these online malpractices. Technological giants like Google, Facebook, Apple, Amazon and Netflix have also leveraged proprietary AI tools to improve their back-end and front-end financial processes.Currently, they have prioritized using AI in their financial strategies by frequently collecting and using new data to serve through AI models, which has set the bar for the economic world in fraud detection. Fraud Detection in the Finance Industry An article in Javelin Strategy and Research on fraud detection claims that fraud detection in financial institutions uses a brick-and-mortar model, which takes much longer to implement (Pascual et al. 
2017).Such long durations could be more welcoming for the financial institutions and their customers since several dire fraudulent malpractices could be affected within these durations.Fraud also hugely affects financial institutions that are involved in online payment services, mainly under the contemporary technological advancement in the business industry.For instance, almost 20 % of customers change financial institutions after encountering fraud (S. Sando 2021).The defection of such members to rival institutions causes financial losses and bad reputations to the victim institution, mainly if the trend becomes recurrent.It, therefore, becomes an essential practice for financial institutions to mount robust fraud detection structures within their systems.There are two major fraud detection approaches, the rule-based approach and the leveraging of OpenAI. Machine Learning-Based Fraud Detection There are hidden and disguising events in user behaviour that lack the clarity of outright evidence to be identified as fraudulent transactions.Machine learning allows for the development of algorithms that can handle massive datasets with several variables and helps identify these hidden behaviours between operator behaviour and the likelihood of fraudulent actions.Machine learning structures are more advanced than traditional rule-based structures due to their fast data processing capabilities and automation tools in data handling.For example, intelligent algorithms are excellent in behaviour analytics as they reduce the number of verification steps needed. Financial institutions are more involved in monitoring the likely occurrence of fraudulent activities as they must identify and communicate any flagged online activity.A research by Villalobos explains a scenario that involved the programming of a machine prototype on a dataset containing transactions that had been criminally executed.The prototype used in the rule-based model helped identify the hidden relations manifested in the transactions and criminal activities.Such machine learning systems reduce the amount of work in smaller financial institutions that undertake fraud monitoring operations.The suggested solution in the article revealed that 99.6 % of money laundering transactions and cut down the reported transactions from 30 % to 1 % (Villalobos and Silva 2017). Machine learning (ML) is built on algorithms, which increase efficiency as the data size increases. The greater the data, the more the machine learning prototype grows more efficient and can differentiate the differences and similarities between different behaviours.The more the machine learning model unearths the differences between fraudulent and legitimate operations, the more its systems grow more efficient in sorting data sets into the needed classes.Machine learning models become more scalable as the customer database increases in size. 
Although machine learning algorithms have numerous benefits in fraud detection among financial institutions, they carry some drawbacks that limit their application in detecting fraud.For example, one of the main disadvantages is that machine learning needs colossal amounts of data for the models to be accurate.The data threshold is manageable, but there should be ample data points that identify the legitimate causal associations in smaller financial institutions.In addition, machine learning algorithms run on actions, activity and behaviour.The model may overlook clear connections, for instance, a card used in multiple accounts, which could render the fraud detection operation inaccurate. Article Reviews Ayowemi (Awoyemi, et al. 2017) researched why credit card-related fraud detection becomes an impediment and articulated that it occurs for two main reasons.Firstly, fraudulent and regular behaviour changes constantly.Secondly, there is a massive imbalance in the datasets generated from credit card fraud. In the same vein, the approach used to sample the dataset, the selection of variables and the methods used in fraud detection also affect the fraud detection performance in transactions related to credit cards.The research investigated the performance of naïve Bayes, k-nearest Neighbour and logistic Regression on credit card datasets assumed to be largely skewed.The research also uses a hybrid technique that involves the undersampling and oversampling of skewed data.The techniques were applied to the generated data and later transferred to Python.The results demonstrated higher levels of accuracy for k-nearest Neighbour, naïve Bayes and classifiers for the Logistic Regression were 97.92 % and 97.69%.According to the The researchers carried out a comparative study using hybrid machine learning methods that relied on four performance systems of measurements and minimization of class imbalance by deploying the 80-20 undersampling technique in tandem with oversampling.The former sampling technique had a better performance than the latter approach.The research concluded that oversampling leads to poor performance in machine learners (Herland 2018). To add to this pool of research, balanced accuracy (BACC) seemed unreliable as a method of measuring the performance of models in various models and rendered it unable to reflect more realistic alterations observed in other metrics.Therefore, the undersampling technique enhanced learner performance and the supervised approaches turned out better than the hybrid and unsupervised hybrid learners.The provider section contributed to impediments in fraud detection, with somewhat specialized provider categories depicting higher performances than other general categories. Use of Generative AI in augmenting and enhancing fraud detection strategies Generative AI's backbone involves the use of transformer deep neural networks.One such example of generative AI is OpenAI's ChatGPT.Generative AI is constructed to provide data sequence as output and has to be trained using sequential data, like payment histories and sentences.It varies from other methods that produce single categorizations, such as fraud/ not fraud, depending on the given input and training data, that can be provided to the model in any sequence.Generative AI's yield can progress indefinitely, while other classification methods only yield single outputs. 
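Returning to the comparative studies reviewed above, the sketch below reproduces the general shape of such an experiment: naïve Bayes, k-nearest Neighbour and Logistic Regression trained on a heavily skewed dataset after simple random undersampling of the majority class. The synthetic dataset, the split and the hyperparameters are illustrative assumptions, not the data or settings of the cited studies.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Sketch: comparing three classifiers on an imbalanced "fraud" dataset with
# simple random undersampling of the legitimate (majority) class.
# The synthetic data and parameters are illustrative assumptions only.

X, y = make_classification(n_samples=20_000, n_features=15, weights=[0.99, 0.01],
                           flip_y=0.002, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Undersample the majority class in the training set to a 1:1 ratio.
rng = np.random.default_rng(0)
fraud_idx = np.where(y_tr == 1)[0]
legit_idx = rng.choice(np.where(y_tr == 0)[0], size=fraud_idx.size, replace=False)
keep = np.concatenate([fraud_idx, legit_idx])
X_bal, y_bal = X_tr[keep], y_tr[keep]

models = {
    "naive Bayes": GaussianNB(),
    "k-nearest neighbour": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_bal, y_bal)
    score = balanced_accuracy_score(y_te, model.predict(X_te))
    print(f"{name:>22}: balanced accuracy = {score:.3f}")
```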
Generative AI becomes the superlative tool needed to generate data grounded on actual data synthetically.Its development will depict essential applications in detecting fraud, whereby, as earlier noted, the number of feasible fraud samples remains little and challenging for machine learning to effectively learn from.,A model can apply generative AI and use the existing patterns to develop novel, synthetic samples that pose as actual fraud samples, enhancing the fraud signals for essential fraud detection machine learning tools. An archetypal fraud signal comprises non-fraudulent and fraudulent data.Usually, the non-fraudulent data appears first in the sequence of events and carries the actual behavioural activity of the card's owner. Generative AI can generate such payment sequences and simulate a fraud attack on the card, which would then be used to train data to help fraud detection machine learning tools and enhance their performance. Using Generative AI to detect Fraud One of OpenAI's criticisms is that current models can rely on incorrect outputs.This is a significant flaw that most people in financial institutions are concerned about as they never use public tools like customer chatbots to present made-up, more false information.However, the perceived flaw can be used to generate synthetic fraudulent data since synthetic disparity in synthesized output can develop exceptional fraud patterns, enhancing the end fraud defence model's detection performance. As known to many, repetitive examples of a similar fraud signal do not always enhance detection since most of the machine learning models need a few occurrences of each entity to learn from.The variation in the developed outputs generated from the generative model increases the sturdiness of the end fraud model, helping it spot any fraud patterns in the data and identify similar attacks that would have quickly passed unnoticed if traditional processes were used.This would pose some concerns for fraud managers and cardholders as they may wonder how a fraud model trained on generated data can enhance fraud detection and any merits attached to the exercise. Unknown to them is that before a model is used on live payments, it passes through severe evaluation operations to maintain its projected performance.It is abandoned if it does not attain the expected top-notch performance and replacements are trained until the best models are found.This process is standard and is the norm for all produced machine learning models since models trained using authentic data can also produce substandard results during the evaluation stage. Tools used in OpenAI to effectively detect Fraud in Finance Financial institutions are overwhelmingly shifting to AI to help in efficient fraud detection. Multiple industries including banking, fintech and e-commerce have already adopted fraud detection solutions.Using machine learning algorithms, such industries can now process huge amounts of data and detect suspicious patterns to safeguard the business from fraud. 
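The augmentation idea described above can be illustrated with a much simpler stand-in for a generative model: fit a density model to the scarce fraud examples, draw synthetic fraud-like records from it, and add them to the training data. The sketch below uses a kernel density estimate rather than a transformer-based generator, and the dataset and parameters are assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score
from sklearn.neighbors import KernelDensity

# Sketch: augmenting scarce fraud examples with synthetic samples drawn from a
# density model fitted to the fraud class (a simple stand-in for a generative model).
# Synthetic data and parameters are illustrative assumptions only.

X, y = make_classification(n_samples=30_000, n_features=12, weights=[0.995, 0.005],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

fraud = X_tr[y_tr == 1]
kde = KernelDensity(bandwidth=0.5).fit(fraud)            # "generator" for fraud-like records
synthetic = kde.sample(5 * len(fraud), random_state=1)   # oversample the fraud pattern

X_aug = np.vstack([X_tr, synthetic])
y_aug = np.concatenate([y_tr, np.ones(len(synthetic), dtype=int)])

for label, (Xf, yf) in {"original": (X_tr, y_tr), "augmented": (X_aug, y_aug)}.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(Xf, yf)
    pred = clf.predict(X_te)
    print(f"{label:>9}: recall = {recall_score(y_te, pred):.3f}, "
          f"precision = {precision_score(y_te, pred, zero_division=0):.3f}")
```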
The Use of Logistic Regression in Fraud Detection Machine Learning Algorithms Logistic regression is the supervised learning method supported by definite decisions.All obtained results are categorized as non-fraud or fraud once a transaction occurs.This technique uses a cause-andeffect relationship to generate organized data sets.The regression analysis method is more complex when detecting fraud due to the data set sizes and numerous variables.This algorithm forecasts whether new transactions will be categorized as fraudulent.The models are primarily accurate for clients from larger financial institutions.However, the general models also remain viable and applicable for use.Designing a decision tree discards any unrelated features and does not need wide-ranging data normalization.After a tree is inspected, it becomes clear why some decisions were made by relying on the group of rules initiated by a specific client.The machine learning algorithm output may surface as a model aping the decision tree, giving a possible trace of fraud based on earlier events. Using Random Forest in Fraud Detection Machine Learning Algorithms Random Forest Machine Learning combines decision trees to produce more accurate results.Every tree assesses transactions for various decisions (V.Ayyadevara 2018).Training is conducted on random datasets.Depending on the executed training offered on the decision trees, each tree classifies transactions by deeming them either fraudulent or non-fraudulent.The model is then harnessed to accurately predict the result, allowing fraud detectors to even out errors that may surface in a tree.It improves the overall performance model accuracy and sustains the ability to interpret the results and give explicable scores to the users. Random forest runtimes are fast and can handle unbalanced or missing data.However, they have some weaknesses.For instance, when deployed in regression, they cannot predict past the variety in the training of the data and may provide overfit data sets, often termed as noisy. Using Neural Networks in Fraud Detection Machine Learning Algorithms They emulate the complex nature of the human brain.Financial institutions use it to parse antique databases of preceding transactions, inclusive of those predetermined as fraudulent transactions.Each transaction a neural network processes upsurges its accuracy levels in detecting future frauds and incorporates it into its vast repository of historical information, enabling the model to learn new and existing patterns of habitual fraudsters continually. Neural networks are designed to function similarly to the human brain.They utilize various computation layers.They also use cognitive computing that aids in developing machines that can use selflearning algorithms that involve data mining, processing of natural language and recognition of patterns Deep Learning Mastercard is one of the leading users of AI in preventing card-related fraud.The adoption of AI technology has helped Mastercard reduce occurrences of false declines.Through the leveraging of deep learning models that progressively learn from the organization's 75 billion transactions processed annually across its 45 million locations worldwide, the AI system uses a constantly flowing stream of data and selfsearching algorithms to make its decisions (OpenAI 2023).The results are hugely impressive, significantly reducing fraudulent practices and false declines for Mastercard. 
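As a concrete illustration of the random forest approach described above, the sketch below trains a small forest on hand-constructed, named transaction features and inspects which features the trees rely on. The feature names, the labelling rule for the synthetic data and the hyperparameters are assumptions made for this example only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Sketch: a random forest over simple, named transaction features.
# Both the features and the rule used to label synthetic "fraud" are assumptions.

rng = np.random.default_rng(7)
n = 50_000
df = pd.DataFrame({
    "amount": rng.lognormal(mean=3.5, sigma=1.0, size=n),
    "hour_of_day": rng.integers(0, 24, size=n),
    "is_foreign": rng.random(n) < 0.05,
    "txns_last_hour": rng.poisson(0.3, size=n),
    "new_merchant": rng.random(n) < 0.2,
})
# Label: synthetic fraud is more likely for large, foreign, late-night, bursty activity.
risk = (0.002 + 0.02 * df["is_foreign"] + 0.01 * (df["hour_of_day"] < 5)
        + 0.01 * (df["txns_last_hour"] > 2) + 0.00005 * df["amount"])
df["fraud"] = rng.random(n) < risk.clip(0, 0.6)

X, y = df.drop(columns="fraud"), df["fraud"]
forest = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                                random_state=7).fit(X, y)

# Rank the features by how much the forest relies on them.
for name, importance in sorted(zip(X.columns, forest.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name:>15}: {importance:.3f}")
```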
Natural Language Processing World leading institutions, such as PayPal, American Express and Bank of New York Mellon, are some of the financial institutions using the power of Natural Language Processing in fraud detection efforts (OpenAI 2023).NLP extracts signals from IVR interactions, voice and chats to enable these financial companies to effectively spot and prevent suspicious fraud due to the technology's capacity to enhance routine detection. Merits of Using AI-Powered Fraud Detection Systems AI-powered fraud detection approaches create a more efficient strategy than the existing traditional methods as they offer intricate fraud pattern detection and real-time analysis and are adaptable to emerging fraud schemes; by reducing the associated time, budgets and false positives, OpenAI will increase the efficiency and accuracy of detecting fraud, causing to decreased financial losses emerging from cybercrimes. From a client's viewpoint, institutions that accurately and efficiently detect fraudulent activities will prevent customers from falling victim to financial fraud.Therefore, institutions that adopt OpenAI will benefit from preventing fraud and increasing customer retention and loyalty. Partnerships between OpenAI and Fintech companies Since its inception, a synergetic partnership between fintech companies and Open AI has existed. The partnership is quickly changing how financial operations are being executed.These Fintech companies are innovative entities now integrating OpenAI to expand the boundaries they can achieve in their financial operations.OpenAI has been used by fintech institutions in the following ways: Spearheading Intelligent Investing Key to this new advent is the growth of robo-advisors, whereby OpenAI's data-crunching abilities equip investors with algorithm-driven and personalized tips (P.Mahajanam 2023).The collaboration between OpenAI's analytical prowess and fintech's accessibility upscales intelligent investing, making intricate strategies accessible to vast audiences. Using Blockchain to Revolutionize Transactions Incorporating OpenAI into blockchain technology has revolutionized how transactions are carried out.OpenAI has advanced skills in understanding complex instructions and has helped streamline and secure agreements using smart contracts.The marriage has set the bar for a future controlled and managed by decentralized and transparent financial operations and increased its efficiency. Enhancing Customer Experience OpenAI has natural language processing techniques that improve customer interactions in fintech companies.Such natural language-reliant tools include chatbots and virtual assistants, which offer a personalized and seamless user experience when powered by OpenAI algorithms.These interfaces have redefined customer engagement by assisting in financial planning and answering queries and have made financial services approachable and user-friendly. Fortifying Security Structures OpenAI and fintech partnership extend to fraud prevention and risk management.Open AI has powerful algorithms that analyze existing patterns in real-time and identify anomalies and other likely frauds with utmost accuracy.This proactive measure protects financial institutions and revamps consumer trust by ensuring that the security and integrity of AI-powered financial services are upheld. Manoeuvring Regulatory Practices As the partnership transcends time, it becomes more essential to navigate regulatory landscapes. 
Fintech institutions, in collaboration with OpenAI, employ various strategies to comply with the set regulations.The two must balance innovation and concurrently adhere to changing legal frameworks to ensure they remain responsible and promote sustainable growth amongst themselves. Stripe One financial company that has harnessed Open AI's powers in fraud detection is the Irish/American financial services company Stripe.It is among the pioneering OpenAI's GPT-4 users.Stripe facilitates the payment of large and small businesses over the Internet.As the organization develops its ecosystem to support all elements of the payment procedures, developers become their fundamental users. The more accomplished the developers grow while enrolling in Stripe, the more Stripe expands through the digital payments realm and grows the Internet's GDP. The shift to OpenAI began when Stripe summoned a team of 100 staff from its different departments to cease their duties and brainstorm how GPT-4 would optimize old features or develop new ones for the organization.Stripe tasked the 100 employees with dreaming up functionality and features to use in the payment platform using OpenAI's language learning model's newest generation, GPT-4 (Boukherouaa et al. 2021).Stripe's specialists from the onboarding, support and risk sections considered where their institution would leverage artificial intelligence that comprehends free-form images and text and develops human-like responses to either change or improve workflows or features. According to Eugene Mann, Stripe's Applied Machine Learning Team product lead, the company's mere access to GPT-4 helped them realize they had various problems that could be amicably solved using GPT.Mann stated that their primary mission was to discover workflows or products across the organization that would be enhanced using large language models and understand specific areas where the models would work well or struggle in delivering results.Stripe is a familiar user of AI as it used OpenAI's previous sequel technology, GPT-3, to aid its support team in better serving users through services like summarizing a user's query and routing issue tickets. In the initial development process, Stripe's team assembled 50 potential applications to test GPT-4.After vigorous testing and vetting, 15 of the prototypes emerged as strong candidates that would be incorporated into the platform to serve functions that included fraud detection, support customization and answering any questions pertaining to support.Stripe uses OpenAI in the following operations; Seeking Clarity over the Users' Operations To enhance user experience and give the expected support, Stripe has to precisely understand how each of its customers uses the platform and tailor its support accordingly.Although it may seem like an obvious step, it needs long human hours to master and effect. 
Mann states that most businesses, for instance, nightclubs, keep their websites mysterious and sparse, making it challenging as one searches to discover what is happening on such platforms.However, the advent of GPT-4 has enabled Stripe to scan such websites and provide a summary that vastly outdoes those performed using human skills.Upon hand-checking the results, Stripe realized that humans were wrong, but the model deployed was the right pick.However, GPT-4 has now erased any traces of uncertainty as it produces accurate results.Yet another way Stripe supports developers is through detailed technical documentation and a strong developer support team that answers technical queries or troubleshooting-related challenges.GPT-4 can understand, digest and provide virtual assistance.The technology understands all questions from the user and reads comprehensive documentation for them. Detecting Fraud in Community Platforms The need arises to control harmful or malicious actors.Stripe houses a strong community on forums such as Discord, which not only crowdsources help for niche technical queries but also enhances developers' visibility for upcoming works.However, since it operates online, malicious fraudsters gain access to such forums, mainly intending to access crucial information from community members or obtain credibility with Stripe's community team after being expelled from the platform. GPT-4 becomes helpful in this scenario by evaluating the posts' syntax on Discord and flagging accounts where Stripe's fraud team should investigate and ascertain that it is not a fraudster in disguise. GPT-4 is also helpful in scanning inbound communications and discovering coordinated activities from suspected actors. Future Engagements The Stripe Team is now considering other upcoming features from OpenAI.GPT can be harnessed as a business coach that can interpret revenue models or advise financial institutions on effective strategies. As GPT grows more intelligent over time, its potential areas of applications keep growing. MasterCard Mastercard is another beneficiary of the new AI-powered tool in fighting financial fraud.By using the organization's new AI capabilities and its exemplary network monitoring of accountto-account payments, the new technology helps financial institutions look for any impending fraud. Mastercard has partnered with nine UK banks, including Bank of Scotland, TSB, Lloyds Banks, Monzo, Halifax and now uses large-scale payment data in picking out actual payment frauds before initiating a funds transfer from any account. 
Organized fraudsters have scammed unsuspecting individuals through a series of assumed mule accounts by disguising them as trustworthy parties.To battle the trend, Mastercard collaborated with United Kingdom Banks to track the flow of funds through these fraud accounts and lock them out.Using insights gained from the tracking activity and supporting them with unique analysis factors like payment values, payee history, account names, payer details and payee's links to accounts linked with scams, Mastercard's AI tool gives banks the needed intelligence to intervene in real-time and thwart any suspicious payment in time.Trustee Savings Bank is among the first beneficiaries of the new revolution.The bank has adopted Mastercard's Consumer Fraud Risk Tool.In its first four months, the bank attests that the new tool has revamped its fraud detection capabilities.Based on its reports, in the U.K., the amounts that would have been saved from scams in a year is £100m [26], should all banks adopt the solution.Other banks have onboarded the process and Mastercard looks to scale the solution to other international markets. As payment and banking security advances, scammers have opted for impersonation tactics to bypass security measures.They aim to convince people or institutions to send them funds, thinking they are legitimate people or entities.Mostly referred to as APP (authorized push payment) fraud, it accounts for 40 % of the United Kingdom's bank fraud losses and it is predicted that it could cost $4.6 billion in the U.K. and U.S alone by 2026 [26].Ajay Bhalla, Mastercard's president of Cyber and Intelligence admits that banks find these scams challenging to detect (Mastercard 2023).He states that customers bypass all set security checks and send the funds themselves, saving criminals the need to breach any security measures. Online fraud erodes customers' confidence in digital financing in a digitally advancing world.Bhalla reiterates that Mastercard's mission is to build and maintain customer trust.Using the new AI technology, Bhalla believes, will help banks identify and forecast any payments linked to fraudsters and stop them early enough. TSB's director of Fraud Prevention, Paul Davis, compares identifying fraudulent payments among the millions of transactions carried out within a day to looking for a needle in a haystack.According to Paul, the new TSB's partnership with Mastercard will give the financial intelligence required to discover fraudulent accounts and deter any payments linked to such accounts. Results generated from banks adopting Consumer Fraud Risk's score reveal massive success in preventing fraud, especially when deployed with various insights concerning customers and their behaviours.This has helped the banks develop fraud strategies that precisely identify various forms of fraud, mainly romance, impersonation and purchase scams.Purchase scams are the leading firms in the U.K., accounting for 57 % of all scams and a significant nuisance for the banks 1.In 2022, the U.K experienced 207 372 cases of authorized push payment scams and incurred losses of up to £485 million [26]. 
OpenAI as an Advantage for Fraudsters The scene has become two-way traffic as upcoming fraudsters can use OpenAI to stage unsuspected scams on innocent victims.Using OpenAI, scammers can imitate a person's voice and identity and carry out scams on their banking institutions.According to Soups Ranjan, CEO and co-founder of Sardine, a San Francisco-based fraud prevention startup, fraudsters now have access to flawless grammar, similar to a native speaker (Mastercard 2023).Banking customers are often falling victim to more swindles because they are now getting almost perfect disguising text messages. In the new realm of generative AI, deep learning models can curate content based on the information they get trained on.It has therefore become easier for fraudsters to generate video, text or audio that can not only convince potential individual victims but also the programs or software intended to prevent the fraud.The same analogy has resurfaced with the advent of OpenAI.The fraudsters have been long adopters of new technologies as law enforcers struggle to cope.For instance, an article by Churbuck (D. Churbuck 1989) explains how thieves used laser printers and ordinary personal computers to excellently forge cheques to trick the banking institutions, which during those times, had lagged in establishing measures that would detect fake cheques. Generative AI has grown threatening.It could sadly make high-tech fraud prevention technology, for example voice authentication, obsolete.According to a survey conducted by Deep Instinct (Deep Instincts 2023), a New York-based cyber firm, on 650 cybersecurity experts, three out of four sampled cybersecurity experts noted an upsurge in attacks the previous year.85 % of the cybersecurity experts attributed the surge to online fraudsters leveraging generative AI.Customers, in 2022, reported losses amounting to $8.8 billion through online fraud, a 40 % increase from the previous year.This is according to a report obtained from the US Federal Trade Commission reports (J.Mayfield 2023).Massive monetary losses arise from online frauds, but imposter scams have taken centre stage as AI has bolstered them. Fraudsters can harness generative AI capabilities in myriads of cunning ways.If a person often posts on online or social media platforms, the fraudsters can train an AI model to type in the person's style. They can also contact your relatives and implore them to send you funds.More astonishingly, fraudsters can use a short audio sample of a person's voice to convince relatives through impersonation.In extreme cases, they stage kidnappings and ask for ridiculous ransom using the voice.Jennifer Destefano, an Arizonian mother of four, once faced such a predicament and later testified to Congress (U.S. 
Senate Committee on the Judiciary 2023).Not only are relatives being scammed, but businesses have fallen victim too.Fraudsters have disguised themselves as actual suppliers and crafted deceiving yet convincing emails to accountants claiming immediate payments.They proceed to attach payment instructions for bank accounts they can manipulate.Ranjan, Sardine's CEO, confirms that Sardine's fintech startup customers often fall prey to such scams and lose thousands of monies in such scams.These amounts may seem little compared to the $35 million a Japanese company lost in 2020 after one of its directors' voices was cloned and later used to stage an intricate swindle (T.Brewster 2021).That was a prelude to what was to follow, as AI capabilities now spill to video manipulation, writing and voice impersonation services.These AI tools have become cheaper and more accessible to fraudsters.Much earlier, one needed hundreds or perhaps thousands of photos to curate a high-quality, deep fake video.However, AI can now complete such tasks using a few photos. As financial institutions adapt AI to curb fraud, online crooks are updated on the same as they develop off-the-shelf tools.This has seen the emergence of names like WormGPT and FraudGPT, reliant on generative AI models used by tech corporations with fraud intents. In one fake YouTube video, generative AI helped clone Elon Musk's voice and face hawking a crypto investment prospect that involved a $100,000,000 Tesla-sponsored bargain, which promised to give back double the bitcoin, dogecoin, ether or tether amount the investors would pledge.In the video, Elon was heard appreciating the interested investors and hailed the platform as an online broadcast enabling all cryptocurrency owners to increase their incomes.In this low-resolution video, Musk attributed the lack of clarity to hosting the crypto event from SpaceX.Since the video was fake, innocent and unsuspecting investors would have fallen victim to this scam.Scammers had used a similar 2022 YouTube Video (CNET 2022) he had given while on a SpaceX spacecraft program and impersonated his voice and image.Although YouTube pulled the fake video down, any investor that had sent crypto to any of the issued addresses lost their funds to innovative fraudsters that harnessed the power of generative AI.Musk remains a significant target for impersonations, as numerous audio samples are online to power AI-enabled voice clones. However, these fraudsters can impersonate almost anyone with online audio samples. Voice impersonations are also gaining much use in scamming people through calls.The elderly American population is mainly targeted in this case.Everyone needs to be cautious about incoming calls, even when they are from what seems to be conversant numbers.Victims who have once been scammed find it challenging to trust any incoming calls due to spoofing phone numbers in robocalls, according to Kathy Stokes, director of fraud prevention programs at AARP, a lobbying and services provider with more 277 Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (Online), Vol. 2, Issue 3, December 2023 than 38 million members aged over 50 years (Mastercard 2023).Stokes claims they always suspect emails and text messages sent to them, which has posed severe concerns for their primary communication channels. 
Another worrying development is the new threat to the already established security structures.For instance, Vanguard Group is a huge financial institution that has given voice recognition utmost priority to its customers.This mutual fund giant serves over 50 million investors and allows its clients to access specific services over the phone by speaking instead of answering security questions.Vanguard Group entrusts the client's voice to be as unique as their fingerprints, according to its 2021 video promotion on YouTube (CNET 2022).The company was rallying its members to sign up for its voice recognition feature in the video promotion.However, the latest exploits in voice-cloning suggest that financial institutions should rethink such practices.Ranjan, Sardine's CEO, claims that she has seen individuals deploying voice cloning to authenticate with unsuspecting banks and.access accounts effortlessly (Mastercard 2023). Large and small businesses that use informal procedures to pay bills or transfer funds are also critical targets for online AI fraudsters.Since time immemorial, fraudsters have been emailing fake invoices to demand payment bills that appear to have been initiated by suppliers.The practice can reach higher levels of deceit as fraudsters can use the AI tools to call the company's employees using cloned versions of the executive's voice and order transactions or ask employees for sensitive information to conduct vishing or voice vishing attacks.According to Rick Song, CEO at Persona, impersonating an executive for highly valued fraud remains one of the biggest fears of voice recognition security measures (Mastercard 2023). Criminals continuously use generative AI to engage fraud-prevention specialists assigned to thwart these threats in the digital finance system.Fraud prevention specialists must verify that the customers are who they claim to be to safeguard the institution and the customers from losses from online fraud.One of the ways that fraud-prevention businesses like MiTek, Socure and Onfido verify their users is by use of liveness checks.This method requires the user to take a video or selfie photo and the fraud prevention specialists use the elements to match the live face with the face found in the ID, which the user is also prompted to submit.When they understand how the system operates, online fraudsters purchase images of driver's licenses on the dark web and use the now cheaper and available video morphing programs to superimpose the real faces onto theirs.The program also allows them to talk and move their heads behind real people's digital faces, maximizing the chances to bypass the liveness checks. There has been an upsurge in the generation of fake faces.The fake faces are high-quality and used to mount automated attacks to impersonate liveness checks.The upsurge varies according to the industries, but the previous years have recorded significant cases.Crypto and Fintech companies have experienced the highest number of impersonated liveness check attacks.Fraud experts reported to Forbes that they suspected that well-established verification providers such as MiTek and Socure have experienced their fraud prevention metrics degrade due to the attacks.Johnny Ayers, Socure's CEO, points out that some clients need to be more active in adopting the firm's new models, which could adversely affect their performance (Mastercard 2023).Ayers also pointed out a top bank behind four versions, citing the dangers posed to the financial institution. 
MiTek failed to comment on its performance metrics.However, Chris Briggs, its senior vice president, claims that if a particular model were developed some months ago, it would be argued that the older model performed at lower levels than the newer models.The vice president stated that the firm's models undergo vigorous training and retraining trained and retrained using lab-based and real-life data streams. Wells Fargo, Bank of America and JPMorgan failed to remark on the impediments they faced with generative AI-powered online scam.One spokesperson from Chime, America's most prominent digital bank and a victim of significant fraud problems, also claims that the institution has not recognized any upsurge in generative AI-powered fraud attempts (Vanguard 2021). Online fraudsters behind the growing financial scams vary from solo individuals to well-organized groups made of hundreds of tech gurus.The organized online groups work in multi-layered structures and have adept members, with data scientists onboard.They own their command-and-control centers.Some members are only tasked to identify leads by phishing phone calls and emails.When their phishing attacks get an unsuspecting customer, they hand them over to the next colleague in line, who masquerades as a bank branch manager and attempts to persuade the victim to transfer the money from the account.One of the critical steps in the scamming process involves persuading the victim to install a program like Citrix or Microsoft TeamViewer that allows them to control their computer.With such levels of control on the victim's computer, the online fraudsters further stretch to carrying out more purchases and withdrawing funds to other addresses. OpenAI has attempted to develop precautions that hamper people from using ChatGPT to commit fraud.For example, the model immediately declines any attempt to initiate ChatGPT to ask an individual for their account number.OpenAI recognizes fraudsters' possible misuse of the platform and has a safety and misuse policy on its website that reads.There is no silver bullet for responsible deployment, so we try to learn about and address our models' limitations and potential avenues for misuse at every development and deployment stage."(Haverstock and Kauflin 2021). Meta, on the other hand, released a language model, Llama 2, which is even easier to use for advanced criminals due to its open-source nature, which displays all its deployed codes.This expands the possible number of ways online fraudsters can tailor it to their advantage and spell doom on unsuspecting 267 Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (Online), Vol. 2, Issue 3, December 2023 comparative results, k-nearest Neighbour outperformed the naïve Bayes and Logistic Regression algorithms.Research carried out by Bauder et al. focused on the alleged fraud experienced at Medicare.The research compared some of the machine learning methods leveraged while identifying fraud at Medicare. 269 Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (Online), Vol. 
Decision Tree models create a graphic illustration of a decision-making process. They are useful tools in fraud detection, helping to identify the most crucial variables that lead to fraud and to develop a framework for flagging fraudulent transactions. Decision Tree algorithms come into play when classifying atypical activities in any transaction an authorized user initiates. The algorithms embed trained constraints that are essential in classifying fraud in the dataset, and they are used in regression and classification predictive-modelling problems. At bottom, they are rule sets designed for handling fraud allegations involving clients (D. Graupe 2016).

Neural networks pass data through multiple layers during the training process. They, however, give more accurate results than other models, since they use cognitive computing and learn from the patterns of authorized behaviour and are therefore able to distinguish between non-fraudulent and fraudulent transactions. They adapt to changes in what is assumed to be standard transaction behaviour and identify types of fraudulent transactions. Neural networks are fast and function in real time.

Mastercard adopted AI technology in the last decade. Today, AI has evolved into a foundational technology deployed across Mastercard's operations and has become a game-changer in identifying fraud patterns. The new AI-powered cybersecurity solutions have saved over $35 billion in fraud losses in the last three years [26]. Mastercard uses AI to help banks anticipate upcoming frauds in real time, before any funds are transferred from a victim's account. If all U.K. banks successfully adopt the new technology, the Trustee Savings Bank predicts a decrease in scam losses of up to $100 million [26]. Ranging from simple enticement scams to fictitious online frauds, impersonation scams of various forms have hurt businesses and individuals in recent times and reduced the confidence of those yet to be scammed. However, the situation is changing as financial institutions like Mastercard have reinforced the fight against online fraudsters using a new Consumer Fraud Risk Solution.
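As a purely illustrative companion to the description of Decision Tree classifiers above, the following Python sketch trains a small tree on synthetic transactions. The feature names (amount, hour, is_foreign, txn_count_24h) and the data-generating rule are invented for the example; this is not the model used by Mastercard or by any of the vendors mentioned.

```python
# Minimal, hypothetical sketch of decision-tree fraud classification.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
# Synthetic transactions: amount, hour of day, foreign flag, transactions in last 24h
X = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=n),   # amount
    rng.integers(0, 24, size=n),                  # hour
    rng.integers(0, 2, size=n),                   # is_foreign
    rng.poisson(2.0, size=n),                     # txn_count_24h
])
# Synthetic label: higher fraud odds for large, foreign, late-night bursts of activity
score = 0.00002 * X[:, 0] + 0.8 * X[:, 2] + 0.3 * (X[:, 1] >= 22) + 0.2 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-(score - 2.0)))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

In practice such trees are constrained (depth limits, class weights) and audited for the variables they rank as most important, which echoes the point above that they help identify the variables most strongly associated with fraud.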
9,475.2
2024-01-01T00:00:00.000
[ "Computer Science", "Business" ]
Employment status and differences in physical activity behavior during times of economic hardship: results of a population-based study

International Journal of Medical Science and Public Health | 2016 | Vol 5 | Issue 01 (Online First). © 2016 Gloria Macassa. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Research Article

Introduction

During economic downturns, populations tend to experience increased unemployment. [3,4] It is suggested that economic recession can affect an individual's health behaviors, including engagement in physical activity. [5] Moreover, across OECD countries and in Europe, empirical evidence has shown that unemployed people experience high mortality [6] and poor physical and mental health. [7] However, others have found contrary results regarding this relationship. [8,9] In the majority of studies, unemployment has been found to be detrimental to health behavior. [6,10,11,14,15]

Several hypotheses have been put forward by different scholars on the potential links between economic conditions and physical activity. Some scholars suggest that, during an economic crisis, people experience excessive financial and psychological stresses, which can contribute to a sedentary lifestyle and decreased levels of physical activity. [1] Furthermore, it is believed that both the physical and social environments play an important role, as neighborhoods deteriorate during recessions, which in turn discourages residents from engaging in physical activity. [16] Moreover, others state that the relationship between economic conditions and physical activity is related to time use: a reduction of hourly wages during recessions (or even the absence of paid work options) lowers the incentive for people to increase leisure-time activities, including physical activity. [15] However, the relationship among economic recession, unemployment, and physical activity remains a subject of continuing debate. For instance, using individual-level data from the Behavioral Risk Factor Surveillance System 1987-2000 waves, Ruhm [17] found an increase in physical activity when state labor market conditions worsened. In addition, Charles and DeCicca [18] observed no association between unemployment rate and physical activity among respondents from high-risk jobs.

In Sweden, few studies have assessed the relationship between employment status and physical activity in times of recession, and the available studies have found mixed results. [19,20] For instance, Lindström et al. [19] reported that self-employed male and female pensioners showed a significantly increased risk of low leisure-time physical activity, but found no difference in odds among skilled and unskilled manual workers compared with high-level nonmanual employees. They also argued that some of the socioeconomic differences in leisure-time physical activity were owing to differences in social capital between socioeconomic groups. [19]
The context of this study is Gävleborg County, in East Central Sweden, where the recent economic crisis caused increased levels of unemployment compared with the national average. At its peak, the rate of unemployment among people aged 16-84 years was 13.8% in 2010 compared with 7.0% for the national average. [21] The massive unemployment experienced by the county has been blamed on the closures of large industries and small companies distributed across the county. Furthermore, the County of Gävleborg showed the lowest percentage of highly educated people among those economically active in the entire country (29% in 2010). [22] Regarding the lifestyle of the county's population, it was found that in 2010, 39% of the population was not physically active (as measured by 30 min of physical activity per day) and 15% did not engage in any physical activity. In addition, 16% of the population was considered obese, with a body mass index (BMI) of 30 kg/m² or more, and only 30% ate at least one fruit a day. [22] Although the economic crisis affected the county severely, to our knowledge, no study has attempted to investigate its impact on people's health behaviors, specifically regarding physical activity. Therefore, this study aims to investigate differences in physical activity by employment status among people residing in Gävleborg County in 2010.

Study Design and Procedure

The subjects of this study come from the Health in Equal Terms (HET) Survey, a cross-sectional study carried out in 2010 in Gävleborg County. The sample selection was carried out by Statistics Sweden, and the sampling frame was created from the Total Population Register, which consists of all registered residents within the county between the ages of 16 and 84 years, in total 221,618 individuals.

Inclusion Criteria

All people aged 16-84 years residing in Gävleborg County during 2010 were included.

Exclusion Criteria

People aged 16-84 years who did not return the postal questionnaires or did not answer the Web questionnaires at the time of the survey in 2010 were excluded.

A sample of 11,977 individuals was selected, with approximately 1,000 individuals per municipality and 600 individuals in Gävle Municipality, the administrative center of the county. Overall, 5,983 persons aged 16-84 years completed the questionnaire, resulting in a response rate of 50% [Table 1]. However, for this study, only people of economically active age (16-65 years) were included (4,245).

The survey was carried out by the Swedish National Institute of Public Health in collaboration with Gävleborg County Council, and data collection was conducted by Statistics Sweden. Respondents were able to participate in the survey by completing either a postal or a Web questionnaire. Information was enclosed with the questionnaire sent to the selected individuals regarding the study's background and objectives, how the answers would be used, and the fact that data would also be retrieved from the Total Population Register (for variables such as education, income, and taxation). The questionnaire collected background demographic and socioeconomic variables, as well as variables regarding health, lifestyle, economic conditions, labor market and employment, and security and social relationships.
Measurement of Variables Outcome Variable In this study, the outcome variable was physical activity.Physical activity was measured with the question: "How much have you moved and exerted yourself physically in your spare time during the past 12 months?"In the survey, the answers were divided into three categories: "low," "moderate," or "vigorous" physical activity.For the purposes of this study, the variable was dichotomized using a "yes" and "no" format.Those who reported moderate to vigorous physical activity were coded yes, and those who reported none to low physical activity as no. Independent Variables The main independent variable in this study is employment status.In the survey, employment status was assessed with one question, "What is your current main job?"The answers were dichotomized into two categories; employed and not employed.The employed group included people who were employed and those on parental leave (with employment).The group of not employed included the unemployed, students, and those who were inactive, such as the elderly people and those with disability. Education was assessed by using Statistics Sweden's educational register from 2009.The classification is made for the person's highest level of education according to Swedish educational nomenclature (SUN) 2000.For this study, three levels of education were created: primary school or similar, secondary school/similar, and university/similar. Income data were collected from income and taxation registers (related to 2008) as total individual income, and three groups were created: (a) low income, <250,000 SEK, (b) medium income, 250,000-750,000 SEK, and (c) high income, >750,000 SEK a year. Social support was measured with the question: "Do you have someone you can share your deepest feelings with and confide in?"There were two possible answers that distinguish those with social support (yes) from those without social support (no). Self-rated health was assessed with the question "How do you rate your general health?" with the options "very good," "good," "neither good nor poor," "poor," and "very poor."In the analysis, the categories very good and good were combined as good health, and the categories neither good nor poor, poor, and very poor were combined as poor health. Smoking habits were assessed by the following questions: (a) "Do you smoke daily?"(b) "Does it happen that you smoke every now and then?" and (c) "Have you before smoked daily for at least six months?"Each of the questions could be answered with yes and no.In this study, smoking habits were divided into three groups: daily smokers, individuals who had stopped smoking, and those who had never smoked. Risky consumption of alcohol was assessed by three questions: (a) "How often have you drunk alcohol in the past 12 months?"(b) "How many "glasses" (example was given) do you drink on a typical day when you drink alcohol?" and (c) "How often do you drink six "glasses" or more on the same occasion?"A new composite variable was used for this study and was categorized as yes (risk consumption) and no (no-risk consumption). 
Statistical Analysis

The analyses consisted of descriptive statistics (frequencies) and weighted logistic regression analysis, which was carried out by fitting four different models. The first model (model I) included the relationship between employment status and physical activity only. The second model (model II) added demographic variables; model III added health-related variables. The final model (model IV) added socioeconomic variables. The results of the logistic regressions are presented as odds ratios (OR) with 95% confidence intervals (CIs). All analyses were carried out using SPSS software, version 20. [23]

Ethical Approval

This study was approved by the Ethics Committee of the Swedish National Institute of Public Health and the Regional Ethical Committee in Uppsala and performed in compliance with the Helsinki Declaration. Verbal informed consent was obtained from all participants.

Results

The distribution of the variables included in the study sample is presented in Table 2. There were 57.2% (n = 2,427) of subjects who reported no or low physical activity and 41.7% (n = 1,770) who reported moderate to vigorous physical activity. The results of the logistic regression analyses are presented in [Table 3]. In addition, people who were not employed with primary school or similar education and those with an annual income lower than 250,000 SEK showed increased odds of low physical activity of 1.65 (95% CI: 1.65-2.01) and 1.74 (95% CI: 1.37-2.22), respectively. Furthermore, the results showed that people who were out of work and reported poor health had higher odds of physical inactivity of 2.21 (95% CI: 1.91-2.56) [Table 3, model IV].

Discussion

This study found a relationship between employment status and physical inactivity, and the relationship was explained mainly by health-related and socioeconomic factors. Similar results have been found in other studies [24,25] from a variety of contexts, in times of relative economic prosperity. For instance, a study concerning correlates and predictors of physical activity over time observed that income and education showed a strong and consistent positive effect on physical activity. [25] Another study carried out in the United Kingdom found that people with upper secondary school-level qualifications or above were more likely to take part in regular exercise, whereas those with lower second-level education or less were five times less likely to play sports than those with third-level education. [26] In addition, Owen et al. reported that adult participation in physical activity was influenced by a range of personal, social, and environmental factors, and that individual-level variables such as socioeconomic status and perceived self-efficacy demonstrated the strongest associations with physical activity behavior. [27,28] However, other studies have reported contradictory results. For instance, a study concerning the relationship between physical activity and socioeconomic status reported that the statistical significance observed was totally eliminated when physical activity was conducted around the home and at work, which would indicate the importance of the context where the activity took place.
[29] In a recent study going beyond just the relationship between income and physical activity, Hyytinen and Lahtonen [30] found that the long-term income of physically active male subjects was approximately 14%-17% higher than that of less active male subjects. Unemployment is a lasting stress situation that requires the person to adapt. Unemployed persons are in a very difficult situation, exacerbated by social marginalization and their own sense of failure, which in turn leads to feelings of worthlessness. [9] Many argue that the stress experienced by unemployed people might be the underlying cause of unhealthy behavior. The stress hypothesis stipulates that job loss causes psychological distress and unhealthy behavior. [9,33,34] Moreover, job loss affects health behavior through an income effect.

In our study, controlling for socioeconomic variables such as education and income, and for other health variables (smoking and self-reported health), eliminated the statistically significant relationship between employment status and physical inactivity. In 2010, when the data for this study were collected, there was a massive loss of jobs (with income loss), which might have caused financial strain with possible adverse effects on health behaviors (including physical activity). A further indication of the role of income is seen in [Table 3], where respondents who were not employed and had an annual income lower than 250,000 SEK showed increased odds of physical inactivity of 1.74 (95% CI: 1.37-2.22). We also found that age, gender, marital status, and social support were not associated with physical activity. However, studies carried out elsewhere have found different results. In relation to gender and age, some studies have found that participation in physical activity was consistently higher in men than in women and inversely associated with age. [33] In addition, mixed results have been reported regarding the relationship between physical activity and marital status. [33] Concerning social support, several studies have found it to be associated with levels of physical activity. [35,36] For instance, Resnick et al. reported that friends' support indirectly influenced exercise through self-efficacy and outcome expectations. In the view of the authors, interventions aimed at improving exercise behavior in adults, and especially older adults, should incorporate social support to strengthen self-efficacy and outcome expectations. [36] We found that smoking was not statistically significantly associated with physical activity. However, other studies have reported that, compared with inactive individuals, physically active individuals smoked fewer cigarettes and were more likely to be nonsmokers or occasional smokers. [37,38] The differences in results regarding the relationship between smoking and physical activity might be related to the fact that the studies carried out in Cyprus and Greece assessed populations of adolescents and young adults (19-30 years of age), compared with the 16-65 years of age targeted in our sample.

Furthermore, our study did find a relationship between self-reported health and physical activity [Table 3]. This result is in line with findings from a European study carried out in 2004 in 15 member states. The authors reported a positive relationship between physical activity and self-rated health across population subgroups divided by age, gender, income, and educational attainment. [39] However, Ker et al.
[40] observed that older adults who were physically active outdoors accumulated significantly more physical activity, but their self-rated health was not significantly better than that of those who were physically active indoors. BMI was not associated with employment status (results not shown).

Strengths and Limitations

This study used a cross-sectional design, which makes it difficult to estimate causal relationships between unemployment and physical activity and their direction. In addition, some authors have stressed the difficulties of measuring physical activity, which in turn can cause problems with the interpretation of findings. [41] In addition, the response rate in the study was around 50%, which is similar to response rates in Swedish population-based surveys. [42,43] However, the response rate is unlikely to have influenced the results. As mentioned earlier, Statistics Sweden collected the data and applied population weightings to estimate prevalence at the population level. These weightings were added with the help of information from the registers of the total population of Gävleborg County. [45] However, the collection of data in the Health in Equal Terms survey is of high quality, even if physical activity was based on people's self-report. [42,43]

Conclusion

This study found an association between not being employed and physical inactivity. The association was explained by health and socioeconomic factors. In addition, the study found a relationship of low education, low income, and poor self-rated health with physical inactivity. However, longitudinal studies are warranted to further disentangle potential mechanisms behind the observed association between employment status and physical activity (e.g., the effects of neighborhoods and the availability of leisure infrastructure). From the policy perspective, our study suggests the need to promote physical activity during times of high unemployment in order to foster better health behaviors.

Table 1: Questionnaire flow in Gävleborg County's Health in Equal Terms Survey 2010
Table 2: Sample and percentage distribution of the individual variables included in the analysis, Health in Equal Terms Survey, Gävleborg, 2010
Table 3: Odds ratios (ORs) with 95% confidence intervals (CIs) of the relationship between employment status and physical inactivity, Health in Equal Terms Survey, Gävleborg, 2010
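As an illustration of the nested-model strategy described under Statistical Analysis (models I-IV reported as odds ratios with 95% CIs), here is a minimal Python sketch. The file name, column names and coding are hypothetical placeholders and do not reproduce the HET survey data or the SPSS setup used in the study; survey weights are only indicated in a comment.

```python
# Hypothetical sketch of the nested logistic-regression models (I-IV).
# Variable names are placeholders; they do not reproduce the HET survey coding.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("het_survey_2010.csv")   # assumed file with one row per respondent

models = {
    "I":   "inactive ~ not_employed",
    "II":  "inactive ~ not_employed + age_group + sex + marital_status",
    "III": "inactive ~ not_employed + age_group + sex + marital_status"
           " + self_rated_health + smoking + risky_alcohol",
    "IV":  "inactive ~ not_employed + age_group + sex + marital_status"
           " + self_rated_health + smoking + risky_alcohol + education + income",
}

for name, formula in models.items():
    fit = smf.logit(formula, data=df).fit(disp=False)
    # Survey weights could instead be handled with a weighted GLM, e.g.
    # smf.glm(formula, data=df, family=sm.families.Binomial(), freq_weights=df["weight"]).
    or_table = pd.DataFrame({
        "OR": np.exp(fit.params),
        "2.5%": np.exp(fit.conf_int()[0]),
        "97.5%": np.exp(fit.conf_int()[1]),
    })
    print(f"Model {name}\n{or_table.round(2)}\n")
```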
3,764.6
2016-01-01T00:00:00.000
[ "Economics", "Medicine" ]
ANALYSIS OF A MATHEMATICAL MODEL RELATED TO CZOCHRALSKI CRYSTAL GROWTH This paper is devoted to the study of a stationary problem consisting of the Boussinesq approximation of the Navier–Stokes equations and two convection–diffusion equations for the temperature and concentration, respectively. The equations are considered in 3D and a velocity–pressure formulation of the Navier–Stokes equations is used. The problem is complicated by nonstandard boundary conditions for velocity on the liquid–gas interface where tangential surface forces proportional to surface gradients of temperature and concentration (Marangoni effect) and zero normal component of the velocity are assumed. The velocity field is coupled through this boundary condition and through the buoyancy term in the Navier– Stokes equations with both the temperature and concentration fields. In this paper a weak formulation of the problem is stated and the existence of a weak solution is proved. For small data, the uniqueness of the solution is established. Introduction In this paper we investigate the solvability of a stationary three-dimensional mathematical model describing the processes in the melt during a silicon single crystal pulling by the Czochralski method.The main feature of the Czochralski method (cf.e.g.[3,13,15]) is that the grown single crystal is pulled from its melt which is situated in a crucible (cf.Fig. 1).The device is rotationally symmetrical and, during the pulling, both the crucible and the crystal perform rotational motions around the symmetry axis and, at the same time, motions in the vertical direction which correspond to the crystal growth velocity and maintain the melt free surface in a constant position.The crystal growth velocity is usually considerably smaller than the typical Cross-section through the melt and crystal in a Czochralski apparatus. velocities in the melt.The crucible is made of vitreous silica which leads to a contamination of the melt by oxygen. The region occupied by the melt is assumed to be known a priori and will be denoted by Ω in the following.The boundary ∂Ω of Ω comprises the crucible wall Γ W , the melt free surface Γ LG and the interface Γ LS between the melt and the growing crystal (cf.Fig. 1).The mathematical model then consists of the following partial differential equations defined in Ω and boundary conditions prescribed on ∂Ω: We use the following notations: v is the velocity, p is the pressure, θ is the temperature, c is the oxygen concentration, I is the identity tensor, n is the unit outward normal vector to ∂Ω, and ∇ s denotes the surface gradient (in this paper, ∇ s • = (I − n ⊗ n) (∇•)| ∂Ω ).The constants α 1 , . . ., α 7 and the functions f 1 , f 2 , ϕ 1 , ϕ 2 , v b , θ b , c b will be specified in the next section. The equations (1.1)-(1.4)can be derived from general balance laws for linear momentum, mass, energy, and mass of a constituent, respectively.The equations (1.1) and (1.2) represent the Boussinesq approximation of the Navier-Stokes equations.Among the boundary conditions, the most interesting one is the boundary condition (1.6) describing the so-called Marangoni effect, i.e. 
the fact that surface tension variations due to temperature and concentration gradients induce tangential surface forces on the melt free surface.A detailed derivation and explanation of the model (1.1)-(1.8)can be found in [8].The solvability of a rotationally symmetrical case of (1.1)-(1.8)defined in a simplified geometry and taking into account magnetic forces was investigated in [4].The formulation of (1.1)- (1.8) used in [4] requires to apply weighted Sobolev spaces, but on the other hand, it considerably simplifies the treatment of (1.2) and (1.6). Remark 1.1.The constants α 1 , . . ., α 7 have the following physical meanings: Re Pr , where Re is the Reynolds number, Pr is the Prandtl number, Sc is the Schmidt number, Gr is the Grashof number (Gr θ for temperature, Gr c for concentration), and Ma is the Marangoni number (Ma θ for temperature, Ma c for concentration).The usual definitions of the functions f 1 , f 2 , ϕ 1 , ϕ 2 are for θ and c belonging to some bounded intervals (θ 1 , θ 2 ) and (c 1 , c 2 ), respectively.Outside these bounded intervals, the functions f 1 , f 2 , ϕ 1 , ϕ 2 have no physical meaning and can be defined arbitrarily.In the above relations, e g is a unit vector in the direction of the gravity, Bi is the Biot number, Rd is the radiation number, and k 0 is the segregation coefficient. Let us mention the most important difficulties arising when investigating the weak solvability of (1.1)- (1.8).The first difficulty comes from the fact that full Dirichlet boundary conditions are prescribed only on a part of the boundary whereas, on the remaining part, only the normal component of the velocity is known a priori.That also causes difficulties in case of the Navier-Stokes equations since all methods used in the literature for proving their solvability are based on the assumption that there exists a divergencefree function v b satisfying the Dirichlet boundary conditions for the velocity such that the integral Ω v b •(∇ v) v dx is, in some sense, small (cf.e.g.(4.29), (4.35)).It is known that such assumption is fulfilled if full Dirichlet boundary conditions for the velocity are prescribed on the whole boundary and the flux through each connected component of the boundary vanishes (cf.[7,p. 287,Lemma 2.3]).However, in the case considered here, the existence of a suitable function v b is an open problem.That means that, in case of nonhomogenous mixed boundary conditions of the mentioned type, which often occur in various applications, it is not known how to prove the solvability of the stationary incompressible Navier-Stokes equations in general and it is rather surprising that such a fundamental problem still remains unsolved. Another difficulty consists in the nonstandard boundary condition (1.6) for the velocity on Γ LG .Here, a suitable generalization in form of a linear functional defined on a subspace of H 1 2 (∂Ω) 3 has to be constructed for the occurring surface gradients of temperature and concentration.This generalization should be also appropriate for a numerical solution of (1.1)-(1.8)by means of the finite element method. 
Finally, the investigations of (1.1)-(1.8)are complicated by the coupling between the equation (1.1) and the equations (1.3) and (1.4) which is realized through the buoyancy terms in (1.1) and through the boundary condition (1.6).In Theorem 4.5, we shall prove that the solvability does not depend on the magnitude of the constants in the coupling terms.Therefore, it suffices to prove the solvability only for those cases when these constants are small, which is, of course, much easier. The plan of the paper is as follows.In Section 2, we formulate assumptions on the problem (1.1)-(1.8)and introduce some notations.In Section 3, we construct the mentioned generalization of the surface gradient, derive a weak formulation and show the equivalence between classical solutions and smooth weak solutions.In Section 4, we establish an equivalent operator formulation and prove the solvability of the weak formulation by applying the Leray-Schauder principle.The Leray-Schauder principle allows us to make a weaker assumption on the Dirichlet boundary condition v b for the velocity than usually made in the literature.Further, we prove that, for small data, the weak solution is unique. Remark 2.1.The assumption on the existence of the extension m is made because of the treatment of the surface gradients in the boundary condition (1.6).It can be shown that this assumption is satisfied if Γ LG is a C 1,1 surface and if there exists a finite number of local Cartesian coordinate systems providing a description of ∂Ω (cf.[10, p. 14]) with the property that, in each of these coordinate systems, the projection of the respective part of Γ LG into the (x 1 , x 2 )-plane is a set with a Lipschitz-continuous boundary.In this case, we even have m ∈ W 1,∞ (IR 3 ) 3 .Another sufficient condition for the existence of m ∈ W 1,∞ (IR 3 ) 3 is the existence of a domain Ω with a C 1,1 boundary satisfying Γ LG ⊂ ∂ Ω. We make the following assumptions on the data of the problem (1.1)-(1.8): ) ) ) Generally, the condition v b • n ≥ 0 on Γ LS is a technical assumption needed for proving both the existence and the uniqueness of the weak solution.In the case of the Czochralski method, however, this assumption is a natural condition which expresses that the crystal is really growing and not melting. Weak Formulation To derive a weak formulation of (1.1)-(1.8),we first assume that the functions v, p, θ and c are a classical solution of our problem.We multiply the equations (1.1)-(1.4)by arbitrary functions , respectively, integrate them over Ω, use the identities apply the Gauss integral theorem (cf.[5, p. 33]) and substitute the Neumann boundary conditions and the condition (1.2).Then we obtain ) ).However, this new formulation makes it possible to introduce more general solutions in a natural way.In fact, all terms with the exception of the two last ones in (3.5) are well defined for functions v, w, p, λ, θ, η, c, q belonging to the Sobolev spaces H 1 (Ω) 3 , L 2 (Ω) and H 1 (Ω), respectively.Therefore, it remains to generalize the surface integrals of the type [10, p. 72, Theorem 3.8] and [10, p. 69, Theorem 3.4]), the terms in the square brackets in (3.9) are elements of L 2 (Ω) 3 and hence the mapping d is well defined.Using the Hölder inequality, we obtain and hence it follows from the Sobolev imbedding theorems that which implies the continuity of d. Let us assume for a moment that m ∈ C ∞ (Ω) 3 .Then we have for any According to the trace theorems (cf.[10, p. 
84, Theorem 4.2]), the righthand side of (3.13) is defined for any and H 1 (Ω), we obtain the property (3.10).Now, due to (3.10), we have The constant Thus, we have which means that the mapping d s is continuous.By (3.10), we obtain Using the density of H 2 (Ω) into H 1 (Ω), we observe that, for any ζ ∈ H 1 (Ω) and w ∈ W, the value <d s (ζ), γ(w)> is independent of the choice of m.Since, for any w ∈ W, we have n According to the Friedrichs inequality (cf.[10, p. 20, Theorem 1.9]), | • | 1,Ω is a norm on W, equivalent to • 1,Ω , and hence the constant C s is finite by (3.12). Thus, we see that the functional d s (ζ) is a reasonable generalization of the surface gradient on Γ LG for functions ζ ∈ H 1 (Ω) and hence, replacing the last two integrals in (3.5) by −α 3 <d s (θ), γ(w)> − α 4 <d s (c), γ(w)>, we can define a weak formulation of (1.1)-(1.8).First, however, let us introduce the following notations: Then the functions v, p, θ, c are a weak solution of the problem (1.1) and The reason is that, for a plane Γ LG , any functions v, w ∈ C 2 (Ω) 3 with v • n = w • n = 0 on Γ LG satisfy w • (∇v) T n = 0 on Γ LG , which makes it possible to use the identity instead of the identity (3.1).Let us mention that, for a plane Γ LG with a normal vector n, the formula (3.9) can be simplified to The following two theorems show that the weak solution really is a meaningful generalization of the classical solution. Since C ∞ (Ω) 3 is a dense subspace of H 1 (Ω) 3 , we deduce that (3.24) holds for any w ∈ H 1 (Ω) 3 2 Particularly, (3.25) holds for any w ∈ C ∞ 0 (Ω) 3 with a vanishing right-hand side and since the terms in the square brackets are continuous, we infer that the functions v, p, θ, c fulfil the differential equation (1.1) in the classical sense.Then it follows from (3.25) that Existence and Uniqueness of the Weak Solution In this section, we investigate the existence and uniqueness of the weak solutions of the problem (1.1)-(1.8).First, in Theorem 4.1, we show that the pressure can be eliminated from the weak formulation (3.19)-(3.23)and we can confine ourselves to investigations for the functions v, θ and c.Then we construct an operator formulation which enables to perform a proof of the weak solvability for small values of the constants α 1 , . . ., α 4 applying the Leray-Schauder principle.Using a simple scaling argument, we extend this existence result to arbitrarily large constants α 1 , . . ., α 4 .Finally, we show that the weak solution is unique for small data. and Then there exists a unique function p ∈ L 2 0 (Ω) such that the functions v, p, θ, c are a weak solution of the problem (1.1)- (1.8). Lemma 4.1.The spaces V, Θ and C are separable Hilbert spaces for the scalar products a 1 (•, •) and a 2 (•, •), respectively, and the space X is a separable Hilbert space for the scalar product where U = (v, θ, c), W = (w, η, q) and U, W ∈ X.The norm U X ≡ (U, U) X is equivalent to the norm v 1,Ω + θ 1,Ω + c 1,Ω , i.e., there exist constants C 1 and C 2 depending only on Γ W , Γ LG and Ω such that For any sequence Proof.Using the Friedrichs and Korn inequalities (cf.[10, p. 20, Theorem 1.9] and [11, p. 
97, Lemma 3.1]), we deduce that the bilinear forms a 1 and a 2 are scalar products on the respective spaces and that the norms induced by these scalar products are equivalent to the norm • 1,Ω .Thus, for proving that the spaces V, Θ and C are Hilbert spaces for the mentioned scalar products, it suffices to show that these spaces are closed subspaces of H 1 (Ω) 3 and H 1 (Ω), respectively, with respect to the norm • 1,Ω .Consider any u ∈ H 1 (Ω) 3 and assume that there exists a sequence which implies that div u = 0 (cf.[10, p. 56, Proposition 1.1]).The continuity of the trace operator gives γ(u n ) → γ(u) in L 2 (∂Ω) 3 and also γ( Since H 1 (Ω) 3 ⊂ V and H 1 (Ω) ⊂ Θ , C and since, by the Hahn-Banach theorem (cf.[12, p. 186, Theorem 4.3-A]), any functional belonging to V , Θ or C can be extended to a functional belonging to H 1 (Ω) 3 or H 1 (Ω) , respectively, we obtain the equivalence (4.8). In the following lemma, we shall use the fact that, according to the Sobolev imbedding theorems and the trace theorems (cf.[10, pp.69 and 84]), there exist finite constants which depend only on Ω. Lemma 4.2.The trilinear form b 1 is continuous on H 1 (Ω) 3 3 and satisfies the inequalities The trilinear form b 2 is continuous on H 1 (Ω) 3 ×H 1 (Ω)×H 1 (Ω) and satisfies the inequalities Proof.Using the imbedding H 1 (Ω) ⊂ L 4 (Ω) (cf.[10, p. 69]) and applying the Hölder inequality, we obtain (4.9).For proving the inequalities (4.10) and (4.11), we use the identities and the Gauss integral theorem.In case of the trilinear form b 2 , we can proceed in the same fashion.Lemma 4.3.Choose any U, U, W ∈ X, U = (v, θ, c), U = ( v, θ, c), W = (w, η, q), and denote Then there exists a constant C independent of U, U and W such that where Proof.According to the Taylor formula and (2.2), we have and hence it follows from the Hölder inequality that the mappings F 1 and F 2 are well defined and that Further, using (2.4) and (2.5), we infer that the mappings Φ 1 and Φ 2 are well defined too and that Thus, applying the trace theorems and Lemma 4.2, we obtain (4.15).Using (2.2)-(2.5), the Taylor formula and the Hölder inequality, we deduce that The operator A is compact, strongly continuous and satisfies for any U ≡ ( v, θ, c) ∈ X and any W ≡ (w, η, q) ∈ X where C is independent of U and W. The operator L is linear and continuous and satisfies for any U ∈ X Then, according to Lemmas 4.3, 3.1 and 4.1, we have a(U, •), l(U, •) ∈ X for any U ∈ X and hence it follows from the Riesz representation theorem that there exist uniquely determined operators A, L : X → X satisfying Clearly, the operator L is linear and the equivalence (4.24) holds.Consider any sequence Then, according to (4.16) and (4.7), [10, p. 106, Theorem 6.1]), the compactness and linearity of the trace operator γ : H 1 (Ω) → L 2 (∂Ω) (cf.[10, p. 107, Theorem 6.2]) and the fact that any weakly convergent sequence is bounded, we deduce that A(U n ) → A(U) in X.Thus, the operator A is strongly continuous and since the space X is (as a Hilbert space) reflexive, the operator A is compact. The inequalities (4.27) and (4.28) follow using Theorem 3.1 and the inequality (4.7).Since L is linear, its continuity is a consequence of (4.27). Since the solutions of the problem (4.2)-(4.5)do not depend on the particular choice of the functions v b , θ b and c b , we can use the following lemma to reduce the influence of the second term on the right-hand side of (4.26).For proving the solvability of the problem (4.2)-(4.5) in case of small constants α 1 , . . 
., α 4 , we shall apply the Leray-Schauder principle which is formulated in the following theorem. Then there exists a positive constant M such that the problem (4.2)-(4.5) is solvable for any constants α 1 , α 2 , α 3 , α 4 ∈ (0, M) and α 6 , α 7 ∈ IR + and for any functions Proof.Consider any constants α 5 , α 6 , α 7 ∈ IR + and any functions ) and (4.29).Let ε be the constant from the assumption (4.29) and choose any ϑ ∈ (ε, 1).Set and consider any constants α 1 , α 2 , α 3 , α 4 ∈ (0, M).According to [7, p. Further, let A, L : X → X be the operators from Theorem 4.2.According to (4.27), we have (2 + ϑ) U X ≤ 3 L U X for any U ∈ X and since L is linear, there exists a continuous linear inverse operator L −1 .Let us assume that the problem (4.2)-(4.5) is not solvable.Then, due to the fact that the operator L −1 A is compact, it follows from Theorems 4.2 and 4.3 that the solutions of the equations U = −λ L −1 A(U), λ ∈ (0, 1), form an unbounded set.Thus, there exist sequences {λ n } ∞ n=1 ⊂ (0, 1) and and U n X > n.Denoting U n = ( v n , θ n , c n ), we obtain by (4.28) and (4.26) Let us set U n = U n / U n X and denote U n = ( v n , θ n , c n ).Since any Hilbert space is reflexive and the sequences {λ n } ∞ n=1 and { U n } ∞ n=1 are bounded, we can assume without loss of generality that there exist λ 0 ∈ <0, 1> and U ≡ ( v, θ, c) ∈ X such that λ n → λ 0 and U n U in X.Clearly, U X ≤ 1 (cf.[12, p. 209]).By (4.31), we have and hence, applying (4.8), (4.10) and the arguments used for proving the strong continuity of A in the proof of Theorem 4.2, we obtain as a limit of (4.32) Consider any w ∈ V and set W = (w, 0, 0).Then W ∈ X and it follows from (4.30) that, for any Since the first term on the left-hand side converges to zero, we deduce from (4.25) that λ n b 1 ( v n , v n , w) → 0. Thus, using the same arguments as above, we obtain which means that the inequality (4.33) holds for any v b satisfying the conditions (4.1).This is, however, not possible since, according to the assumption (4.29), there exists v b satisfying (4.1) and the inequality Therefore, the problem (4.2)-(4.5) is solvable. Remark 4.2.The Leray-Schauder principle was already used by Ladyženskaja [9] for proving the weak solvability of the stationary incompressible Navier-Stokes equations.However, the proof given in [9] is not correct.The substantial shortcoming is an incorrect argumentation why the inequality Then, according to (4.26) and (4.28), the operator A ≡ A + L satisfies the set of all the solutions of the problem (4.2)-(4.5). Then we have for any positive real numbers The functions f 1 , f 2 , ϕ 1 , ϕ 2 satisfy the relations (2.2)-(2.5). Proof.The statement immediately follows by writing θ and c as θ (θ/ θ) and ĉ (c/ĉ), respectively, in (4.2)-(4.5).Now, as a consequence of the two above theorems, we obtain the main result of this paper.Defining the functions f 1 , f 2 , ϕ 1 , ϕ 2 as in Theorem 4.5, it follows from Theorem 4.4 that the set is nonempty.Thus, according to Theorem 4.5, the set is nonempty as well and hence the problem The remainder of the paper is devoted to investigations of the uniqueness of the weak solution.Similarly as for the Navier-Stokes equations, we shall be able to prove the uniqueness for small data.First, let us introduce some notations.In consequence of the trace theorems and the Friedrichs inequality (cf.[10, pp. 
84 and 20]) there exist finite constants depending only on Γ W , Γ LG and Ω.Further, we denote For proving the uniqueness of a weak solution, we shall need certain a priori estimates which we establish in the following lemma. 3 be an arbitrary function satisfying γ(w) = z and let us set <d s (ζ), z> = d(ζ, w) .(3.14) Then (3.14) defines a continuous linear mapping d s : H 1 (Ω) → γ( W) which does not depend on the choice of the extension m in the definition of the mapping d.Moreover, for ζ ∈ H 2 (Ω), we have Remark 3 . 1 .Remark 3 . 2 .Remark 3 . 4 . .23) The weak solution does not depend on the particular choice of the functions v b , θ b and c b satisfying (3.17).In view of (3.21) and (2.7), we have b(v, λ) = 0 ∀ λ ∈ L 2 (Ω) and hence any weak solution satisfies the condition div v = 0. Remark 3.3.Since the pressure p is determined by (1.1), resp.by (3.20), up to an arbitrary additive constant, we consider only pressures with zero mean value.If Γ LG is plane, then, in (3.20), the bilinear form a 1 can be replaced by Ω ∇v • ∇w dx . 3 , where we used the fact that (I − m ⊗ m) w ∈ W ∀ w ∈ W and (I − n ⊗ n) ∇ s = ∇ s .Again, the terms in the square brackets are continuous and hence we deduce that the Neumann boundary condition (1.6) is also fulfilled in the classical sense.The validity of the equations (1.3) and (1.4) and of the Neumann boundary condition in (1.7) can be proven in the same fashion.For proving the Neumann boundary condition in (1.8), we have to apply Proposition 1.1 from [10, p. 56], since n| Γ LS is not continuous in general.The fulfilment of the Dirichlet boundary conditions immediately follows from (3.19). ( 4 .Remark 4 . 3 . 33) is valid for any extension v b with the same function v. Relations analogous to (4.33) and (4.34) can be also obtained when proving the weak solvability of the stationary incompressible Navier-Stokes equations using the contradiction argument developed by Leray (cf.[6, p. 55]).Leray himself considered only problems with Dirichlet boundary conditions v b prescribed on the whole boundary ∂Ω and therefore he could use the fact that v ∈ V. Assuming, in addition, that the flux of v b through each connected component of ∂Ω vanishes, he then showed that, in consequence of (4.34) (with λ 0 = 0 and V replaced by V), any divergence-free extension v b of v b satisfies b 1 ( v, v, v b ) = 0.In case of mixed boundary conditions for the velocity of the type we deal with, a proof of such a conclusion still remains an open problem. Remark 4 . 4 . Let ϑ, α 1 , . . ., α 7 and f 1 , f 2 , ϕ 1 , ϕ 2 , v b , θ b , c b be defined as in the proof of Theorem 4.4 and let us assume that the assumption (4.29) holds for any v ∈ V with the same function v b , i.e., that [8] Theorem 4.4 can be proved by means of Galerkin's method.The validity of (4.29) and (4.35) is known if Γ LG = Ø and the flux of v b through each connected component of ∂Ω vanishes (cf.[7, p. 
287, Lemma 2.3]).Unfortunately, if meas 2 (Γ LG ) = 0, such general results are not available.Of course, the assumptions (4.29) and (4.35) are satisfied if α 5 and v b are sufficiently small but, in practical cases, this requirement is usually too restrictive.Some sufficient conditions for the validity of (4.29) and (4.35) were derived in[8], however, their fulfilment is generally not easy to verify.Nevertheless, it can be shown that, for a rotationally symmetrical domain Ω corresponding to the Czochralski method (cf.Fig.1) and for v b representing rotational motions of the crucible and the crystal, assumptions (4.29) and (4.35) always hold.This result is very important in context of the Czochralski method since the non-rotational components in the Dirichlet boundary conditions for the velocity can be mostly neglected and it is sufficient to consider them only in the Neumann boundary condition(1.8).That leads to a problem which is solvable under the assumptions from Section 2.For proving the solvability of (4.2)-(4.5)for large values of α 1 , . . ., α 4 , the following property of the solution set is essential.For arbitrary positive real numbers α 1 , . . ., α 7 and arbitrary functions f 1
6,210.4
1998-01-01T00:00:00.000
[ "Mathematics", "Physics" ]
Particle Swarm Optimization Algorithm Based on Chaotic Sequences and Dynamic Self-Adaptive Strategy

To deal with the problems of premature convergence and the tendency to fall into local optima in traditional particle swarm optimization, a novel improved particle swarm optimization algorithm was proposed. A self-adaptive inertia weight factor was used to accelerate the convergence speed, and chaotic sequences were used to tune the acceleration coefficients for the balance between exploration and exploitation. The performance of the proposed algorithm was tested on four classical multi-objective optimization functions by comparison with the non-dominated sorting genetic algorithm and the multi-objective particle swarm optimization algorithm. The results verified the effectiveness of the algorithm, which alleviates the premature convergence problem, converges faster, and has a strong ability to jump out of local optima.

Introduction

The particle swarm optimization (PSO) algorithm is a swarm intelligence optimization algorithm that derives from the aggregation behavior of organisms, such as a simulation of the behavior of a flock of birds or a school of fish. Compared with other intelligent algorithms, the PSO algorithm has a simple structure and few parameters, is easy to implement, and has been applied in areas such as pattern recognition. In particular, it is applicable to nonlinear, multimodal, non-differentiable, and otherwise complex optimization problems [1] [2] [3].

However, the standard PSO algorithm also has shortcomings, such as premature convergence and poor local search ability, similar to other intelligent algorithms [4] [5] [6]. For example, when optimizing complex high-dimensional problems, the population may cluster around a certain point and stagnate without finding the global optimum, resulting in premature convergence. In other words, because of premature convergence the algorithm is not guaranteed to converge to the global extreme point. Meanwhile, in the PSO search process, when a particle approaches or enters the most promising region, the convergence speed slows markedly; that is, in the later stage of optimization the search ability is poor. Thus, the application of the PSO algorithm is restricted.
To address these shortcomings of the PSO algorithm, researchers have proposed many improvement strategies [7]. The inertia weighting factor [8], the contraction factor and adaptive mutation operators are the most representative, including the linear decrease method [9], the fuzzy adaptive method [10], the distance-information method and other adaptive adjustment methods for the inertia coefficient [11], the PSO algorithm with a compression factor, and the PSO algorithm with an adaptive mutation operator. In addition, hybrid PSO algorithms that combine PSO with cooperative strategies, chaos theory [12] and other algorithms [13] have also attracted researchers, such as the quantum PSO algorithm with a chaotic mutation operator [14]. There is also much research on discrete PSO and multi-objective PSO algorithms [15] [16]. At present, improvements to the PSO algorithm mainly focus on two aspects: adjustment of the algorithm parameters and updates to the particle structure and trajectory. The aim is to resolve or alleviate slow local search, premature convergence and related issues, and to improve the convergence speed and accuracy, and hence the overall performance, of the algorithm [17]. Although the proposed particle swarm improvements raise both performance and efficiency, it remains difficult to improve the local search ability of the algorithm while avoiding premature convergence. To provide better, more efficient, and cheaper particle swarm algorithms, academic and industry researchers have been exploring and experimenting with new approaches [18].

In order to improve the search precision and convergence speed of the standard PSO algorithm, this paper proposes a more efficient and faster-converging algorithm by combining chaos theory with a dynamic adaptive weight adjustment strategy: a new chaos self-adaptive particle swarm optimization algorithm (CSAPSO). The algorithm improves convergence speed by adaptively adjusting the inertia weight as the search evolves. A chaotic sequence from chaos theory is used to tune the learning factors of the algorithm, so that it can escape from a local optimum when premature convergence sets in. Finally, through experiments on multi-objective optimization problems, the CSAPSO algorithm is compared with and analyzed against the standard PSO algorithm and classical multi-objective algorithms; the feasibility and validity of the algorithm are verified, and its convergence speed and accuracy are discussed.
Standard PSO

The Particle Swarm Optimization (PSO) algorithm is a population-based evolutionary algorithm proposed by Eberhart and Kennedy [19] in 1995, based on the social behaviors of birds. The PSO algorithm is derived from the behavioral characteristics of biological groups and is used to solve optimization problems. It has the advantages of easy description, easy implementation, few tuning parameters, fast convergence and low computational cost, and it places no high demands on memory or CPU speed. In the optimization process, a potential solution of the problem is treated as a "particle" in an n-dimensional space, and the particle flies through the solution space with a certain velocity and direction. During the iterations, every particle tracks two variables: the best position found by the particle itself (pbest) and the best position found by all particles (gbest). Assume that, in an n-dimensional search space, the population consists of m particles; the position of the ith particle is denoted X_i = (x_i1, x_i2, ..., x_in), its velocity V_i = (v_i1, v_i2, ..., v_in), its individual best position p_i, and the global best position of the population p_g. During iteration k + 1, each particle updates its velocity and position through Formulas (1) and (2):

v_id^(k+1) = ω·v_id^k + C1·rand()·(p_id^k − x_id^k) + C2·rand()·(p_gd^k − x_id^k)   (1)
x_id^(k+1) = x_id^k + v_id^(k+1)   (2)

where v_id^k and x_id^k are, respectively, the velocity and position of particle i in dimension d at the kth iteration; p_id^k is the position of the individual extremum (pbest) of particle i in dimension d; p_gd^k is the position of the global extremum (gbest) of the whole population in dimension d; ω is the inertia weight factor, which lets the particles retain their inertia of motion and keeps their ability to expand the search space; C1 and C2 are the learning factors, which weight the stochastic acceleration terms pulling each particle toward the extremum positions; and rand() is a random number within (0, 1).

CSAPSO Algorithm

The standard PSO algorithm has its own limitations; for example, the behavior of the algorithm depends strongly on the values of its parameters. When the algorithm is applied to complex high-dimensional optimization problems, it tends to converge to some extreme point and stagnate before the global optimum is found; that is, premature convergence occurs easily. These points can be a local extreme point or lie in the neighborhood of one. In addition, the convergence rate of the algorithm becomes slow when approaching or entering the region of the optimal solution. The early convergence rate of the PSO algorithm is fast, but in the later stage, when the algorithm converges toward a local minimum, the local search is slow owing to the lack of an effective local search mechanism. According to the velocity-update formula of the PSO algorithm, the change of particle velocity is determined by three factors: 1) the inertia weight factor, which carries the velocity information from the previous moment and expresses the relationship between the current and previous velocities; 2) the cognitive factor, the exploitation coefficient, which acts on the error with respect to the particle's own optimum and reflects the particle's local mining and exploitation capability; 3) the exploration factor, the social-sharing coefficient, which acts on the error with respect to the global optimum and reflects the information sharing and cooperation among particles. In this setting, the inertia coefficient determines the search step length: when it is larger, it favors global search; when it is smaller, it helps local exploration. The cognitive and exploration factors are collectively called learning factors and represent the relative influence of the particle's own optimum and the global optimum. By adjusting the learning factors properly, the global and local search of the particles can be balanced. When the algorithm falls into premature convergence, it is possible to change the exploration factor
to help the swarm escape from the local optimum.

In order to alleviate the premature convergence of the algorithm and improve its convergence speed, this paper uses an adaptive weight adjustment strategy to realize dynamic adjustment of the inertia coefficient. A chaotic sequence generated by a chaotic map is used to tune the learning factors C1 and C2, and a chaos self-adaptive particle swarm optimization algorithm (CSAPSO) is obtained. The inertia weight factor ω is adjusted by Formula (3), where ω_max and ω_min respectively represent the maximum and minimum values of the inertia weight; Pgbest(k) represents the global optimum at the kth iteration; Plbest_ave represents the average of the local optima of all particles; k_max is the maximum number of iterations; and k is the current iteration count.

The learning factors C1 and C2 are adjusted by the chaotic sequence generated by a chaotic map. This paper uses the typical Lorenz equations to generate chaotic sequences, as shown in Formula (4), where the control parameters a, b and c are 0.2, 0.4 and 5.7, respectively. The learning factors (C1, C2) are then defined in terms of this chaotic sequence by Formula (5). Because the change of the chaotic variables is random, ergodic and regular, the algorithm can maintain the diversity of the population, effectively overcome the problem of premature convergence, and improve global search performance.

The CSAPSO algorithm performs the following process:
1) Initialize the particle swarm. The positions and velocities of the particles are generated randomly. The current position of each particle is used as its individual extremum, and the best of the individual extrema is selected as the global optimum.
2) Calculate the fitness value of each particle in the swarm.
3) Compare the fitness value of each particle with the fitness value of the best position it has visited. If it is better, the current position becomes the particle's best position.
4) Compare the fitness value of each particle with the fitness value of the global best position; if it is better, the current position becomes the global best position.
5) Obtain the learning factors C1, C2 and the inertia weight ω, and update the velocities and positions of the particles accordingly.
6) If the termination condition of the algorithm is satisfied, the global best position is the optimal solution; save the result and stop. Otherwise return to Step 2).
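To make the CSAPSO loop above concrete, the following Python sketch combines an adaptive inertia weight with chaos-driven learning factors and the standard velocity/position update. The linearly decreasing weight rule and the logistic map used here are simplified stand-ins for the paper's Formulas (3)-(5), not the exact expressions, and the sphere function is only a placeholder objective.

```python
# Simplified sketch of the CSAPSO idea: adaptive inertia weight plus
# chaos-driven learning factors.  The adaptive rule and the chaotic map
# below are stand-ins, not the paper's exact Formulas (3)-(5).
import numpy as np

def sphere(x):                        # placeholder objective (minimization)
    return float(np.sum(x ** 2))

def csapso(f, dim=10, n_particles=50, k_max=100,
           w_max=0.9, w_min=0.3, bounds=(-10.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim)) * 0.1
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    z = 0.7                                          # chaotic state (logistic map)
    for k in range(k_max):
        # Adaptive inertia weight: decreases from w_max to w_min over iterations
        # (simplified stand-in for the paper's Formula (3)).
        w = w_max - (w_max - w_min) * k / k_max
        # Chaotic learning factors (stand-in for Formulas (4)-(5)).
        z = 4.0 * z * (1.0 - z)                      # logistic map, chaotic at r = 4
        c1, c2 = 1.5 + z, 1.5 + (1.0 - z)
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

best_x, best_val = csapso(sphere)
print(best_val)
```

The design point is the one argued above: the inertia term controls step length (global versus local search), while the chaotic sequence keeps the learning factors varying in an ergodic way so that the swarm retains diversity when it starts to stagnate.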
Experiment Functions and Evaluation

In order to test the performance of the CSAPSO algorithm, this paper selects the multi-objective optimization test functions proposed by Schaffer [20] and Deb [21] as experimental cases. Multi-objective optimization problems are among the most typical optimization problems: because of the constant contradictions and constraints among the objectives, it is difficult to optimize all of them at the same time, and improving one objective must come at the expense of the others. The solutions to such problems are therefore usually not unique but form a set of optimal solutions, also called non-inferior solutions; a collection of non-inferior solutions is often referred to as the Pareto optimal set. Because intelligent algorithms can search multiple solutions of the solution space in parallel, multi-objective optimization is well suited to verifying the performance of intelligent algorithms. The multi-objective optimization test functions used in this paper are shown in Table 1.

In order to evaluate the merits of the non-inferior solutions, this paper adopts a convergence index and a distribution index to evaluate the performance of the algorithm; the indexes of convergence and distribution uniformity are respectively defined as follows [2] [3]:

1) Convergence index (GD). GD describes the distance between the non-dominated solutions found by the algorithm and the true Pareto optimal front:

GD = sqrt( Σ_{i=1..N} d_i² ) / N

where N represents the number of non-dominated solutions found by the algorithm and d_i represents the shortest Euclidean distance between the non-inferior solution i and the solutions in the true Pareto optimal front.

2) Distribution index (SP). SP evaluates the uniformity of the distribution of the non-dominated solutions:

SP = sqrt( (1/(N−1)) Σ_{i=1..N} (d_mean − d_i)² )

where N is the number of non-dominated solutions, d_i represents the shortest distance between the ith non-inferior solution in the objective space and the solutions in the true Pareto optimal front, and d_mean is the mean of the d_i.

Experimental Results

The CSAPSO algorithm was applied to SCH1, SCH2, ZDT2 and ZDT3. The algorithm parameters were set as follows: the swarm size is 50; the maximum number of iterations is 100; the maximum and minimum values of the inertia weight are 0.9 and 0.3, respectively; the inertia weight ω and the learning factors C1 and C2 are obtained according to Formulas (3) and (5), respectively. The Pareto non-inferior solutions of each function are shown in Figures 1-4. In the objective function space, the non-inferior optimal target domain is the boundary of the attainable objective region, i.e., the efficient frontier. The experimental results show that for all four test functions the algorithm accurately recovers this frontier and the complete set of Pareto non-inferior solutions can be obtained. In particular, for the discontinuous problem ZDT3, the algorithm also gives accurate non-inferior solutions. In general, the algorithm obtains a larger number of Pareto solutions with a more uniform distribution, verifying the accuracy and reliability of the CSAPSO algorithm.
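The two indexes defined above can be computed along the following lines. This sketch follows the verbal definitions (d_i is the shortest Euclidean distance from the ith obtained solution to the true Pareto front) and the standard GD and SP forms, which may differ in detail from the paper's exact expressions; the sampled SCH1-like reference front and the perturbed "obtained" set are only toy inputs.

```python
# Sketch of the GD and SP indexes as described above; standard forms are used.
import numpy as np

def _nearest_dists(front, reference):
    # front: (N, n_objectives) obtained non-dominated set; reference: (M, n_objectives)
    diffs = front[:, None, :] - reference[None, :, :]
    return np.min(np.linalg.norm(diffs, axis=2), axis=1)

def generational_distance(front, reference):
    d = _nearest_dists(front, reference)
    return np.sqrt(np.sum(d ** 2)) / len(front)

def spacing(front, reference):
    d = _nearest_dists(front, reference)
    return np.sqrt(np.sum((d.mean() - d) ** 2) / (len(front) - 1))

# Toy usage with a densely sampled SCH1-like reference front
ref = np.array([[t ** 2, (t - 2) ** 2] for t in np.linspace(0, 2, 500)])
obtained = ref[::25] + 0.01          # pretend result, slightly off the true front
print(generational_distance(obtained, ref), spacing(obtained, ref))
```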
The CSAPSO algorithm was run 30 times on each test function, and the mean values of the convergence index GD, the distribution index SP, and the computation time CT were calculated for each of the four test functions; the results are shown in Table 2. The evaluation indices GD, SP and CT confirm the feasibility, accuracy and efficiency of the CSAPSO algorithm for solving multi-objective optimization problems: GD indicates that the non-inferior solutions lie very close to the true Pareto-optimal front, SP shows that they are well distributed, and CT shows that the running time is within acceptable limits. To assess the algorithm's advantages in multi-objective optimization, the CSAPSO algorithm was compared with the classic non-dominated sorting genetic algorithm (NSGA-II) and with multi-objective particle swarm optimization (MOPSO); the statistical comparison results are shown in Table 3. According to the GD values in Table 3, the convergence of the CSAPSO algorithm is better than that of the other two algorithms: the distance between its non-inferior solutions and the true Pareto-optimal front is smaller, i.e., its solutions are closer to the true ones. The SP values in Table 3 show that the non-inferior solutions obtained by the CSAPSO algorithm are more uniformly distributed than those of the other two algorithms. In terms of CT, the execution time of the CSAPSO algorithm lies between the other two: lower than NSGA-II but higher than MOPSO. The reason is that the standard PSO algorithm searches with a fixed step along a single flight direction, whereas CSAPSO dynamically adjusts its parameters during the particles' flight, and this dynamic adjustment consumes extra time. Although the CSAPSO algorithm spends more computation time than MOPSO, the convergence and distribution of its non-inferior solutions are better than those of the other two algorithms, yielding more, and more evenly distributed, feasible solutions. In conclusion, the numerical experiments on four multi-objective optimization problems show that, compared with the classical NSGA-II and MOPSO algorithms, the CSAPSO algorithm has better overall performance: the dynamic adaptive mechanism improves the convergence speed, and the chaotic learning mechanism alleviates premature convergence. Conclusions This paper presents a chaotic self-adaptive particle swarm optimization algorithm (CSAPSO). The algorithm uses chaos theory and a dynamic adaptive adjustment strategy to tune the parameters of the PSO algorithm, alleviating premature convergence and improving the convergence speed. Experiments on four standard test functions show that the proposed algorithm can solve multi-objective problems, and that the obtained non-inferior solutions approximate the Pareto-optimal set well and are evenly distributed. Compared with other algorithms, the CSAPSO algorithm performs better and can provide practical reference value for many engineering optimization problems. In future work, the convergence strategy and a mathematical convergence proof for the algorithm can be studied in depth. During the (k + 1)th iteration, particle i updates its velocity and position through Formulas (1) and (2):
v_{i,d}^{k+1} = ω·v_{i,d}^{k} + C1·r1·(p_{i,d}^{k} − x_{i,d}^{k}) + C2·r2·(p_{g,d}^{k} − x_{i,d}^{k}) (1)
x_{i,d}^{k+1} = x_{i,d}^{k} + v_{i,d}^{k+1} (2)
where v_{i,d}^{k} and x_{i,d}^{k} are respectively the velocity and position of particle i in dimension d at the kth iteration, r1 and r2 are random numbers uniformly distributed in [0, 1], p_{i,d}^{k} is the position of the individual extremum of particle i in dimension d, and p_{g,d}^{k} is the position of the global extremum of the whole population in dimension d. Table 1. Test functions used in this paper. Table 3. Comparison results of three algorithms.
3,750.6
2017-09-30T00:00:00.000
[ "Computer Science" ]
An Erd\"os-R\'ev\'esz type law of the iterated logarithm for reflected fractional Brownian motion Let $B_H=\{B_H(t):t\in\mathbb R\}$ be a fractional Brownian motion with Hurst parameter $H\in(0,1)$. For the stationary storage process $Q_{B_H}(t)=\sup_{-\infty<s\le t}(B_H(t)-B_H(s)-(t-s))$, $t\ge0$, we provide a tractable criterion for assessing whether, for any positive, non-decreasing function $f$, $\mathbb P(Q_{B_H}(t)>f(t)\, \text{ i.o.})$ equals 0 or 1. Using this criterion we find that, for a family of functions $f_p(t)$, such that $z_p(t)=\mathbb P(\sup_{s\in[0,f_p(t)]}Q_{B_H}(s)>f_p(t))/f_p(t)=\mathscr C(t\log^{1-p} t)^{-1}$, for some $\mathscr C>0$, $\mathbb P(Q_{B_H}(t)>f_p(t)\, \text{ i.o.})= 1_{\{p\ge 0\}}$. Consequently, with $\xi_p (t) = \sup\{s:0\le s\le t, Q_{B_H}(s)\ge f_p(s)\}$, for $p\ge 0$, $\lim_{t\to\infty}\xi_p(t)=\infty$ and $\limsup_{t\to\infty}(\xi_p(t)-t)=0$ a.s. Complementarily, we prove an Erd\"os--R\'ev\'esz type law of the iterated logarithm lower bound on $\xi_p(t)$, i.e., $\liminf_{t\to\infty}(\xi_p(t)-t)/h_p(t) = -1$ a.s., $p>1$; $\liminf_{t\to\infty}\log(\xi_p(t)/t)/(h_p(t)/t) = -1$ a.s., $p\in(0,1]$, where $h_p(t)=(1/z_p(t))p\log\log t$. Introduction and Main Results The analysis of properties of reflected stochastic processes, developed in the context of classical Skorokhod problems and their applications to queueing theory, risk theory and financial mathematics, is an actively investigated field of applied probability. In this paper we analyze 0-1 properties of a class of such processes that, due to its importance in queueing theory (and dual risk theory), has gained substantial interest; see, e.g., [1,2,13,14] or recent works on γ-reflected Gaussian processes [7,12]. Consider a reflected (at 0) fractional Brownian motion with drift, Q_{B_H} = {Q_{B_H}(t) : t ≥ 0}, given by the following formula: $Q_{B_H}(t)=\sup_{-\infty<s\le t}(B_H(t)-B_H(s)-c(t-s))$, $t\ge 0$. (2) With no loss of generality, in the remainder of this paper we assume that the drift parameter c ≡ 1. An important stimulus to analyze the distributional properties of Q_{B_H} and its functionals stems from Gaussian fluid queueing theory, where the stationary buffer content process in a queue which is fed by B_H and emptied with constant rate c = 1 is described by (2); see e.g. [13]. In particular, in the seminal paper by Hüsler and Piterbarg [8] the exact asymptotics of the one-dimensional marginal distributions of Q_{B_H} were derived; see also [3,4,6] for results on more general Gaussian input processes. The purpose of this paper is to investigate the asymptotic 0-1 behavior of the process Q_{B_H}. Our first contribution is an analog of the classical finding of Watanabe [18], where an asymptotic 0-1 type of behavior for centered stationary Gaussian processes was analyzed: Theorem 1 below settles whether P(Q_{B_H}(t) > f(t) i.o.) equals 0 or 1 according as a certain integral I_f is finite or infinite. The exact asymptotics, as u grows large, of the probability appearing in I_f were found by Piterbarg [14, Theorem 7]. Namely, for any T > 0, the asymptotics take the form stated in (3), where Φ is the distribution function of the standard normal law and the constants a, b, A, H_{B_H} are given explicitly in Section 2. Since relation (3) also holds with T replaced by a suitable function of u, Theorem 1 provides a tractable criterion for settling the dichotomy of P(Q_{B_H}(t) > f_p(t) i.o.), p ∈ R, H ∈ (0, 1). One can check the resulting asymptotics as u → ∞ and hence obtain, for any p ∈ R, the corresponding almost sure convergence. This result extends findings of Zeevi and Glynn [19, Theorem 1], where it was proven that the above convergence holds weakly as well as in L^p for all p ∈ [1, ∞). Now consider the process ξ_p = {ξ_p(t) : t ≥ 0} defined as ξ_p(t) = sup{s : 0 ≤ s ≤ t, Q_{B_H}(s) ≥ f_p(s)}. Let, cf.
(5), $h_p(t)=(1/z_p(t))\,p\log\log t$. The second contribution of this paper is an Erdös-Révész type law of the iterated logarithm for the process ξ_p. We refer to Shao [16] for more background and references on Erdös-Révész type laws of the iterated logarithm and a related result for centered stationary Gaussian processes; see also Dębicki and Kosiński [5] for extensions to order statistics. If p ∈ (0, 1], then an analogous statement, cf. (6), follows. Theorem 2 shows that for t big enough, there exists an s in [t − h_p(t), t] (as well as in [t, t + h_p(t)], by (6)) such that Q_{B_H}(s) ≥ f_p(s), and that the length h_p(t) of the interval is the smallest possible. This sheds new light on results which are intrinsically connected with Gumbel limit theorems; see, e.g., [11], where the function h_p(t) plays a crucial role. We shall pursue this elsewhere. The paper is organized as follows. In Section 2 we introduce some useful properties of storage processes fed by fractional Brownian motion. In Section 3 we provide a collection of basic results on how to interpret extremes of the storage process Q_{B_H} as extremes of a Gaussian field related to the fractional Brownian motion B_H. Furthermore, in Section 4 we prove lemmas which constitute the building blocks of the proofs of the main results. Properties of the storage process In this section we introduce some notation and state some properties of the supremum of the process Q_{B_H} as derived in [10,14]. We begin with a relation that expresses the supremum of Q_{B_H} in terms of a Gaussian field Z_u. Note that the self-similarity property of B_H implies that the field Z_u has the same distribution for any u. Thus, we do not use u as an additional parameter in the following notation whenever it is not needed; let Z(s, τ) := Z_1(s, τ). Furthermore, the field Z(s, τ) is stationary in s, but not in τ. The variance σ²_Z(τ) of the field Z(s, τ) equals ν^{−2}(τ), and σ_Z(τ) has a single maximum point at τ = τ_0; a Taylor expansion around this point describes its local behavior. Let us define the correlation function of the process Z_u as follows. By series expansion we find, for any fixed τ_1 < τ_0 < τ_2 and τ, τ′ with 0 < τ_1 < τ, τ′ < τ_2 < ∞, the corresponding bound, provided that |u/(us − u′s′)| and |u′/(us − u′s′)| are sufficiently small. For 2H = 1, we have r_{u,u′}(s, τ, s′, τ′) = 0 since the increments of Brownian motion on disjoint intervals are independent. Therefore, defining r*(t) := sup |r_{u,u′}(s, τ, s′, τ′)| over the relevant range, we have r*(t) ≤ K t^{−λ} (9) for λ = 2 − 2H > 0, t sufficiently large and some positive constant K depending only on H, τ_1 and τ_2. Similarly, from (8) it follows that for any fixed M there exists δ ∈ (0, 1) such that the corresponding estimate holds for sufficiently small m. 2.1. Asymptotics. Due to the following lemma, while analyzing tail asymptotics of the supremum of Z, we can restrict the considered domain of (s, τ) to a strip with |τ − τ_0| ≤ log v/v. For fixed T, θ > 0 and some v > 0, let us define a discretization of the set [0, T] × J(v) with mesh determined by θ. Along the same lines as in [10, Lemma 6] we get the following lemma. Finally, it is possible to approximate the tail asymptotics of the supremum of Z on the strip [0, T] × J(v) by the maximum taken over discrete time points. The proof of the following lemma follows line by line the proof of [14, Lemma 4] and is thus omitted. A similar result can be found in, e.g., [10, Lemma 7]. It follows easily that H^θ_{B_H} → H_{B_H} as θ → 0, so that the above asymptotics are the same as in Lemma 1 when the discretization parameter θ decreases to zero so that the number of discretization points grows to infinity. Auxiliary Lemmas We begin with some auxiliary lemmas that are later needed in the proofs.
The first lemma is a slightly modified version of [11, Theorem 4]. Lemma 4 (Berman's inequality). Suppose that ξ_1, ..., ξ_n are normal random variables with correlation matrix Λ^1 = (Λ^1_{i,j}) and η_1, ..., η_n are normal random variables with correlation matrix Λ^0 = (Λ^0_{i,j}); then the difference between the corresponding exceedance probabilities can be bounded in terms of the entries of Λ^1 − Λ^0. The following lemma is a general form of the Borel-Cantelli lemma; cf. [17]. Lemma 6. For any ε ∈ (0, 1), there exist positive constants K and ρ, depending only on ε, H, p and λ, such that the stated estimate holds. Proof. Let ε ∈ (0, 1) be some positive constant. For the remainder of the proof let K and ρ be two positive constants depending only on ε, H, p and λ that may differ from line to line. For any k ≥ 0 put s_0 = S and define the intervals I_k accordingly. From this construction, it is easy to see that the intervals I_k are disjoint. Furthermore, δ(I_k, I_{k+1}) = εx_k and 1 − ε ≤ y_k/x_k ≤ 1 for any k ≥ 0 and sufficiently large S. Note that, for any k ≥ 0, |I_k| ∼ f_p(S) as S grows large; therefore, if T(S, ε) is the smallest number of intervals {I_k} needed to cover [S, T], then T(S, ε) ≤ [(T − S)/(f_p(S)(1 + ε))]. Moreover, since f_p(T)/f_p(S) is bounded by a constant C > 0 not depending on S and ε, it follows that x_k/x_t ≤ C for any 0 ≤ t < k ≤ T(S, ε). Now let us introduce a discretization of the set Ĩ_k × J(v_k) as in Section 2.2. That is, for some θ > 0, define grid points as before. Since f_p is an increasing function, it easily follows that the probability of interest can be bounded by the sum of the terms P_1 and P_2 estimated below, where the last inequality follows from Berman's inequality. Estimate of P_1. Note that we can use the fact that Z_{x_k} has the same distribution as Z_1 ≡ Z for any x_k. Since the process Z is stationary with respect to the first variable, from Lemma 3, for any ε ∈ (0, 1), sufficiently large S and small θ, the required bound follows by (7) combined with (3). Estimate of P_2. Here the last inequality holds provided that k − t ≥ s_0 with s_0 sufficiently large. Therefore, cf. (9), r*_{k,t} := sup_{0≤l≤L_k, 0≤p≤L_t, |n|≤N_k, |m|≤N_t} |r_{x_k,x_t}(s_{k,l}, τ_{k,n}, s_{t,p}, τ_{t,m})| ≤ r*((k − t)ε) ≤ K(k − t)^{−λ} ≤ min(1, λ)/4. Let S > 0 be any fixed number, a_0 = S, y_0 = f_p(a_0) and b_0 = a_0 + y_0. For i > 0, define the intervals M_i analogously. From this construction it is easy to see that the intervals M_i are disjoint, ∪_{j=0}^{i} M_j = (S, b_i] and |M_i| = 1. Now let us introduce a discretization of the set M̃_i × J(v_i) as in Section 2.2. That is, for some θ > 0, define grid points as before. With the above notation, we have the following lemma. Lemma 7. For any ε ∈ (0, 1) there exist positive constants K and ρ, depending only on ε, H, p and λ, such that the stated estimate holds for any T − f_p(S) ≥ S ≥ K with f_p(T)/f_p(S) ≤ C, where C is some universal positive constant. Similarly as in the proof of Lemma 6 we find that Berman's inequality applies with r̄_{y_i,y_j}(s_{i,l}, τ_{i,n}, s_{j,p}, τ_{j,m}) = −r_{y_i,y_j}(s_{i,l}, τ_{i,n}, s_{j,p}, τ_{j,m}). Estimate of P′_1. By Lemma 1 the correction term involving θ and v_i does not change the order of the asymptotics of the tail of Z. Furthermore, the tail asymptotics of the supremum on the strip (s, τ) ∈ M̃_i × J(v_i) are of the same order if τ ≥ 0. Hence, for every ε > 0, the required estimate follows along the same lines of reasoning as in the estimation of P_1 in Lemma 6, provided that S is sufficiently large. Completely similarly to the estimation of P_2 in the proof of Lemma 6, we can get that there exist positive constants K and ρ such that the corresponding bound holds for sufficiently large S. The next lemma is a straightforward modification of [2]; it is true without the additional condition. Proof of the main results. Proof of Theorem 1.
Note that the case I_f < ∞ is straightforward and does not need any additional knowledge on the process Q_{B_H} apart from the stationarity property. Indeed, consider the sequence of intervals M_i as in Lemma 7. Then, for any ε > 0 and sufficiently large T, the corresponding sum is finite, and the Borel-Cantelli lemma completes this part of the proof since f is an increasing function. Now let f be an increasing function such that I_f ≡ ∞. Using the same notation as in Lemma 6 with f instead of f_p, we find that, for any S, ε, θ > 0 and for sufficiently large S and θ (cf. the estimation of P_1), the required bounds hold. Note that the first limit equals zero as a consequence of (19). The second limit equals zero because of the asymptotic independence of the events E_k. Indeed, there exist positive constants K and ρ, depending only on H, ε, λ, such that the relevant bound holds for any n > m, by the same calculations as in the estimate of P_2 in Lemma 6, after realizing that, by Lemma 8, we may restrict ourselves to the case when (18) holds. Therefore P(E^c_i i.o.) = 1, which completes the proof. Proof of Theorem 2. In order to make the proof more transparent we divide it into several steps. Step 1. Let p > 1. Then the claim of this step holds for every ε ∈ (0, 1/4). Since h_p(t) = O(t log^{1−p} t log_2 t), for p > 1 we have S_k ∼ T_k as k → ∞, and the required estimate follows from Lemma 6. Now take T_k = exp(k^{1/p}). Then, by the Borel-Cantelli lemma, the corresponding events occur only finitely often almost surely. Since ξ_p(t) is a non-decreasing random function of t, the bound extends to every T_k ≤ t ≤ T_{k+1}. For p > 1, elementary calculus then completes the proof of this step. Step 2. Let p > 1. Then the claim of this step holds for every ε ∈ (0, 1). Proof. As in the proof of the lower bound (Step 1), we use the same notation. It suffices to show P(B_n i.o.) = 1. Define J_k to be the largest index for which the corresponding condition on b holds. Since f_p is an increasing function, and analogously to (14), define a discretization of the set M̃^k_i × J(v^k_i) as before. By Lemma 2, for sufficiently large m and some K_1, K_2 > 0, the first sum is bounded from above, and by (11), for sufficiently large m, the term in (23) is bounded from above as well. In order to complete the proof of (22) we only need to show the remaining estimate. Similarly to (20), and from Lemma 7, the bound holds for every k sufficiently large. Applying Berman's inequality, we get the corresponding estimate for t < k. For any 0 ≤ i ≤ J_k, 0 ≤ j ≤ J_t, 0 ≤ l ≤ L^k_i, 0 ≤ p ≤ L^t_j, and t < k, y^k_i s^k_{i,l} − y^t_j s^t_{j,p} = (a^k_i + y^k_i l q^k_i) − (a^t_j + y^t_j p q^t_j) ≥ S_k − T_t ≥ S_k − T_{k−1}, where the last inequality holds for k large enough, since (S_{k+1} − T_k)/(T_{k+1} − T_k) ∼ 1 as k → ∞.
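To complement the theoretical development, the following sketch gives a crude finite-horizon approximation of the storage process Q_{B_H}(t) = sup_{s≤t}(B_H(t) − B_H(s) − (t − s)) studied above (with c = 1). It simulates fractional Brownian motion on a grid via a Cholesky factorization of its covariance and takes the supremum over grid points only; because s is restricted to [0, t], the output approximates the stationary process only after a long burn-in. The grid size and Hurst parameter below are arbitrary illustrative choices.

```python
import numpy as np

def simulate_storage(H=0.7, T=50.0, n=1000, seed=0):
    """Approximate Q_{B_H}(t_i) = max_{j<=i} (B_H(t_i) - B_H(t_j) - (t_i - t_j)) on a grid."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)                               # strictly positive grid points
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))   # fBm covariance function
    bh = np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)
    q = np.empty(n)
    for i in range(n):
        q[i] = np.max(bh[i] - bh[:i+1] - (t[i] - t[:i+1]))     # includes j = i, so q >= 0
    return t, q

t, q = simulate_storage()
```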
3,742.2
2016-12-29T00:00:00.000
[ "Mathematics" ]
Single-Step Extraction Coupled with Targeted HILIC-MS/MS Approach for Comprehensive Analysis of Human Plasma Lipidome and Polar Metabolome Expanding metabolome coverage to include complex lipids and polar metabolites is essential in the generation of well-founded hypotheses in biological assays. Traditionally, lipid extraction is performed by liquid-liquid extraction using either methyl-tert-butyl ether (MTBE) or chloroform, and polar metabolite extraction using methanol. Here, we evaluated the performance of single-step sample preparation methods for simultaneous extraction of the complex lipidome and polar metabolome from human plasma. The method performance was evaluated using high-coverage Hydrophilic Interaction Liquid Chromatography-ESI coupled to tandem mass spectrometry (HILIC-ESI-MS/MS) methodology targeting a panel of 1159 lipids and 374 polar metabolites. The criteria used for method evaluation comprised protein precipitation efficiency, and relative MS signal abundance and repeatability of detectable lipid and polar metabolites in human plasma. Among the tested methods, the isopropanol (IPA) and 1-butanol:methanol (BUME) mixtures were selected as the best compromises for the simultaneous extraction of complex lipids and polar metabolites, allowing for the detection of 584 lipid species and 116 polar metabolites. The extraction with IPA showed the greatest reproducibility with the highest number of lipid species detected with the coefficient of variation (CV) < 30%. Besides this difference, both IPA and BUME allowed for the high-throughput extraction and reproducible measurement of a large panel of complex lipids and polar metabolites, thus warranting their application in large-scale human population studies. Introduction Blood plasma is one of the most commonly used biofluids for metabolic phenotyping, specifically in human population studies. This is mainly due to its easy access with minimally invasive sampling and the ability of its metabolic profile to inform about the systemic physiological status. Blood has a vital physiological role in the transport of circulating metabolites; it supplies tissues with nutrients and oxygen, and it carries away the metabolic by-products and carbon dioxide. Human plasma contains a wide diversity of low molecular weight metabolites, including amino acids, other organic acids, fatty acids, sugars, and complex lipids [1]. Lipids represent more than Sample Preparation Methods and Evaluation Workflow The ability of single-step methods to simultaneously extract complex lipids and polar metabolites was evaluated using relative abundance and repeatability of metabolite signal, against the commonly applied protocols, a biphasic extraction with MTBE for lipids, and a single-step MeOH for polar metabolites (Figure 1). The extracts were analyzed with high-coverage targeted profiling using HILIC-MS/MS in positive and in negative ionization mode (see Materials and Methods). These methods targeted a total of 1159 lipids from five different lipid classes (sphingolipids, cholesterol esters, glycerolipids, glycerophospholipids, and free fatty acids, Tables S1 and S2), and 374 polar metabolites from 12 different classes (amino acids and their derivatives, carboxylic acids, acylcarnitines, nucleotides, etc., Tables S3 and S4). The classification of polar metabolites was based on the Human Metabolome Database (HMDB) while the complex lipids were classified according to LipidMaps [4,23]. 
The protein precipitation efficiency, the relative abundance and the coefficient of variation of each detected metabolite were used to evaluate the extraction performance for each lipid and polar metabolite class ( Figure 1). Firstly, we examined the performance of single-step methods, using MeOH, ethanol (EtOH), BUME, and IPA, to extract lipids. Following the lipid extraction, the protein precipitation efficiency of these four solvents was evaluated by measuring the total protein content in the supernatant and in the pellet (see Materials and Methods section for more details). The equivalent of 95% protein removal was achieved with all four methods (see Table S5). Secondly, the relative signal abundance of the entire panel of detected lipid species was evaluated in all four plasma extracts ( Figure S1). Methanol demonstrated the poorest performance for complex lipid extraction, with the exception of lysophosphatidylcholines (LPCs) and lysophosphatidylinositols (LPIs). A significantly lower signal for several lipid classes (i.e., sphingomyelins, fatty acids, and lysophospholipids) was also obtained from ethanol extracts, when compared to IPA and BUME. Therefore, MeOH and EtOH were excluded from further evaluation as non-suitable or less performant for complex lipid extraction, respectively. The best candidates, BUME and IPA, were further evaluated against the biphasic Matyash method, for their capacity to reproducibly extract lipids and polar metabolites. Relative Signal Abundance BUME and IPA extractions were selected as the best single-step methods when considering the relative abundance of extracted lipids.
Therefore, they were further evaluated for lipid and polar metabolite extraction against commonly used biphasic extraction MTBE:MeOH:H 2 O in lipidomics studies, and MeOH in polar metabolome studies. Pooled plasma samples were extracted, and the supernatant of each solvent was evaluated per metabolite and lipid class covered by targeted HILIC-MS/MS methods. To acquire the lipid and polar metabolite profiles, the metabolite separation was based on the interaction with the HILIC stationary phase: amide-based for complex lipids in positive and in negative ionization mode, and amide-based and with zwitterionic exchange for polar metabolites in positive and in negative ionization mode, respectively [11]. The extraction capacity of each solvent was evaluated by the relative abundance of each lipid and metabolite class to the average signal abundance obtained by the reference solvents (MTBE mixture for lipids and MeOH for polar metabolites, Figure 2, Figure 3 and Figures S2-S4). The relative abundance of polar metabolite classes in IPA and BUME extracts as compared to MeOH plasma extract. Values indicated on spider plots represent the relative signal abundance (or fold change) for each single-step method, using IPA or BUME, to the reference MeOH extract. For statistical significance, see Figure S7 in the Supplementary Information (data are provided in Table S9). For the comparison with aqueous phase from Matyash method see Figure S8. The class "Others", i.e., other polar metabolites comprise glycocholate, hydroquinone, hydroxyphenyllactate, pyridoxine, salsolinol, trigonelline, and tryptamine (Table S6). Carboxylic acids comprised mono-, di-, and tricarboxylic acids.
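A minimal sketch of the relative-abundance (fold change) comparison described above is given below. It assumes a long-format table of peak areas with columns for metabolite, class, extraction solvent and replicate; the column names are illustrative placeholders, not taken from the original data files.

```python
import pandas as pd

def class_fold_change(df, reference_solvent):
    """Mean signal per class and solvent, expressed relative to the reference solvent."""
    class_means = (df.groupby(["lipid_class", "solvent"])["peak_area"]
                     .mean()
                     .unstack("solvent"))
    return class_means.div(class_means[reference_solvent], axis=0)

# illustrative usage: fold change of each lipid class vs the MTBE reference extract
# df columns: metabolite, lipid_class, solvent (e.g. "IPA", "BUME", "MTBE"), replicate, peak_area
# fold_changes = class_fold_change(df, reference_solvent="MTBE")
```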
The signal abundance recovered (with each extraction method) was evaluated by relative comparison to the average signal of the reference extract, in this case the MTBE for lipid extraction. Lipid species representative of all the different lipid classes were successfully extracted by all three solvents tested, with the exception of LPIs, for which the signal in the MTBE extract was close to the limit of detection, as shown in Figure 2. Among sphingolipids, the highest signal was obtained with the IPA and BUME extracts for all sub-classes, including sphingomyelins, ceramides, dihydroceramides, lactosylceramides and hexosylceramides. In the case of glycerolipids (MAGs, DAGs, and TAGs) the highest signals were obtained following single-step extraction with BUME and IPA, although the relative signal was not significantly different from the reference signal of the MTBE extract. Among glycerophospholipids, a significantly higher signal was obtained with IPA for PSs. For the other subclasses the difference between the three solvents was not significant. For lysophospholipids, the relative signal abundance was in general significantly higher for IPA and BUME extracts when compared to the MTBE phase ( Figure 2 and Figure S2, Tables S7 and S8). To confirm these observations, six internal standards (IS), analogues for TAGs, PCs, LPCs, PEs, LPEs, and SMs, were spiked into the samples with the addition of the organic solvent (see Figure S3). Based on relative signal abundance, these internal standards demonstrated a similar tendency to the endogenous metabolites, depending on the lipid class. Globally, a higher signal was obtained for all the different IS following IPA and BUME extraction compared to the organic MTBE phase from the Matyash extraction, with the exception of TAG-d7, for which the signal was also the most variable ( Figure S3). In addition, in order to evaluate the extent of interference between lipids and polar metabolites when using one-step extraction coupled to HILIC-MS/MS analysis, the most abundant polar metabolites in human plasma (including different amino acids, organic acids, and acylcarnitines) were added to the MRM method for complex lipid analysis. We observed that the selected polar metabolites, in the applied HILIC conditions, did not co-elute with complex lipids, therefore minimizing the potential effect of ion suppression on lipids (see Figures S4 and S5). Polar metabolites were effectively retained, but due to their high hydrophilicity, they eluted (with the exception of hydroquinone and hypoxanthine) after the complex lipids in the chromatographic gradient. Interestingly, similar results were found for the long-chain acylcarnitines, which did not co-elute with lipids such as sphingomyelins when profiled in the positive ionization mode. Finally, different organic solvents can lead to different contamination levels from plastic agents and, therefore, cause differences in matrix effects. To examine the potential influence of contaminants on lipid abundances, the background noise from blank extractions was examined using HRMS analysis (see Figure S6).
Overall, no significant differences in the contaminant background were observed among the different solvent extracts, with the exception of MTBE extract profile in negative ionization mode, which showed several intense peaks eluting between 1 and 4 min. A few of these peaks likely represent fatty acids (stearic acid with m/z 283.26 and palmitic acid with m/z 255.23) that, in addition to being endogenous lipids, also represent the potential contaminants from plastics (confirmed with matching against MaConDa (https://www.maconda.bham.ac.uk/)). Polar Metabolite Profile The same plasma extracts were analyzed for the detection and abundance of polar metabolites (as described in Materials and Methods). The HILIC-MS/MS method initially targeted 374 polar metabolites that were classified into 12 categories: proteinogenic amino acids, amino acid derivatives, nucleosides, nucleic acids, alkylamines, short and long chain acylcarnitines (SCACs and LCACs, respectively), carbohydrate conjugates, mono-, di-, and tri-carboxylic acids (Mono-COOHs, Di-COOHs, Tri-COOHs, respectively), and others (comprising glycocholate, hydroquinone, hydroxyphenyllactate, pyridoxine, salsolinol, trigonelline, and tryptamine, see Table S6 for more information). Similar to the complex lipid evaluation, we have evaluated the capacity of the BUME and IPA extraction methods to recover different polar metabolite classes with respect to signal abundance in the MeOH, as the reference method. Significantly higher signals for the proteinogenic amino acids, carboxylic acids and carbohydrates were observed following the extraction with MeOH, while the nucleosides, alkylamines, short and long-chain acylcarnitines, and other polar metabolites showed significantly higher signals following the extraction with IPA, compared to BUME and MeOH ( Figure 3 and Figure S7, Table S9). For amino acid derivatives and nucleic acids the signal intensity did not differ significantly between the MeOH, BUME, and IPA extracts. Finally, the aqueous phase from biphasic Matyash extraction was also analyzed in order to evaluate the polar metabolite profile when compared to the profiles that were derived from other single-step methods (rich in lipids).
For the majority of polar metabolites, with the exception of nucleic acids, amines, and alkylamines, the signal from the aqueous phase (following the Matyash protocol) was significantly lower when compared to single-step methods ( Figure S8). Reproducibility and Size of Measurable Lipidome and Metabolome The analytical variability of the selected single-step extraction methods using IPA and BUME was evaluated through independent preparation of pooled plasma samples by four different operators (n = 5 samples per operator and per solvent). The intra-batch coefficient of variation (CV) across 20 replicates, analyzed in a randomized fashion, was determined for both complex lipids and polar metabolites. Figure 4 represents the lipid and polar metabolite count, depending on the coefficient of variation across all replicates and operators (Tables S10-S12). For polar metabolites, out of the 116 metabolites detected in pooled plasma samples, 109 from the IPA extract had a CV < 30% vs. 115 metabolites from the BUME extract ( Figure 4B). Following the filtering based on this analytical variability (CV < 30%), the size and diversity of the measurable lipidome and polar metabolome is shown in Figure 5 for the IPA extract, and in Figure S9 for the BUME extract. Discussion In this study, we have evaluated the performance of several single-step protocols for the simultaneous extraction of complex lipids and polar metabolites from the human plasma in a high-throughput manner. The extractions were performed with solvents that cover the following range of polarity indexes H 2 O > MeOH > EtOH > IPA > BuOH > MTBE, with the BUME mixture expected to be in the middle of this range [24,25]. Therefore, the extraction affinity of BUME is likely driven by the interaction of both butanol and methanol solvent, with the resulting polarity effect for the more hydrophobic (lipids) and hydrophilic (polar metabolites) compounds.
HILIC chromatography was chosen as the best compromise for performing complex lipid and polar metabolite analysis. It is important to note that different HILIC methods with different gradients and mobile phases (see Materials and Methods for details) were used for complex lipid and polar metabolite profiling, respectively. In addition to offering high chromatographic resolution for polar metabolite profiling, the HILIC approach is also advantageous for lipid analysis, since it allows for the separation of lipid classes based on their head groups. The separation by lipid class facilitates the development of quantitative methods, because each class can be covered by its corresponding internal standard. When compared to Reversed Phase (RPLC), the separation using HILIC also better corrects the matrix effects, since the endogenous lipids and their corresponding Internal Standards (IS) co-elute [26]. Importantly, HILIC separation also allows for the chromatography-assisted lipid quantification in large-scale population studies, due to the acceptable cost of a relatively low number of required internal standards. The aim of our study was to evaluate whether the use of a single-step extraction generates lipid and polar metabolite coverage equivalent to the commonly used biphasic protocols. To this end, we evaluated the abundance and repeatability of the MS signal of all detected endogenous lipids and polar metabolites in human plasma extracts instead of estimating the extraction efficiency of each solvent with a recovery assessment using pre- and post-extraction internal standard spikes. Because complex lipids bind to protein carriers (i.e., lipoproteins), a property that cannot be reproduced by an internal standard spike, our strategy to evaluate the extraction method performance using the signal of extracted endogenous metabolites was considered the best compromise. We argue that the best strategy to quantify the recovery of endogenous compounds is by using reference materials, due to the complexity of biological matrices in general [27,28]. This approach is commonly applied for the validation of the measurement accuracy of analytical methods and was out of the scope of this study. Complex Lipid Extraction The extraction of lipids depends on their structure and substituent groups (i.e., head groups that are representative of each lipid class), their polarity (determined by the head group and the length of the alkyl chain) and their spatial configuration (e.g., degree of unsaturation). These structural characteristics play an important role in the interaction with the extraction solvent. As previously reported for BUME and IPA extraction, in general, more than 90% of the total lipid content of a plasma sample is extracted, and it is not deemed necessary to perform a re-extraction of the pellet [18,19]. Among different lipid classes, the relative signal abundance for sphingolipids was higher in IPA and BUME extracts when compared to MTBE. This can be explained by the polar character of the head group in the sphingolipid structure ( Figure S10). The extraction of the sphingolipids with more polar character, such as lactosylceramides (LCer), was improved with the more polar BUME mixture, owing to the lactosyl group.
Conversely, in the case of hexosylceramides (HCER), significantly higher signal abundance was observed with IPA solvent as compared to MTBE and BUME (see Figure S11, showing the signal abundance of specific hexosylceramides and lactosylceramides with the same alkyl chain composition, across different solvents). Glycerolipids and mono-, di-, and tri-alkyl substituted glycerols followed a similar trend of solvent affinity, MTBE < IPA < BUME, although this was observed without a statistically significant difference among the tested extraction methods. The arrangement of the fatty acid chains located in any of the three positions (sn-1/sn-2/sn-3) of the glycerol backbone may play an important role in the extraction affinity of this lipid class [29]. The affinity for BUME was most pronounced for MAGs and DAGs, which have lower hydrophobicity due to only one or two fatty acyl chains and the exposed hydroxyl group. For cholesterol esters, which also have a highly lipophilic character similar to TAGs, no significant difference was observed between the three extraction protocols, which could be due to the low intensity and high variability of the signal caused by their low electrospray ionization efficiency. The highest and most reproducible response for these lipids was observed when using IPA as the extraction solvent. In the case of glycerophospholipids, the extraction appeared to be primarily driven by their hydrophobic character (due to their alkyl chains), since all subclasses were most efficiently recovered with MTBE and IPA. These results, showing the strong affinity of PIs for MTBE, and of PGs, PCs and PEs for all three solvents, are in agreement with the previous reports of Lee et al. and Matyash et al. [17,30]. Importantly, the performance of the single-step extraction with IPA was at the same level as the reference method with MTBE for all glycerophospholipid classes. Moreover, a significantly higher signal for PSs was obtained following the extraction with IPA (see Figure 2). Finally, lysoglycerophospholipids, which are composed of a single fatty acyl chain, have lower hydrophobicity and, thus, their solubility mainly relies on the head group. Consequently, the extraction of lysophospholipids was more efficient with BUME and IPA. This effect was exacerbated for LPIs, which were poorly recovered using the biphasic method with MTBE, thus limiting the LC-MS analysis of this class of lipids (i.e., signals close to the level of detection). Polar Metabolite Extraction Large population studies require high-throughput methods, with minimal and highly reproducible sample preparation. Therefore, in this study, we investigated the viability of measuring polar metabolites from the same plasma extract that was obtained by the single-step extraction used for the lipidome analysis. Polar metabolite analysis was performed using the optimized targeted HILIC-based methods (in acidic and basic conditions in positive and negative ionization mode, respectively) and methanol extraction was used as a reference for comparison [11,31]. Because BUME and IPA demonstrated the best performance for lipid extraction in a single step, we investigated their capacity to simultaneously extract polar metabolites implicated in central carbon metabolism (Table S9). Similar to the lipid analysis, all of the polar metabolite classes were efficiently extracted by both methods.
However, for the most polar classes, i.e., proteinogenic amino acids, carboxylic acids (mono-, di-, and tri-), and carbohydrate conjugates, the recovered signal was significantly lower when using IPA and BUME, as compared to methanol. Besides the hydrophilicity of these metabolites, the signal decrease following BUME and IPA extraction is also likely to be a consequence of ion suppression caused by co-eluting complex lipids (extracted with IPA and BUME), particularly in ESI positive mode, in the chromatographic conditions used for polar metabolite analysis. Despite its decrease, the signal remained well defined and reproducible, a prerequisite for quantitative measurement. For the majority of the other polar metabolites, and specifically nucleosides and alkylamines, a significantly higher signal was observed with IPA when compared to BUME and MeOH. We hypothesize that this may be due to the heterocyclic character of these compounds. For acylcarnitines, the signal abundance varied depending on the length of the acyl chain. Acylcarnitines are zwitterionic molecules composed of a quaternary ammonium linked to a fatty acyl chain, and their polarity varies depending on the length of the acyl chain. The hydrophobic character of long-chain carnitines is evident, since the signal intensity was significantly higher in the IPA extract. For the more polar short-chain acylcarnitines, the intensity of the recovered signal was also significantly higher in the IPA extract, but to a lesser extent. For both of these classes, methanol performance was significantly lower when compared to IPA. It is important to note that, when applying biphasic methods for lipid extraction, polar metabolites were previously reported to be successfully measured from the aqueous layer [32]. Interestingly, our results clearly showed a significantly lower signal abundance for the majority of polar metabolites following the biphasic Matyash protocol, as a result of the analysis of the aqueous phase (when compared to BUME, IPA, and MeOH extracts, Figure S8). This observation is likely due to the partition of polar metabolites between the aqueous and organic phase in the Matyash extraction. Extraction Method and Measurement Reproducibility In terms of lipidome coverage, the single-step lipid extraction using IPA or BUME efficiently recovered all lipid sub-classes, as already reported using untargeted approaches for the comparison of the Matyash, Folch, Alshery (1:1, BUME), and IPA protocols [19,[33][34][35]. Importantly, one-phase extraction using the MMC solvent mixture (MeOH/MTBE/CHCl 3 ) showed significantly better extraction efficiencies for moderately and highly non-polar lipid species in comparison with the biphasic Folch, Bligh and Dyer, and MTBE extraction systems [32]. While the size of the detectable lipidome was extensively explored in several previous studies, the relative abundance and repeatability of the lipid signal was rarely evaluated. When evaluating the size of the measurable lipidome and polar metabolome we observed a difference in lipid signal variability between IPA and BUME extracts following the reproducibility test (CV < 30% across independently extracted replicates). This difference was mainly due to the higher variability of specific lipid signals recovered from BUME extracts, comprising TAGs (20), some phospholipids, free fatty acids, and cholesterol esters. We hypothesize that the presence of MeOH, and thus the mixture of two solvents, methanol and butanol (vs. pure IPA), could contribute to this difference in CVs [36].
In general, the high CVs observed for TAGs were likely caused by ion suppression as a consequence of a high degree of co-elution in the void volume (using HILIC chromatography). The difference in terms of signal reproducibility was not observed for polar metabolites, likely thanks to the significantly higher degree of separation of hydrophilic metabolites and, thereby, the lower degree of co-elution throughout the chromatographic gradient. The robustness of IPA and BUME extractions for lipidome analysis was previously assessed by independent studies using untargeted lipidomics approaches. For example, Calderón et al. compared the extraction methods using CHCl 3 , MTBE, and IPA, after which IPA was reported as the most robust [37]. In another study, extraction protocols using MeOH, acetonitrile (ACN), IPA, IPA-ACN, CH 2 Cl 2 , CHCl 3 , MTBE, and hexane were compared, and IPA was revealed as the best compromise to extract lipids [19]. In an additional study, the Folch, Matyash, and Alshery methods were tested, and the Alshery method was reported to give high recoveries and the lowest CV values for lipids [33]. All of these previous results highlight the differences in terms of signal reproducibility depending on the lipid class, and show that the most robust methods are the least biased single-step methods. Besides robustness, the application of single-step plasma extraction in human population studies [38] is also advantageous in terms of high throughput. For instance, BUME extraction was applied to the study of differences in lipidome composition in the Singaporean population, in which three main communities, Chinese, Indian, and Malay, were characterized [39]. It was also recently used in large-scale plasma lipidomic profiling to reveal the associations between lipid levels and cardiometabolic risk factors [40]. When compared to BUME, which has been used in the lipidomics community for more than 12 years now, the potential of IPA extraction for lipidome analysis was only recently revealed, and it has therefore been less commonly applied in lipidomics studies. However, several recent human population phenotyping studies warrant its application [19,41,42]. Chemicals and Reagents Human pooled plasma samples were purchased from Sera Laboratories International Ltd., trading as BioIVT (West Sussex, UK). A mixture of pooled male and female (40-60 years old) plasma was prepared and aliquoted for the extraction experiments. Metabolite Extraction Protocols Human blood plasma (25 µL) was extracted with MeOH (125 µL), EtOH (125 µL), BUtanol:MEthanol (BUME, 125 µL), or isopropanol (IPA, 125 µL), pre-spiked with the above-indicated mixture of internal standards, in order to evaluate the performance of the different extraction solvents. For the biphasic MTBE extraction, 10 µL of plasma was extracted with MTBE:MeOH:H 2 O (750/225/188 µL) [43]. Because plasma is homeostatically regulated, no normalization to protein amount is required [44]. All of the samples were vortexed and kept at −20 °C for one hour to facilitate protein precipitation. These extracts were then centrifuged for 15 min at 20,000× g at 4 °C and the resulting supernatants, from MeOH, EtOH, BUME, and IPA, were collected and transferred to LC-MS vials for injection. The upper phase resulting from the biphasic MTBE:MeOH:H 2 O extraction was evaporated to dryness (in a LabConco vacuum concentrator) and re-suspended in 50 µL of IPA for the lipid extraction evaluation, in order to maintain the same sample-to-solvent ratio (1/5) as in the single-step extraction.
Prior to LC-MS analysis, the extracted pooled plasma samples were randomized per operator and extraction solvent. Protein quantification was performed on the supernatants and plasma pellets in order to evaluate the efficiency of each solvent to precipitate proteins. The precipitation efficiency is reported in Table S5. The linear gradient elution from 0.1% to 20% B was applied for 2 min, from 20% to 80% B for 3 min, followed by 3 min of re-equilibration to the initial chromatographic conditions. The flow rate was 600 µL/min, column temperature 45 °C, and sample injection volume 3 µL. Optimized ESI Ion Drive Turbo V source parameters were set as follows: Ion Spray (IS) voltage 5500 V in positive mode and −4500 V in negative mode, curtain gas 35 psi, nebulizer gas (GS1) 50 psi, auxiliary gas (GS2) 60 psi, and source temperature 550 °C. Nitrogen was used as the nebulizer and collision gas. Optimized compound-dependent parameters were used for data acquisition in scheduled multiple reaction monitoring (sMRM) mode. Transitions for the entire panel of targeted lipids were optimized by SCIEX using the Lipidyzer™ Platform [45]. The pooled plasma sample was first analyzed by MRM (non-scheduled) in order to obtain the retention time of each lipid class in the applied chromatographic system, and the retention times were later added to the scheduled MRM method (Tables S1 and S2). High-Coverage Targeted Metabolome Analysis For the polar metabolome analysis, the extracted plasma samples were analyzed by HILIC-MS/MS in both positive and negative ionization modes using a 6495 triple quadrupole system (QqQ) interfaced with a 1290 UHPLC system (Agilent Technologies, Santa Clara, CA, USA). In the positive mode, the chromatographic separation was carried out on an Acquity BEH Amide, 1.7 µm, 100 mm × 2.1 mm I.D. column (Waters, MA, USA). The mobile phase was composed of A = 20 mM ammonium formate and 0.1% FA in water and B = 0.1% formic acid in ACN. A linear gradient elution from 95% B (0-1.5 min) down to 45% B was applied (from 1.5 to 17 min) and these conditions were held for 2 min. Subsequently, the initial chromatographic conditions were maintained as a post-run for 5 min for column re-equilibration. The flow rate was 400 µL/min, column temperature 25 °C, and sample injection volume 2 µL. In negative mode, a SeQuant ZIC-pHILIC column (100 mm × 2.1 mm I.D., 5 µm particle size; Merck, Darmstadt, Germany) was used. The mobile phase was composed of A = 20 mM ammonium acetate and 20 mM NH 4 OH in water at pH 9.7 and B = 100% ACN. A linear gradient elution was applied from 90% B (0-1.5 min) to 50% B (8-11 min) and down to 45% B (12-15 min). Finally, the initial chromatographic conditions were re-established as a post-run for 9 min for column re-equilibration. The flow rate was 300 µL/min, column temperature 30 °C, and sample injection volume 2 µL. ESI source conditions were set as follows: dry gas temperature 290 °C, nebulizer 35 psi and flow 14 L/min, sheath gas temperature 350 °C and flow 12 L/min, nozzle voltage 0 V, and capillary voltage +2000 V and −2000 V in positive and negative mode, respectively. Dynamic Multiple Reaction Monitoring (DMRM) was used as the acquisition mode with a total cycle time of 600 ms. The MRM transitions were optimized from the direct analysis of pure chemical standards obtained from Sigma-Aldrich (The Mass Spectrometry Metabolite Library of Standards-MSMLS) (Tables S3 and S4). These transitions are publicly available in the Metlin-MRM spectral library [46].
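For reference, the positive-mode polar-metabolite HILIC gradient described above can be summarized in a small machine-readable form. The dictionary below is an illustrative convenience rather than a vendor method file; the values are copied from the text.

```python
# HILIC (BEH Amide) method for polar metabolites, positive mode, as described above
hilic_pos_method = {
    "column": "Acquity BEH Amide, 1.7 um, 100 x 2.1 mm",
    "mobile_phase_A": "20 mM ammonium formate + 0.1% formic acid in water",
    "mobile_phase_B": "0.1% formic acid in acetonitrile",
    "gradient_pct_B": [(0.0, 95), (1.5, 95), (17.0, 45), (19.0, 45)],  # (time in min, %B)
    "post_run_min": 5,          # re-equilibration at initial conditions
    "flow_uL_per_min": 400,
    "column_temp_C": 25,
    "injection_uL": 2,
}
```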
Data (Pre)Processing Raw LC-MS/MS lipidome data were processed (i.e., peak extraction and alignment) using the MultiQuant software (version 3.0.3, Sciex, Framingham, MA, USA), and raw metabolome data were processed using the Agilent Quantitative Analysis software (version B.07.00, MassHunter, Agilent Technologies, Santa Clara, CA, USA). The data on peak height and peak area were extracted for each lipid and polar metabolite based on its extracted ion chromatograms (EICs) for the monitored MRM transitions. The peaks were filtered based on their presence (in 100% of replicates across all operators) and an intensity threshold (of 5 × 10^3 ion counts), and the obtained tables (containing peak areas of detected metabolites across all replicates) were exported to R version 3.5.1 (http://cran.r-project.org/) and RStudio version 1.1.463. For quality control, the peaks were filtered based on their coefficient of variation calculated per solvent, using the independent replicates analyzed across the entire run and considering all four operators. Peaks with CV > 30% were removed from further analysis, with the aim of determining the size of the measurable, and not merely detectable, lipidome and metabolome (a minimal sketch of this filtering step is given after the Supplementary Materials list below). Statistical Data Analysis The R packages "tidyverse" and "ggplot2" were used to format the data and plot the figures, respectively. Fold changes were calculated using the peak area values (no transformation and/or scaling was applied). GraphPad Prism 6 (GraphPad Software Inc., La Jolla, CA, USA) was used for statistical data analysis. One-way ANOVA was used to test the significance of differences in signal abundance between replicates extracted with different solvents, at a significance level of p = 0.05. Conclusions Two single-step sample preparation methods, using IPA and BUME, were evaluated as the best compromise for the simultaneous and reproducible extraction of complex lipids and polar metabolites from human plasma. The relative signal abundance of complex lipids in IPA and BUME extracts was greater than or equivalent to the signal recovered with the Matyash method using MTBE (commonly applied in lipidomics). Importantly, MTBE showed limited performance for the extraction of lysophospholipids, and particularly lysophosphatidylinositols (LPIs). Although the breadth of coverage was the same for both single-step methods, the most robust lipid profiling was achieved following IPA extraction, with the greatest number of profiled lipids with CV < 30%, specifically TAGs. In addition to complex lipids, both the IPA and BUME methods extracted polar metabolites successfully, but less efficiently than methanol. These polar metabolites included proteinogenic amino acids and acylcarnitines, di- and tri-carboxylic acids, carbohydrates, and nucleosides. Based on the examined lipidome and polar metabolome coverage, extraction efficiencies, effectiveness of protein precipitation, and reproducibility, we conclude that both methods, IPA and BUME, are suitable for merged lipid and polar metabolite analysis in large-scale human population studies. Supplementary Materials: The following supplementary figures and tables are available online at http://www.mdpi.com/2218-1989/10/12/495/s1: Figure S1. Relative signal abundance of different lipid classes in MeOH, EtOH, IPA and BUME extracts of human plasma, Figure S2. Relative signal abundance (per lipid class) to signal recovered with biphasic extraction using MTBE, Figure S3.
Signal abundance of internal standards (representing six lipid classes) spiked into plasma samples during single step IPA, BUME and biphasic Matyash extraction (organic MTBE phase), Figure S4. Retention of polar metabolites and complex lipids throughout the chromatographic gradient applied for complex lipid analysis using HILIC-MS/MS in positive ionization mode, Figure S5. Retention of polar metabolites and complex lipids throughout the chromatographic gradient applied for complex lipid analysis using HILIC-MS/MS in negative ionization mode, Figure S6. Background noise from blank extractions performed with (A) methanol, (B) Matyash extraction (organic MTBE phase), (C) IPA and (D) BUME, Figure S7. Relative signal abundance (per polar metabolite class) to signal recovered with MeOH extract, Figure S8. Relative signal abundance of polar metabolites detected in the aqueous phase of Matyash extraction compared to other extraction protocols (including methanol extract as a reference), Figure S9. Diversity and size of measurable lipidome and polar metabolome from BUME extract, Figure S10. Signal intensity of the sphingomyelins detected in the method, Figure S11. Signal intensity of specific hexosylceramides and lactosylceramides with the same alkyl chain composition in IPA, BUME and MTBE extracts, Table S1. MRM transitions of lipids in positive mode, Table S2. MRM transitions of lipids in negative mode, Table S3. MRM transitions of polar metabolites in positive mode, Table S4. MRM transitions of polar metabolites in negative mode, Table S5. Protein content per each solvent or solvent mixture, Table S6. Classification of polar metabolites, Table S7. Abundances (peak areas) of lipid species detected by the HILIC-MS/MS analysis of each solvent extract (organic MTBE phase from Matyash protocol, IPA and BUME) in positive ESI mode, Table S8. Abundances of lipid species detected by the HILIC-MS/MS analysis of each solvent extract (organic MTBE phase from Matyash protocol, IPA and BUME) in negative ESI mode, Table S9. Abundances (peak areas) of polar metabolites detected by the HILIC-MS/MS analysis of each solvent extract (methanol, IPA and BUME) in positive and negative ESI modes, Table S10. Abundances and CVs of lipid species detected by the HILIC-MS/MS analysis of selected single-step extractions (IPA and BUME) in positive ESI mode, Table S11. Abundances and CVs of lipid species detected by the HILIC-MS/MS analysis of selected single-step extractions (IPA and BUME) in negative ESI mode, Table S12. Abundances and CVs of polar metabolites detected by the HILIC-MS/MS analysis of selected single-step extractions (IPA and BUME) in positive and negative ESI mode.
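Referring back to the Data (Pre)Processing section above, the replicate-presence, intensity-threshold, and per-solvent CV filters could be sketched as follows. This is a minimal pandas sketch for illustration only: the table layout (features as rows, replicate injections as columns) and all names are assumptions, not the authors' actual R scripts.

```python
# Minimal sketch of the feature filters described in Data (Pre)Processing:
# keep features present above 5e3 counts in all replicates, then drop features
# with CV > 30% within any extraction solvent.
import pandas as pd

def filter_features(peak_areas: pd.DataFrame, solvent_of: pd.Series,
                    intensity_threshold: float = 5e3, cv_limit: float = 30.0) -> pd.DataFrame:
    """peak_areas: rows = features, columns = replicates (NaN = not detected).
    solvent_of: maps each replicate column name to its extraction solvent."""
    # Presence filter: detected above the threshold in 100% of replicates.
    detected = (peak_areas.notna() & (peak_areas >= intensity_threshold)).all(axis=1)
    kept = peak_areas.loc[detected]

    # CV filter, computed per solvent across its replicates.
    cv_ok = pd.Series(True, index=kept.index)
    for solvent in solvent_of.unique():
        cols = solvent_of.index[solvent_of == solvent]
        sub = kept[cols]
        cv = 100.0 * sub.std(axis=1) / sub.mean(axis=1)
        cv_ok &= cv <= cv_limit
    return kept.loc[cv_ok]
```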
9,132.2
2020-12-01T00:00:00.000
[ "Chemistry", "Medicine" ]
Multi-band Superconductivity in a misfit layered compound (SnSe)1.16(NbSe2)2 We report the discovery of superconductivity with Tc of about 5.3 K in a new misfit layered compound (SnSe)1.16(NbSe2)2. High resolution transmission electron microscopy and selected area electron diffraction of the single-crystalline samples clearly reveal the misfit layered structure. Based on the McMillan equation, the electron-phonon coupling constant λe-ph is estimated to be about 0.96, which is close to the strong-coupling range. The estimated out-of-plane and in-plane upper critical magnetic fields are 8.9 T and 15.5 T, respectively. Hc2^ab(0) is 1.74 times the Pauli paramagnetic limit. Moreover, there is a positive curvature in the Hc2(T) curves near Tc, indicating a feature of multi-band superconductivity. The temperature dependence of the specific heat in the superconducting state can be fitted by a two-band BCS model, which further confirms the multi-band superconductivity. The reduced charge transfer between the two subsystems may account for the enhanced Tc compared with (SnSe)1.16(NbSe2). The physical properties of MLCs depend on the number of TX2 layers (n) in one period of the structure along the c-axis. For example, in (PbSe)1.14(NbSe2)n, when n=1 the compound is non-superconducting and the two subsystems are both orthorhombic, but when n=2 and 3 the compounds are both superconductors with Tc of 3.4 K and 4.8 K, respectively, and the subsystems are monoclinic [5]. Similarly, (PbSe)1.16(TiSe2) is not a superconductor but (PbSe)1.16(TiSe2)2 exhibits superconductivity with Tc of 2.4-3.2 K [11,12,20]. As further examples, the Tc values of (LaSe)1.14(NbSe2) and (LaSe)1.14(NbSe2)2 are 1.4 K and 5.3 K, respectively [6,9]. Generally speaking, superconductivity is enhanced with an increasing number (n) of TX2 layers in one periodic unit. It has been proposed that the charge transfer between the layers of the two subsystems could play a crucial role in determining the properties of MLCs [1,11,21,22]. Charge transfer can produce doping effects similar to chemical doping. Angle resolved photoemission spectroscopy (ARPES) measurements of (PbSe)1.16(TiSe2)n showed that the transferred charge is 0.26 e− and 0.074 e− per TiSe2 for n=1 and n=2, respectively [22]. In comparison with the phase diagram of TiSe2, n=1 is in the overdoped regime while n=2 is very close to the optimal doping point. Although the NbSe2-based MLC (SnSe)1.16(NbSe2) has been reported before [23] and its superconductivity has been investigated recently [14], there has been no report on (SnSe)1.16(NbSe2)2 with n=2 up to now. Here, we report the successful synthesis of (SnSe)1.16(NbSe2)2 single crystals and the discovery of superconductivity with an onset transition temperature of 5.3 K. The misfit feature of the structure is clearly revealed by high resolution transmission electron microscopy (HRTEM) and selected area electron diffraction (SAED). Similar to (SnSe)1.16(NbSe2), (SnSe)1.16(NbSe2)2 also exhibits multi-band features in both the specific-heat and Hc2 data. The in-plane upper critical magnetic field Hc2^ab(0) exceeds the Pauli limit HP significantly, which may result mainly from multi-band effects. Experiment details Single crystals of (SnSe)1.16(NbSe2)2 were prepared via the chemical vapor transport (CVT) method with bromine as the transport agent.
Sn (99.999%), Nb (99.99%), Se (99.999%) and SnBr2 (99.999%) powders with a total mass of 1.8 g were mixed in the molar ratio 1.16:2:5.16:0.44, ground thoroughly, and then sealed into an evacuated quartz ampoule 16 cm in length. The ampoule was heated in a two-zone furnace for 1 week, with the temperatures of the source zone and growth zone controlled at 900 °C and 850 °C, respectively. Plate-like single crystals of (SnSe)1.16(NbSe2)2 with in-plane sizes of 2-4 mm and thicknesses of 0.04-0.1 mm were obtained. The room-temperature x-ray diffraction (XRD) data of the single-crystalline samples were measured with a PANalytical x-ray diffractometer (Empyrean) with a graphite monochromator and Cu Kα radiation. The elemental molar ratio was confirmed by energy-dispersive x-ray spectroscopy (EDX) using a GENESIS4000 EDAX spectrometer. The HRTEM and SAED images were taken at room temperature using an aberration-corrected FEI-Titan G2 80-200 ChemiSTEM. The magnetic properties were measured on a magnetic property measurement system (MPMS-XL5, Quantum Design), and the specific-heat capacity was measured on a physical property measurement system (PPMS-9, Quantum Design). The electrical transport properties were measured using an Oxford Instruments cryostat with a He-3 probe and a 15 T superconducting magnet. Results and discussion Figure 1(a) shows the EDX pattern of a (SnSe)1.16(NbSe2)2 single crystal. The molar ratio of SnSe to NbSe2 is determined to be about 1.10(3):1, very close to the nominal composition. Moreover, about 4% of the Se is found to be substituted by Br, which is unavoidable given the bromine transport agent. Such halogen contamination also occurs in other MLCs, such as (SnSe)1.16(NbSe2) and (PbSe)1.12(TaSe2), which were grown by the same CVT method [14,15]. Figure 1(b) shows the XRD pattern of a (SnSe)1.16(NbSe2)2 single crystal at room temperature, and the inset is a photograph of a typical as-grown single crystal. All the peaks in figure 1(b) are (00l) peaks, indicating the single-crystal character of the sample. The HRTEM image clearly displays the alternating stacking of SnSe layers and double NbSe2 layers, and the different features of the two directions in the ab plane: incommensurate a-axis lattice constants between the two subsystems and the same periodicity along the b-axis for the two subsystems. The corresponding SAED patterns also reveal this difference: along the [010] direction, the SAED pattern contains two sets of reflections, from the SnSe and NbSe2 subsystems respectively, which are indicated by the red and green arrows; while along the [100] direction, the SAED pattern contains only one set of reflections because of the common periodicity of the two subsystems. We calculated the lattice parameters of (SnSe)1.16(NbSe2)2 from the SAED patterns and summarize them in table 1. The values of a and b are close to those of (SnSe)1.16(NbSe2) for both subsystems [14], but the space groups of the two subsystems are both changed. We obtain the incommensurability factor as 1 + y = 2a2/a1 ≈ 1.16, where a1 and a2 are the a-axis lattice constants of the SnSe and NbSe2 subsystems, respectively. In the SnSe subsystem, one unit cell contains one SnSe layer, and these layers form a monoclinic structure with the C2/m space group.
Meanwhile, in the NbSe2 subsystem, one unit cell contains two NbSe2 layers; the stacking of the two layers is similar to 2Ha-NbSe2, but with an offset of b/6 between the two layers. The double-layer NbSe2 subsystem also forms a monoclinic structure with the C2/m space group. This structure is the same as in (PbSe)1.16(NbSe2)2 [5]. Figure 3(a) shows the temperature dependence of the out-of-plane (ρc) and in-plane (ρab) electrical resistivity for a (SnSe)1.16(NbSe2)2 single crystal. For both directions, the onset superconducting transition temperature Tc is about 5.3 K (determined from 90% of the normal-state resistivity). Figure 3(b) shows the dc magnetic susceptibility as a function of temperature around Tc under a magnetic field of 10 Oe applied along the ab-plane. The superconducting shielding volume fraction slightly exceeds 100%, which is due to the small demagnetization effect. It should be noted that the behavior of ρc and ρab is very different. In the normal state, ρab(T) decreases monotonically with decreasing temperature, i.e., typical metallic behavior, but ρc(T) shows a broad hump around 150 K. A similar metal-to-nonmetal crossover in ρc has also been observed in other layered metals, such as CsCa2Fe4As4F2 [24], Lix(NH3)yFe2Se2 [25] and Tl2Ba2CaCu2O8 [26]. The mainstream explanation for such a metal-to-nonmetal crossover is in terms of incoherent hopping between layers because lc << dinter, where lc is the mean free path along the c-axis and dinter is the distance between two layers. Figure 3(c) shows the specific heat around Tc, plotted as C/T versus T². From a linear fit to C/T = γ + βT², we obtain the Sommerfeld coefficient γ = 56.1 mJ mol−1 K−2 and the lattice specific-heat coefficient β = 6.03 mJ mol−1 K−4. The reduced Sommerfeld coefficient is γ/m = 6.74 mJ mol−1 K−2, where m is the number of atoms per formula unit. In 2Ha-NbSe2, γ/m = 6.33 mJ mol−1 K−2, which is close to the value for our sample [27]. The Debye temperature ΘD = 138.94 K is obtained from the formula ΘD = (12π^4 mR/(5β))^(1/3), where R is the gas constant. Moreover, the electron-phonon coupling constant λe-ph is estimated to be 0.96 from the McMillan equation, λe-ph = [1.04 + μ* ln(ΘD/1.45Tc)] / [(1 − 0.62μ*) ln(ΘD/1.45Tc) − 1.04], where μ* is the Coulomb pseudopotential, for which an empirical value of 0.15 is used. Compared with (SnSe)1.16(NbSe2) (λe-ph = 0.80) [14], the λe-ph of (SnSe)1.16(NbSe2)2 is enhanced. This value indicates that (SnSe)1.16(NbSe2)2 is close to the strong-coupling regime. A similar phenomenon also occurs in (PbSe)1.16(TiSe2)n (n = 1, 2). The energy distribution curves of ARPES measurements also suggest that the electron-phonon coupling of (PbSe)1.16(TiSe2)2 (n=2) is much larger than that of (PbSe)1.16(TiSe2) (n=1) [22]. A specific-heat jump induced by the superconducting transition is evident around Tc. Figure 3(d) shows the normalized electronic specific heat Ce/γT versus t, where t = T/Tc. Such a temperature dependence of the specific heat cannot be fitted by a single BCS-gap model. We adopted a two-gap BCS model to fit the data, as was done for (SnSe)1.16(NbSe2) and other multi-band superconductors [14,27,29-32]. From this fit, we obtain 2Δ1 = 5.66 kBTc and 2Δ2 = 1.59 kBTc, with weights γ1:γ2 = 52%:48%. Compared with (SnSe)1.16(NbSe2) [14], both gaps are enhanced and the weights change slightly.
These changes may be related to the change in the Fermi surface caused by the reduced charge transfer between the two subsystems. Furthermore, the normalized specific-heat jump ΔCe/γTc is estimated to be 0.85, close to the values of some other MLC superconductors [2,13,14]. In order to study the effect of charge transfer on Tc, we measured the Hall effect of (SnSe)1.16(NbSe2)n (n=1, 2). The (SnSe)1.16(NbSe2) single crystal is the same sample used in our previous work [14]. Figure 4(a) shows the temperature dependence of the Hall coefficient of (SnSe)1.16(NbSe2) (n1) and (SnSe)1.16(NbSe2)2 (n2). In 2Ha-NbSe2, the positive Hall coefficient is reduced and finally becomes negative below 50 K due to the CDW transition [33]. This behavior is absent in n1 and n2 because there is no CDW transition. In n1 and n2, the Hall coefficient is positive over the whole temperature range, indicating that the predominant charge carriers are hole-type. Figure 4(b) displays the corresponding carrier concentrations of the two samples. Compared with 2Ha-NbSe2 [33], the electron-type charge transfer from the SnSe layers to the NbSe2 layers decreases the hole-type carrier concentration in the two MLCs. Compared with n1, the charge transfer in n2 is reduced; hence the carrier concentration is enhanced. We summarize the relationship between Tc and the carrier concentration (at 50 K) of the three samples in the inset of figure 4(b). In 2Ha-NbSe2, most doping and intercalation decrease Tc [34-38], and only pressure can increase Tc slightly [39]. We speculate that undoped 2Ha-NbSe2 is very close to the optimal doping point. Most cases of chemical doping, intercalation and/or charge transfer in MLCs drive NbSe2 into the underdoped regime and thus reduce Tc. Figure 5 shows the temperature dependence of ρab for a (SnSe)1.16(NbSe2)2 single crystal around Tc under different magnetic fields, applied along the c-axis (figure 5(a)) and parallel to the ab-plane (figure 5(b)), respectively. Figure 6 plots the temperature dependence of the upper critical fields Hc2^ab and Hc2^c, which are determined from the points at 50% of the normal-state resistivity (Tc^mid). In both directions, the curves exhibit a positive curvature close to Tc, which is usually regarded as a feature of multi-band superconductivity [14,29,30,40-42]. Thus we adopt a two-band model to fit Hc2(T) [43], where t = T/Tc is the reduced temperature and the coefficients, such as a0 = 2(λ11λ22 − λ12λ21), are combinations of the intraband (λ11, λ22) and interband (λ12, λ21) coupling constants (the full expressions are given in [43]). The fitting parameters for the n2 sample are similar to those of n1. For one band, the intraband coupling constant is much greater than that of the other band, while its interband coupling constant is smaller. These fitting results are further supported by the two-gap fit of the specific-heat data, indicating multi-band superconductivity in (SnSe)1.16(NbSe2)2. Then ξab(0) ≈ 6.08 nm and ξc(0) ≈ 3.49 nm are obtained. The Pauli paramagnetic limit (HP) for the upper critical field is μ0HP = 1.84Tc = 8.9 T. Hence Hc2^ab(0)/HP = 1.74, which is a large value among MLC superconductors. As mentioned above, most MLC superconductors have small upper critical fields Hc2 (compared to the Pauli limit) in both directions. This value is also larger than that in the n1 sample (about 1.25) [14]. The multi-band effects may enhance Hc2^ab in both n1 and n2. Conclusion In summary, we synthesized a new misfit layered compound, (SnSe)1.16(NbSe2)2, and discovered its superconductivity with Tc of 5.3 K.
The details of the structure were revealed by HRTEM and SAED. The superconducting shielding volume fraction is close to 100%, confirming bulk superconductivity. The reduced electron-type charge transfer could actually increase the hole-type charge carrier density in the conducting NbSe2 layers and thus accounts for the enhanced Tc compared with (SnSe)1.16(NbSe2). The Sommerfeld coefficient is γ = 56.1 mJ mol−1 K−2 and the electron-phonon coupling constant is λe-ph = 0.96. Hc2(0) is estimated to be 8.9 T and 15.5 T for the out-of-plane and in-plane directions, respectively. In particular, for the in-plane direction, Hc2(0)/HP is about 1.74. Both the specific-heat and Hc2 data suggest that (SnSe)1.16(NbSe2)2 is a multi-band superconductor. The multi-band effect may be the main reason for the significantly enhanced Hc2^ab(0).
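The quantities quoted above can be cross-checked with standard textbook relations; the sketch below is not code from the paper, only an illustration using the reported values (β, γ, γ/m, Tc, μ* = 0.15 and the two upper critical fields). Small deviations from the quoted numbers reflect rounding of the inputs.

```python
# Cross-check: Debye temperature from beta and the atoms per formula unit,
# electron-phonon coupling from the inverted McMillan formula, and anisotropic
# Ginzburg-Landau coherence lengths from the two upper critical fields.
import math

R = 8.314          # gas constant, J mol^-1 K^-1
PHI0 = 2.067e-15   # magnetic flux quantum, T m^2

gamma, gamma_per_atom = 56.1, 6.74   # mJ mol^-1 K^-2 (reported values)
beta = 6.03e-3                       # J mol^-1 K^-4
Tc, mu_star = 5.3, 0.15
Hc2_c, Hc2_ab = 8.9, 15.5            # T

m_atoms = gamma / gamma_per_atom     # atoms per formula unit (~8.3)
theta_D = (12 * math.pi**4 * m_atoms * R / (5 * beta)) ** (1 / 3)

x = math.log(theta_D / (1.45 * Tc))  # inverted McMillan formula
lam = (1.04 + mu_star * x) / ((1 - 0.62 * mu_star) * x - 1.04)

xi_ab = math.sqrt(PHI0 / (2 * math.pi * Hc2_c))   # Hc2^c  = Phi0 / (2 pi xi_ab^2)
xi_c = PHI0 / (2 * math.pi * Hc2_ab * xi_ab)      # Hc2^ab = Phi0 / (2 pi xi_ab xi_c)

print(f"Theta_D ~ {theta_D:.1f} K        (text: 138.94 K)")
print(f"lambda_e-ph ~ {lam:.2f}          (text: 0.96)")
print(f"xi_ab(0) ~ {xi_ab*1e9:.2f} nm, xi_c(0) ~ {xi_c*1e9:.2f} nm (text: 6.08, 3.49 nm)")
```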
3,588.2
2020-01-06T00:00:00.000
[ "Physics" ]
A capacity approach to box and packing dimensions of projections of sets and exceptional directions Dimension profiles were introduced in [8,11] to give a formula for the box-counting and packing dimensions of the orthogonal projections of a set $E \subset R^n$ onto almost all $m$-dimensional subspaces. However, these definitions of dimension profiles are indirect and are hard to work with. Here we firstly give alternative definitions of dimension profiles in terms of capacities of $E$ with respect to certain kernels, which lead to the box-counting and packing dimensions of projections fairly easily, including estimates on the size of the exceptional sets of subspaces where the dimension of the projection is smaller than the typical value. Secondly, we argue that with this approach projection results for different types of dimension may be thought of in a unified way. Thirdly, we use a Fourier transform method to obtain further inequalities on the size of the exceptional subspaces. 1 Introduction and main results Introduction The relationship between the Hausdorff dimension of a set E ⊂ R n and of its orthogonal projections π V (E) onto subspaces V ∈ G(n, m), where G(n, m) is the Grassmannian of m-dimensional subspaces of R n and π V : R n → V denotes orthogonal projection, has been studied since the foundational work of Marstrand [15] and Mattila [16]. They showed that for Borel E ⊂ R n dim H π V (E) = min{dim H E, m} (1.1) for almost all m-dimensional subspaces V (with respect to the natural invariant probability measure γ n,m on G(n, m)) where dim H denotes Hausdorff dimension. Kaufman [13,14] used capacities to prove and extend these results and this has become the standard approach for such problems. There are many generalisations, specialisations and consequences of these projection results, see [6,18] for recent surveys. It is natural to seek analogous projection results for other notions of dimension. However, examples show that the direct analogue of (1.1) is not valid for lower or upper box-counting (Minkowski) dimensions or packing dimension, though there are non-trivial lower bounds on the dimensions of the projections, see [7,9,12]. It was shown in [8,11] that the box-counting and packing dimensions of π V (E) for a Borel set E are constant for almost all V ∈ G(n, m) but this constant value, termed a 'dimension profile' of E, had a very indirect definition in terms of the supremum of dimension profiles of measures supported by E which in turn are given by critical parameters for certain almost sure pointwise limits [8]. A later approach in [11] defines box-counting dimension profiles in terms of weighted packings subject to constraints. I was never very happy with these definitions, which are artificial, indirect and awkward to use. To make the concept more attractive and useful, this paper presents an alternative and more natural way of defining box-counting and packing dimension profiles in terms of capacities with respect to certain kernels. Then using simple properties of equilibrium measures we can find the 'typical' box or packing dimensions of π V (E), that is those that are realised for almost all V ∈ G(n, m), as a dimension profile of E. With little more effort, we can also obtain some upper bounds for the dimension of the exceptional V ∈ G(n, m) where the projection dimension is smaller than this typical value.
Then, using Fourier transform methods, we will obtain new estimates on the dimension of the exceptional sets of V ∈ G(n, m) for box and packing dimensions when, roughly speaking, the dimension of E is greater than m. Thus in (2.4) we will define the s-box dimension profile of E ⊂ R n for s > 0 as where C s r (E) is the capacity of E with respect to the continuous kernel (2.1) (more precisely taking lower and upper limits will give the lower and upper dimension profiles). We will show in Section 2.2 that if s ≥ n then dim s B E is just the usual box-counting dimension of E. On the other hand, in Section 3.1, we show that if 1 ≤ m ≤ n − 1, then dim m B E equals the box-counting dimension of π V (E) for almost all V ∈ G(n, m). In this way, the dimension profile dim s B E may be thought of as the dimension of E when regarded from an s-dimensional viewpoint. Analogously, dim s H E = min{dim H E, s} might be interpreted as the Hausdorff dimension profile for the Hausdorff dimension result (1.1). By defining packing dimension profiles in terms of upper box dimension profiles in Section 4 we obtain similar results for the packing dimension of projections. In Section 5 we consider inequalities satisfied by the dimension profiles which help give a feel for the results. Since their conception, dimension profiles have also become a key tool for investigating the packing and box dimensions of the images of sets under random processes, see for example [4,21,24]. Main results on projections and exceptional directions Given the definitions of the dimension profiles which will be formally defined in (2.4), the basic projection results are easily stated. Essentially, the m-dimension profiles of E give the dimension of the projections of E onto almost all m-dimensional subspaces. Theorem 1.1 is the basic result on dimension of projections, and Theorems 1.2 and 1.3 concern the dimensions of the set of V ∈ G(n, m) for which the dimensions of the projections onto V are exceptionally small. We include the well-known Hausdorff dimension projection results for comparison which are directly analogous to the conclusions for box and packing dimension if we define dim s H E := min{s, dim H E} to be the Hausdorff dimension profile of E. Theorem 1.1. Let E ⊂ R n be a non-empty Borel set (assumed to be bounded in (ii) and (iii)). Then for all V ∈ G(n, m), with equality in all of the above for γ n,m -almost all V ∈ G(n, m). Part (i) of Theorem 1.1 goes back to Marstrand [15] and Mattila [16], and parts (ii)-(iv) were obtained in [8,11] but starting with the original cumbersome definitions of the box and packing dimension profiles. After relating capacities and box-counting numbers in Section 2, parts (ii) and (iii) will follow easily, and (iv) will come from the relationship between packing and box-dimension profiles discussed in Section 4. To put these estimates into context, dim m P E, etc. cannot be too small compared with dim P E. Indeed these bounds are sharp and there are identical inequalities for dim B and dim B , see Section 5 and [7]. Thus the almost sure dimensions of the projections are also constrained by these bounds. Whilst equality holds in Theorem 1.1 for γ n,m -almost all V ∈ G(n, m), dimension profiles can provide further information on the size of the set of V for which the box dimensions of the projections π V E are exceptionally small. Note that G(n, m) is a manifold of dimension m(n − m) so dim H G(n, m) = m(n − m) and it is convenient to express our estimates relative to this dimension. 
Theorem 1.2 gives estimates for the Hausdorff dimensions of the exceptional sets in terms of dim s B E, etc. when 0 ≤ s ≤ m and Theorem 1.3 gives estimates when m ≤ s ≤ n. Theorem 1.2. Let E ⊂ R n be a non-empty Borel set (assumed to be bounded in (ii) and (iii)) and let 0 ≤ s ≤ m. Then Noting that dim s B E, etc. increases with s, these bounds for the Hausdorff dimension of the exceptional sets of V decrease as s decreases. Using capacity ideas, all parts of Theorem 1.2 may be derived using minor modifications to the proofs for Theorem 1.1. Part (i) was first obtained by Kaufman [13] when he introduced the potential theoretic approach for the Hausdorff dimension of projections. Parts (ii)-(iv) were established in [8,11] using the earlier definitions of dimension profiles but the proofs here using capacities are rather simpler. The spirit of the next theorem is that if the dimension of E is significantly larger than that of the typical projection given by Theorem 1.1 then the exceptional set of V will be small. Theorem 1.3. Let E ⊂ R n be a non-empty Borel set (assumed to be bounded in (ii) and (iii)) and let 0 ≤ γ ≤ n − m. Then These estimates are expressed in terms of dim m+γ B E − γ, etc. since γ cannot easily be isolated from such expressions (except in case (i)). It follows from inequality (5.4) derived in Section 5 that dim m+γ Case (i) of Theorem 1.3 was established in [3], using Fourier transforms, see also [19]. We will use Fourier methods to obtain the box dimension cases (ii)-(iii), from which we will deduce (iv) . We remark that other recent delicate estimates have been given by [20] using Radon transform estimates and by [1,10] using ideas from additive combinatorics. Capacities and box-counting dimensions Throughout this section we will consider projections of a Borel set E ⊂ R n which we will take to be non-empty and bounded to ensure that its box dimensions are defined. Moreover, since the lower and upper box dimensions and the capacities of a set equal those of its closure, it is enough to prove our results under the assumption that E is non-empty and compact. Capacity, energy and dimension profiles Potential kernels of the form φ(x) = |x| −s are widely used in Hausdorff dimension arguments, see for example [13,14,17,19]. For box-counting dimensions, another class of kernels turns out to be useful. Let s > 0 and r > 0 and define the potential kernels φ s r (x) = min 1, originally introduced in [7,9]. Let E ⊂ R n be non-empty and compact and let M(E) denote the set of Borel probability measures supported by E. The energy of µ ∈ M(E) with respect to φ s r is defined by The capacity C s r (E) of E is the reciprocal of the minimum energy achieved by probability measures on E, that is since our kernels φ s r are continuous and E is compact, 0 < C s r (E) < ∞. For a general bounded set the capacity is defined to be that of its closure. The following energy-minimising property is standard in potential theory, but it is key for our development, so we give the short proof which is particularly simple for continuous kernels. Lemma 2.1. Let E ⊂ R n be non-empty and compact and s > 0 and r > 0. Then the infimum in (2.2) is attained by a measure µ 0 ∈ M(E). Moreover for all x ∈ E, with equality for µ 0 -almost all x ∈ E. Then µ k has a subsequence that is weakly convergent to some µ 0 ∈ M(E). Since φ s r (x − y) is continuous the infimum is attained. Suppose that φ s r (z − y)dµ 0 (y) ≤ γ − ǫ for some z ∈ E and ǫ > 0. 
Let δ z be the unit point mass at z and for 0 < λ < 1 let which contradicts that µ 0 minimises the energy integral on taking λ sufficiently small. Thus inequality (2.3) is satisfied for all x ∈ E, and equality for µ 0 -almost all x is immediate from (2.2). For s > 0 we define the lower and upper s-box dimension profiles of E ⊆ R n in terms of capacities: Capacities and box-counting numbers For a non-empty compact E ⊂ R n , let N r (E) be the minimum number of sets of diameter r that can cover E. Recall that the lower and upper box-counting dimensions or box dimensions of E are defined by with the box-counting dimension given by the common value if the limit exists, see for example [5] for a discussion of box dimensions and equivalent definitions; in particular the box dimensions of a set equal those of its closure. In this section we prove Corollary 2.4, that provided that s ≥ n the capacity C s r (E) and the covering number N r (E) are comparable. This is not necessarily the case if 0 ≤ s < n and it is this disparity that gives the formulae for the box dimensions of projections. The next two lemmas obtain lower and upper bounds for N r (E) in terms of energies or potentials. Lemma 2.2. Let E ⊂ R n be non-empty and compact and let r > 0. Suppose that there is a measure µ ∈ M(E) such that for some γ > 0 where c n depends only on n. In particular (2.7) holds if, for some s > 0, Proof. Let C(E) be the set of closed coordinate mesh cubes of diameter r (i.e. cubes of the form n i=1 [m i rn −1/2 , (m i + 1)rn −1/2 ] where the m i are integers) that intersect E; suppose that there are N ′ r (E) such cubes. Using Cauchy's inequality, noting that a set of diameter r can intersect at most (3 √ n) n of the cubes of C(E). Let E ⊂ R n be non-empty and compact and let s > 0 and r > 0. Suppose that E supports a measure µ ∈ M(E) such that for some γ > 0 where c n,s depends only on n and s. Summing over the x i , so, for some k with 0 ≤ k ≤ ⌈log 2 (M/r)⌉, the case of s > n coming from comparison with a geometric series. For all x ∈ E a volume estimate using the disjoint balls B(x i , r) shows that at most (2 k + 1) n ≤ 2 (k+1)n of the x i lie in B(x, 2 k r). Consequently each x belongs to at most 2 (k+1)n of the B(x i , 2 k r). Thus using that s ≥ n. Inequality (2.10) now follows from (2.11), (2.12) and that N r (E) ≤ a n N ′ r (E) where a n is the minimum number of balls in R n of diameter 1 that can cover a ball of radius 1. The comparability of box-counting numbers and capacities for s ≥ n now follows on combining the previous two lemmas. Corollary 2.4. Let E ⊂ R n be non-empty and bounded and let r > 0. Then , where E is the closure of E, the conclusion transfers directly to all non-empty bounded E. Equality of the box dimensions and the dimension profiles for s ≥ n is immediate from Corollary 2.4. Proofs of the projection results In this section we prove parts (ii) and (iii) of the theorems stated in Section 1.2 concerning the lower and upper box dimensions of projections. Parts (iv) on packing dimensions will follow from the relationships between packing dimension and upper box dimension and their dimension profiles which will be discussed in Section 4. Proofs of Theorems 1.1 and 1.2 parts (ii) and (iii) The upper bound for the dimensions of projections onto subspaces is an easy consequence of the way that the kernels behave under projections together with the relationship between box dimensions and capacities from Lemma 2.3. Proof of Theorem 1.1 (ii) and (iii) (inequalities). 
It is enough to obtain the upper bound when E is compact. Let V ∈ G(n, m) and r > 0. Since π V does not increase distances, φ m r (π V (x) − π V (y)) = min 1, For each r > 0 we may, by Lemma 2.1, find a measure µ ∈ M(E) such that for all where µ V ∈ M(π V E) is the image of the measure µ under π V , defined by g(w)dµ V (w) = g(π V x)dµ(x) for continuous g : V → R and by extension. . Taking lower and upper limits as r ց 0, we conclude that dim Note that a similar argument shows that for any Lipschitz map f : Almost sure equality in Theorem 1.1(ii) and (iii) is more or less a particular case of the corresponding parts of Theorem 1.2 so we combine the proofs. We first need a lemma to estimate the measure of the subspaces V onto which the projection of two given points are close to each other. We assume that the Grassmanian G(n, m) is equipped with some natural locally m(n − m)-dimensional metric d, and H t denotes t-dimensional Hausdorff measure on G(n, m), defined with respect to this metric. There is a number a n,m > 0 depending only on n and m such that φ m r (x−y) ≤ γ n,m V : |π V x−π V y| ≤ r ≤ a n,m φ m r (x−y) (x, y ∈ R n , r > 0). Proof. (a) Note that φ m r (x) is comparable to the proportion of the subspaces V ∈ G(n, m) for which the r-neighbourhoods of the orthogonal subspaces to V contain x, specifically, for all 1 ≤ m < n there are numbers a n,m > 0 such that φ m r (x) ≤ γ n,m V : |π V x| ≤ r ≤ a n,m φ m r (x) (x ∈ R n , r > 0). This standard geometrical estimate can be obtained in many ways, see for example [17,Lemma 3.11]. One approach is to normalise to the case where |x| = 1 and then estimate the (normalised) (n − 1)-dimensional spherical area of S ∩ {y : dist(y, V ⊥ ) ≤ r}, that is the intersection of the unit sphere S in R n with the 'tube' or 'slab' of points within distance r of some (n − m)-dimensional subspace V ⊥ of R n . Linearity then gives (3.1). (b) By Frostman's Lemma, see [16,19], there is a Borel probability measure τ supported on a compact subset of K and a > 0 such that where B G (V, ρ) denotes the ball in G(n, m) of centre V and radius ρ with respect to the metric d. This ensures that the subspaces in K cannot be too densely concentrated, and a geometrical argument gives for some a K > 0, see [16] or [19, (5.12)] for more details. Proof of Theorem 1.1 (a.s. equality) and Theorem 1.2, parts (ii) and (iii). Let As before we may take E to be compact. Suppose, for a contradiction, that H m(n−m)−(m−s) (K) > 0. By Lemma 3.1(b) there is a measure τ supported by K with τ (K) > 0 and satisfying (3.2). For µ ∈ M(E) and V ∈ G(n, m), write µ V for the projection of µ onto V defined by f (w)dµ V (w) = f (π V (x))dµ(x) for continuous f on V and by extension. Using Fubini's theorem, Applying (3.5) to each µ k and summing over k, Thus, for τ -almost all V there is a number M V < ∞ such that for all k, as the projected measures µ k V are supported by π V E ⊂ V . Hence lim r→0 log N r (π V E)/− log r ≥ t. This is so for all t < dim The inequality for the lower dimensions for almost all V follows in a similar manner, noting that it is enough to take r = 2 −k , k ∈ N when considering the limits as r ց 0 in the definitions of lower box dimension and lower box dimension profiles. Thus we have proved Theorem 1.2(ii) and (iii). Almost sure equality in Theorem 1.1(ii) and (iii) follows in exactly the same way by taking s = m, replacing τ by the restriction of γ n,m to K and using (3.1) at (3.5) to get a similar contradiction if γ n,m (K) > 0. 
✷ We use a Fourier transform approach, analogously to the Hausdorff dimension case stated in Theorem 1.3(i), see [3]. Proof of Theorem 1.3 parts (ii) and (iii) We define the Fourier transform of a function f ∈ L 1 (R n ) and a finite measure µ on R n by with the definitions extending to distributions in the usual way. Fourier transforms of radially symmetric functions can be expressed as integrals against Bessel functions, see [19,Section 3.3], and in particular, for s > n, r > 0, the kernels φ s r on R n transform as distributions to φ s r (ξ) = c n s|ξ| −n−1+s r s ∞ r|ξ| J n/2 (u) u n/2−s−1 du (ξ ∈ R n , r > 0), where J n/2 is the Bessel function of order n/2 and c n depends only on n (this form follows from integrating the usual radial transform expression by parts). However, this oscillating transform is difficult to work with, so we introduce an alternative kernel ψ s r that is equivalent to φ s r and which has strictly positive Fourier transform. Thus for 0 < s < n and r > 0 we define ψ s r : R n → R + by the convolution where for convenience we write and also e r (x) := e x r = exp − 1 2 x r 2 (x ∈ R n , r > 0). In particular The following lemma summarises the key properties of ψ s r . Lemma 3.2. For 0 < s < n let ψ s r be as in (3.6). Then (a) there are constants c 1 , c 2 > 0 depending only on n and s such that there is a constant c 3 depending only on n and s such that ψ s r (ξ) = c 3 r s |ξ| s−n e(rξ) (ξ ∈ R n , r > 0). Proof. (a) By (3.7) it is enough to establish (3.8) when r = 1. Then where c = exp(− 1 2 ) and χ B(0,1) is the indicator function of the unit ball. By obvious estimates, writing v n for the volume of B(0, 1), if |x| ≤ 1 then J(x) ≥ 2 −s v n and if |x| > 1 then J(x) ≥ (2|x|) −s v n . The right-hand inequality of (3.8) follows for some c 2 > 0 when r = 1 and thus for all r > 0. For the left-hand inequality, fixing M > n, there is a constant c > 0 such that e(x) ≤ c 1 + |x| −M for all x ∈ R n , so from (3.10), Splitting the domain of integration of (3.11) into regions |y| ≤ 1 and |y| > 1 easily shows that the integral is bounded. Then splitting the domain into regions |y| ≤ 1 2 |x| and |y| > 1 2 |x| gives upper bounds of orders O(|x| n−s−M ) and O(|x| −s ) respectively, so a bound of O(|x| −s ) overall. Thus the left-hand inequality of (3.8) follows for a suitable c 1 when r = 1 and so for all r > 0. We now express energies with respect to the kernel ψ s r in terms of Fourier transforms. and e r (x − y)dµ(x)dµ(y) = c 6 r n e(rξ)| µ(ξ)| 2 dξ. (3.14) Proof. Intuitively (3.12) follows by applying Parseval's formula and the convolution formula. Justification of this requires some care, by first working with approximations to µ given by µ * δ ǫ where {δ ǫ } ǫ>0 is an approximate identity. However, the proof follows exactly that for the Reisz kernel | · | −s given, for example, in [19,Theorem 3.10]. Equation (3.13) then follows from (3.9). The identity (3.14) follows in a similar way. For each V ∈ G(n, m) we may decompose x ∈ R n as x = x V + x V ⊥ , where x V ∈ V and x V ⊥ ∈ V ⊥ , and where appropriate we will write x as (x V , x V ⊥ ) in the obvious way. Given µ ∈ M(E) we define Radon measures ν V on each V ∈ G(n, m) by for all continuous f on V and by extension. Then ν V is a weighted projection of µ onto V and the support of ν V is the projection of the support of µ onto V , in particular using the transform of the symmetric e(ξ V ⊥ ) and Fubini's theorem. Next we relate the transforms of the ν V to that of µ for each V . 
To enable us to integrate (3.17) over V we need a strightforward bound for the integral of exp(− 1 4 |ξ V ⊥ | 2 ). Lemma 3.5. Let W be an analytic subset of G(n, m) with H t (W ) > 0 where 0 ≤ (m − 1)(n − m) < t < m(n − m). Then there exists a Borel probability measure τ supported by W and a constant c 8 depending only on n, m and t such that for all ξ ∈ R n Proof. A consequence of (3.3) is that there exists a probability measure τ supported by W and c > 0 such that see [19, (5.11)]. Then since the integral with respect to λ is finite. Proof of Theorem 1.3. Let 0 < d < d ′ < dim m+γ B E. Then for each 0 < r ≤ 1 2 there is a measure µ r ∈ M(E) such that the energy where c is independent of r, using (3.13), (3.8) and (2.4). Let ν r V be the weighted projection of µ r onto V derived from µ r as in (3.15). Let W r ⊆ G(n, m) be the set Suppose, for a contradiction, that H t (W ) > 0. By Lemma 3.5 there is a probability measure τ supported by W satisfying For convenience write µ k = µ 2 −k and ν k V = ν 2 −k V . Then, using (3.17), Lemma 3.4,(3.20) and (3.18), By the Borel-Cantelli lemma H t (W ) = 0, a contradiction, so dim H W ≤ t. For all V / ∈ W , by (3.14), for all sufficiently large k, by (3.19). Since ν k V is supported by π V E, Lemma 2.2 implies that there is c ′ > 0 such that N 2 −k (π V E) ≥ c ′ 2 k(d−γ) for all sufficiently large k, so dim B (π V E) ≥ d−γ, since when finding box-dimensions it is enough to consider a sequence of scales r = 2 −k (k ∈ N). This is true for all 0 Packing dimensions In this section we show how the results for box-counting dimensions carry over to the packing dimensions. Packing measures and dimensions were introduced by Taylor and Tricot [22,23] as a type of dual to Hausdorff measures and dimensions, see [5,17] for more recent expositions. Whilst, analogously to Hausdorff dimensions, packing dimensions can be defined by first setting up packing measures, an equivalent definition in terms of upper box dimensions of countable coverings of a set is often more convenient in practice. Thus for E ⊂ R n we may define the packing dimension of E by since the box dimension of a set equals that of its closure, we can assume that the sets E i in (4.1) are all compact. It is natural to make an analogous definition of the packing dimension profile of E ⊂ R n for s > 0 by With this definition, properties of packing dimension can be deduced from corresponding properties of upper box dimension. Thus we get an immediate analogue of Corollary 2.5. With these definitions we can deduce the packing dimension parts (iv) of our main theorems from the corresponding upper box dimension parts (iii). For this we need the following 'localisation' property. Then F is compact and, since dim s P is countably stable, dim s P F > t and furthermore dim s Suppose for a contradiction that U is an open set such that F ∩ U = ∅ and dim s P (F ∩ U) ≤ t. As B is a basis of open sets we may find V ⊂ U with V ∈ B such that F ∩ V = ∅ and dim s which contradicts that V ∈ B. For a general Borel set E with dim s P E > t we need to find a compact subset E ′ ⊂ E with dim s P E ′ > t which then has a suitable subset as above. Whilst this is intuitively natural, I am not aware of a simple direct proof from the definition (4.2) of packing dimension profiles in terms of box dimension profiles. However the existence of such a set E ′ is proved in [11] using packing-type measures. In that paper, measures P s,d are constructed so that dim s P E = inf{d : P s,d (E) < ∞}. 
If dim s P E > t then P t,d (E) = ∞ and [11, Theorem 22] gives a construction of a compact E ′ ⊂ E with P t,d (E ′ ) = ∞, so that dim s P E ′ > t. The above argument can then be applied to E ′ . With the definitions of dim P and dim s P we can transfer the results on projections and exceptional sets from upper box dimensions to packing dimensions. Proof of part (iv) of Theorems 1.1, 1.2 and 1.3. If t > dim s P E we may cover E by a countable collection of compact sets E i such that dim s B E i < t. By Theorem 1.1(iii), for all V ∈ G(n, m), t by (4.1), so as this holds for all t > dim s P E, the inequality in Theorem 1.1(iv) follows. We next derive Theorem 1.2(iv) from Theorem 1.2(iii). Let 0 < s ≤ m and let t < dim s P E. By Proposition 4.2 we may find a non-empty compact F ⊂ E such that for every open U that intersects F , dim s P (F ∩ U) > t, so in particular dim is any cover of the compact set π V (F ) by a countable collection of compact sets, Baire's category theorem implies that there is an index k and an open set U such that This is true for all t < dim s P E, so the conclusion follows from taking a countable sequence of t increasing to dim s P E. The derivation of part (iv) of Theorem 1.3 from part (iii) is virtually identical, except at (4.3) we take and note that dim H W i ≤ m(n − m) − γ for each i. Finally, γ n,m -almost sure equality in Theorem 1.1(iv) again follows from part (iii) by the same argument, this time taking W i as in (4.3) with s = m and noting that γ n,m W i = 0 so that γ n,m W = 0. ✷ Inequalities A number of inequalities are satisfied by the dimension profiles; these were obtained for packing dimension profiles in[8, Section 6] but their derivation is more direct using our capacity approach. In particular inequality (5.2) may be written in three equivalent ways which give different insights into the behaviour of the profiles. If d(s) > 0 then (5.2) is equivalent to giving the Lipschitz form Proof. First note that it is enough to prove (5.1) and (5.2) for d(s) = dim s B E and d(s) = dim s B E. The analogues for d(s) = dim s P E then follow using the definition (4.1) of packing dimension profiles in terms of upper box dimension profiles. Note also that (5.3) and (5.4) come from simple rearrangements of (5.2). Inequality (5.1) is immediate from the definitions since from (2.1) φ s r (x) ≥ φ t r (x) if s ≤ t. For the right-hand side of (5.2) note that C s r (E) −1 = φ s r (x − y)dµ 0 (y) ≥ r s ∞ r |x − y| −s dµ 0 (y) for some x ∈ E, where µ 0 is an energy-minimising measure on E, and this last integral is bounded away from 0 for small r; taking lower or upper limits as r ց 0 gives the conclusion for box dimensions. For the left-hand side of (5.2) let 0 < r < R, 0 < s < t and d > 0. Then for µ ∈ M(E) and x ∈ E, splitting the integral and using Hölder's inequality, If C t R (E) ≥ R −d for some R then by Lemma 2.1 there is a measure µ ∈ M(E) such that the right-hand side of this inequality, and thus the left-hand side, is at most 2 for µ-almost all x, so C s r (E) ≥ 1 2 r −d/((1+(1/s−1/t)d) for the corresponding r. Letting R ց 0, it follows that where d(·) is either the lower or upper dimension profile, which rearranges to (5.3). Examples show that the inequalities (5.3) give a complete characterisation of the dimension profiles that can be attained, see [8,Section 6]. 
Setting s = m and t = n in inequalities (5.2) gives (1.2) along with similar inequalities for box dimensions, bounding the dimension profiles of E, and thus the typical dimensions of its projections, in terms of the dimension of E itself.
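The capacity-based definition lends itself to a simple numerical illustration. The sketch below is not from the paper: it samples a set E, estimates C^s_r(E) with the kernel phi^s_r(x) = min{1, (r/|x|)^s} (consistent with the truncated definition of (2.1)) using the uniform measure on the sample in place of the true equilibrium measure (which only bounds the capacity from below), and reads off the slope of log C^s_r(E) against -log r as an approximation to the s-box dimension profile. For points on a circle the estimate should come out close to min{s, 1}.

```python
# Numerical sketch: estimate the s-box dimension profile of a sampled set via
# capacities with the kernel min(1, (r/|x|)^s), using the uniform measure on
# the sample as a stand-in for the equilibrium measure (a lower bound on C).
import numpy as np

def capacity_estimate(points: np.ndarray, s: float, r: float) -> float:
    """Lower bound on C^s_r(E) from the uniform measure on the sampled points."""
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    kernel = np.minimum(1.0, (r / np.where(dist > 0, dist, np.inf)) ** s)
    np.fill_diagonal(kernel, 1.0)        # phi^s_r(0) = 1
    energy = kernel.mean()               # double integral against the uniform measure
    return 1.0 / energy

def profile_estimate(points: np.ndarray, s: float, radii: np.ndarray) -> float:
    """Slope of log C^s_r(E) versus -log r over the given radii."""
    logs = [np.log(capacity_estimate(points, s, r)) for r in radii]
    slope, _ = np.polyfit(-np.log(radii), logs, 1)
    return slope

if __name__ == "__main__":
    # Example set: 2000 points on the unit circle in R^2 (box dimension 1).
    t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
    circle = np.column_stack([np.cos(t), np.sin(t)])
    radii = np.array([2.0 ** -k for k in range(3, 8)])   # keep r above the sample spacing
    for s in (0.5, 1.0, 2.0):
        print(f"s = {s}: estimated dim^s_B ~ {profile_estimate(circle, s, radii):.2f}")
```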
7,905.8
2019-01-30T00:00:00.000
[ "Mathematics" ]
An Iterative Neighborhood Local Search Algorithm for Capacitated Centered Clustering Problem The Capacitated Centered Clustering Problem (CCCP) is NP-hard and has many practical applications. In recent years, many excellent CCCP solving algorithms have been proposed, but their ability to search the neighborhood space of clusters is still insufficient. Based on the adaptive Biased Random-Key Genetic Algorithm (A-BRKGA), this paper proposes an efficient iterative neighborhood search algorithm, A-BRKGA_INLS. The algorithm uses shift and swap heuristics to search the neighborhood space iteratively and enhance the quality of solutions. The computational experiments were conducted on 53 instances. A-BRKGA_INLS improves the best-known solutions in 23 instances and matches the best-known solutions in 15 instances. Moreover, it achieves better average solutions on multiple instances while spending the same time as A-BRKGA+CS. I. INTRODUCTION Clustering problems arise in many research fields, such as machine learning, pattern recognition, community detection, image segmentation, genetics, microbiology, geology, remote sensing, etc. [1] [2]. This paper studies the capacity-constrained variant of the clustering problem, which is an abstraction of the location selection problem and an important decision-making problem. Scientific and reasonable location selection can effectively save resources, reduce costs and ensure high-quality service. The location selection problem has a wide range of applications in production, logistics, and daily life. One of the most famous facility-location problems is the p-Median Problem [3]. This problem is defined as follows: given n points, p medians are selected among them, and the points are assigned to their nearest median so that the total distance between each point and its nearest median is minimized. Another classic location problem is the Capacitated p-Median Problem (CPMP) [4], which has different applications in many practical situations. It can be described as follows: given n points, each with a known demand, find p medians among the points and assign each point to one median so that the total distance from the demand points to their corresponding medians is minimized, while the sum of the demands of all points assigned to a median cannot exceed its capacity. Due to capacity constraints, some points may not be assigned to the nearest median. The Capacitated Centered Clustering Problem (CCCP) [5] studied in this paper is a generalized CPMP, which divides demand points into clusters with limited capacity. The goal is to minimize the total distance between each point and the geometric center of its cluster. CCCP has been applied to many fields, such as the location design of garbage collection areas and sales centers [5], the network design of agricultural product supply chains [6], the site selection of offshore wind farms [7], and the sibling reconstruction problem (SRP) in computational biology [8]. The main difference between CPMP and CCCP lies in the location of the centers. In the CPMP, the center is located at a median point, while in the CCCP, the center is located at a centroid. In CPMP, the distance from a median to a point can be obtained directly from the initially constructed distance matrix. The difficulty of CCCP is that the distance from a cluster center to a point changes constantly as assignments change, so the distance calculations consume more time.
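To make this difference concrete, the following minimal sketch (illustrative only, with assumed data structures, not the authors' implementation) evaluates a CCCP solution: each cluster's geometric center, taken here as the unweighted mean of its members' coordinates, must be recomputed from the current assignment before the point-to-center distances can be summed, and the capacity constraint is checked at the same time.

```python
import numpy as np

def evaluate_cccp(points, demands, assignment, capacities):
    """Objective of a CCCP solution: total distance from each point to the
    geometric center of its cluster. Returns (objective, feasible)."""
    total, feasible = 0.0, True
    for j in range(len(capacities)):
        members = np.where(assignment == j)[0]
        if members.size == 0:
            continue
        if demands[members].sum() > capacities[j]:
            feasible = False
        center = points[members].mean(axis=0)   # the center moves with the assignment
        total += np.linalg.norm(points[members] - center, axis=1).sum()
    return total, feasible

if __name__ == "__main__":
    # Toy usage: 9 random points, 3 clusters, capacity 4, unit demands.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 40, size=(9, 2))
    f, ok = evaluate_cccp(pts, np.ones(9), rng.integers(0, 3, size=9), [4, 4, 4])
    print(f"objective = {f:.1f}, feasible = {ok}")
```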
Since CPMP and CCCP are similar, researchers often propose solutions to CPMP and apply them to CCCP after slight changes. In recent years, many heuristic and meta-heuristic solutions for CPMP and CCCP have been proposed. Stefanello et al. [9] combined meta-heuristics based on local search with mathematical programming techniques; this method applied the Iterated Reduction Matheuristic Algorithm (IRMA) to eliminate from the model variables that are unlikely to appear in good or optimal solutions, yielding a simplified mathematical model. Baumann [10] presented an extended version of K-Means, which uses binary linear programming to assign points to clusters. In a recent paper on CPMP [11], a decomposition strategy is proposed for solving large-scale instances. In the local optimization stage, a new IRMA reduction method is used, in which priority is given to sub-problems with great potential to improve the objective function value. The cluster with the largest unused capacity is selected as the initial cluster. This matheuristic was extended to CCCP, and new best-known solutions were found for several CCCP instances. Mai et al. [12] used a Gaussian mixture modelling method to construct solutions of CPMP and proposed an improved heuristic. The improved heuristic uses a best-improvement search mechanism to shift or swap points between different clusters. Meta-heuristic approaches can be divided into three categories: meta-heuristics with exact approaches, meta-heuristics with other meta-heuristic components, and meta-heuristics with local search heuristics. Jánošíková et al. [13], considering the efficiency of integer programming solvers, combined a genetic algorithm with integer programming and proposed two variants. The integer programming solver is used to generate elite individuals during the genetic algorithm or as a post-processing technique to improve the best CPMP solution. Chaves and Lorena [14] combined a simulated annealing algorithm with Clustering Search (CS) [15], proposed by Oliveira et al., to solve the CCCP. Chaves and Lorena [16] first used a genetic algorithm to generate solutions and then enhanced the quality of the solutions with CS. Muritib [17] proposed a random best-fit construction method and a local search heuristic based on Tabu Search (TS). Considering that CS is the most computationally demanding procedure, Melo Morales et al. [18] parallelized the local search component CS, using a genetic algorithm as the solution generator, to solve the CCCP. Caballero Morales [19] proposed a genetic algorithm that combines the Greedy Randomized Adaptive Search Procedure (GRASP) and the K-Means clustering algorithm. Recently, Chaves et al. [20] proposed an adaptive Biased Random-Key Genetic Algorithm (A-BRKGA) by improving the Biased Random-Key Genetic Algorithm (BRKGA) [21] and provided a local search component, CS, to intensify the exploitation of CCCP solutions. This method found new best solutions for seven classic instances and reported the best solutions for other new instances. Most research on CCCP focuses on improving the evolution process of the meta-heuristic; for example, [20] improves the meta-heuristic framework by adding parameter control. References [14], [16] and [17] respectively combine simulated annealing, a genetic algorithm and tabu search with local search. Reference [18] proposes a parallel local search component.
Research on local search tailored to a specific CCCP solution remains relatively shallow; for example, [17] and [20] both use simple search methods. A targeted local search can effectively improve the quality of solutions. Our method focuses on the design of local search schemes. The variable neighborhood search algorithm (VNS) is an improved local search algorithm that uses multiple neighborhood structures, defined by different functions, to perform alternating searches, achieving a good balance between intensification and diversification of the search. Variable neighborhood search is based on the following facts: (1) a local optimum for one neighborhood structure is not necessarily a local optimum for another neighborhood structure; (2) the global optimum is a local optimum for all possible neighborhood structures. VNS algorithms mainly depend on the neighborhood structures, the search mechanism, and the neighborhood movement strategy. Usually, the search order over the neighborhood structures is fixed. Common search mechanisms include the first-improvement and best-improvement strategies. Once the first-improvement strategy detects an improved solution, it is accepted as the new incumbent solution. The best-improvement strategy selects the best of all improved solutions as the new solution. Neighborhood movement strategies mainly include returning to the first neighborhood, searching in the same neighborhood, and searching the next neighborhood. Reference [22] reported the impact of different VNS configurations on solution quality when VNS is used as a local search. Variable neighborhood search performs well on travelling salesman problems, location problems, vehicle routing problems, etc. Researchers have also proposed a variety of improved variable neighborhood algorithms, such as general variable neighborhood search (GVNS) and skewed general variable neighborhood search (SGVNS) [23]. In addition, related work such as [24] and [25] adopts particular strategies and mechanisms to retain and handle infeasible solutions in order to explore the infeasible space. Exploring a larger solution space and increasing the diversity of solutions improves the chance of finding a better solution. The initialization and evolution rules of the population in A-BRKGA [20] can produce many solutions with different structures, so the diversity of solutions is ensured. Through experimental analysis, we found that some instances in the data sets of [20] have many clusters with few points, and the assignments of points at the edges of these clusters are highly variable. The local search component (CS) explores the neighborhood space between clusters insufficiently. Based on A-BRKGA [20], this paper proposes an iterative neighborhood search algorithm called A-BRKGA_INLS to enhance the exploration of neighborhoods. The main innovations of this paper include: (1) The algorithm searches the shift neighborhood and the swap neighborhood separately, instead of searching for feasible shifts and swaps simultaneously. The iterative variable neighborhood search better balances the intensification of the local search and the diversification provided by the genetic algorithm. Starting from the best solution of the shift neighborhood, the swap neighborhood is then examined. The shift neighborhood has more room for adjustment and can steer the solution in a better direction. (2) The algorithm combines an imprecise search and a precise search.
The imprecise search used during the evolution process imposes stricter conditions for moving a point. It skips moves that would bring only a marginal improvement in the objective function, which avoids fixing the cluster allocation too early and losing exploration opportunities, thereby effectively avoiding premature convergence to a local optimum. The precise search checks the shifts and swaps ignored by the imprecise search, which further improves the quality of solutions. The experimental results on 53 instances show that the proposed method performs well. Compared with A-BRKGA+CS, A-BRKGA_INLS can find better solutions on multiple instances in approximately equal time. The remainder of the paper is organized as follows. In Section II, a formal description of CCCP is given, and the basic idea of BRKGA is introduced. Then, based on BRKGA, the application of A-BRKGA to CCCP is described. In Section III, the algorithm A-BRKGA_INLS for CCCP is given. In Section IV, we introduce the experimental dataset. In Section V, we report the results of the computational analysis. Finally, in Section VI, the paper is summarized and future research directions are discussed. A. MATHEMATICAL MODEL OF CCCP CCCP can be formally expressed as an optimization problem, as shown in Equation (1): minimize the sum over all points i in N and clusters j in M of x_ij * ||p_i - c_j||, subject to sum_j x_ij = 1 for all i in N (2), sum_i q_i x_ij <= Q_j for all j in M (3), and x_ij in {0, 1} for all i, j. Here N is the set of demand points; M is the set of cluster centers; p_i is the coordinate of point i; c_j is the coordinate of the geometric center of cluster j; x_ij = 1 if point i is assigned to cluster j, and x_ij = 0 otherwise; q_i is the demand of point i; and Q_j is the capacity of cluster j. The objective function (1) minimizes the total distance between each cluster center and its assigned points. Constraints (2) require that each point is assigned to exactly one cluster. Constraints (3) impose that the sum of the demands of all points in one cluster does not exceed the cluster capacity. The last constraints define the binary decision variables. B. BIASED RANDOM-KEY GENETIC ALGORITHM BRKGA is a general search meta-heuristic proposed by Goncalves and Resende [21], which is based on the Random-Key Genetic Algorithm (RKGA) introduced by Bean [26]. Recently, BRKGA has been applied to many combinatorial optimization problems, such as the Permutation Flow-shop Scheduling Problem [27], the Two-stage Capacitated Facility Location Problem [28], the Network Hubs Location Problem [29], and the Vehicle Routing Problem [30]. In BRKGA, each gene on the chromosome is a decimal number in the interval [0, 1]; the number is called a random-key. N random-keys form a vector that represents a chromosome. The random-keys vector needs to be converted into a solution of the combinatorial optimization problem by a specific decoder in order to calculate its fitness. The initial population of BRKGA consists of random-keys vectors (individuals). All random-keys in each vector are generated independently and randomly. In each generation of population evolution, first, the fitness values of the newly created random-keys vectors are calculated by the decoder. Then, the population is divided into an elite group and a non-elite group by fitness. Next, the individuals of the next generation are generated, as shown in Figure 1. (1) Elite individuals are copied directly into the next generation. (2) A small number of mutants are added to the next generation. Mutation in BRKGA differs from that in conventional genetic algorithms: these mutants are generated in the same way as the individuals of the initial population.
(3) To complete a population of size p, the remaining individuals (those that are neither elite copies nor mutants) are produced by parameterized uniform crossover [31]: for each offspring, two parents are selected at random, one from the elite group and one from the non-elite group. BRKGA is thus an evolutionary algorithm that performs multiple iterations on random-key vectors.

C. ADAPTIVE BIASED RANDOM-KEY GENETIC ALGORITHM FOR SOLVING CCCP

Chaves et al. [20] proposed A-BRKGA based on BRKGA and designed a special decoder for CCCP. This paper applies A-BRKGA to evolve the population and uses the same CCCP decoder. We briefly introduce the encoding and decoding of CCCP instances and the differences between BRKGA and A-BRKGA. For each CCCP instance, the points are numbered from 1 to n (the number of points in the instance), so that each point can be indexed by its serial number. A random key in the interval [0, 1] is generated for every point, and two additional random keys are generated for positions n+1 and n+2; the code length for CCCP is therefore n+2. The CCCP decoder is based on the best-fit construction method [17] (a code sketch of this construction is given below). The last two random keys, which relate to the perturbation probability and the crossover probability, do not participate in decoding. First, the first n random keys are sorted in descending order while the last two remain unchanged. Then the first m demand points of the sorted list are placed into the m clusters as their initial centers, and the remaining n − m demand points are assigned according to the best-fit principle, i.e. each point joins the closest cluster among those that still satisfy the capacity constraint. The cluster center and the occupied capacity are updated whenever a new point is added. When all points have been placed, a feasible solution is formed, and the decoder calculates its fitness using the objective function (Equation (1)).

Compared with BRKGA, A-BRKGA adds a perturbation strategy for elite individuals as well as deterministic and adaptive parameter-update strategies. The perturbation strategy perturbs elite individuals that share the same fitness; it increases the diversity of the elite group and helps avoid premature convergence to a local optimum. The deterministic update controls the proportions of elite and mutant individuals in the population: as the number of generations increases, the proportion of elite individuals grows while the proportion of mutant individuals shrinks. A-BRKGA inserts the two random keys at positions n+1 and n+2 of the vector to make the remaining parameters self-adaptive: the perturbation probability β and the parameterized uniform crossover probability are updated from these two keys. Both updates are non-deterministic, and each perturbed chromosome carries its own perturbation probability. This adaptive update allows gene sequences to be changed to different degrees, which increases the chance of producing solutions with different structures.

III. THE PROPOSED ALGORITHM A-BRKGA_INLS FOR CCCP

In this section, we present A-BRKGA_INLS in detail. We first give the general framework of the algorithm, showing how the two local search procedures are combined with A-BRKGA. Then the specific process of the imprecise Iterative Neighborhood Local Search algorithm (IINLS) is explained.
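To make the decoding step concrete, the sketch below implements the centroid-based objective (1) and the best-fit decoder described above. It is a minimal Python illustration, not the authors' code: the function names are ours, a single capacity value is assumed for all clusters, and ties or infeasible placements are handled in the simplest possible way.

```python
import math

def centroids(points, members):
    """Geometric center of every cluster; members[j] lists the point indices of cluster j."""
    cents = []
    for ids in members:
        cents.append((sum(points[i][0] for i in ids) / len(ids),
                      sum(points[i][1] for i in ids) / len(ids)))
    return cents

def total_distance(points, members):
    """Objective (1): total Euclidean distance from every point to its cluster center."""
    f = 0.0
    for ids, (cx, cy) in zip(members, centroids(points, members)):
        f += sum(math.hypot(points[i][0] - cx, points[i][1] - cy) for i in ids)
    return f

def decode(keys, points, demands, m, capacity):
    """Best-fit decoder: sort the first n keys in descending order, seed the m clusters
    with the first m points of that order, then give every remaining point to the
    nearest cluster that still respects the capacity, updating the center each time."""
    n = len(points)
    order = sorted(range(n), key=lambda i: keys[i], reverse=True)
    members = [[i] for i in order[:m]]
    load = [demands[i] for i in order[:m]]
    cents = [list(points[i]) for i in order[:m]]
    for i in order[m:]:
        feasible = [j for j in range(m) if load[j] + demands[i] <= capacity]
        if not feasible:
            raise ValueError("no cluster can accommodate point %d" % i)
        j = min(feasible, key=lambda c: math.hypot(points[i][0] - cents[c][0],
                                                   points[i][1] - cents[c][1]))
        members[j].append(i)
        load[j] += demands[i]
        cents[j][0] = sum(points[k][0] for k in members[j]) / len(members[j])
        cents[j][1] = sum(points[k][1] for k in members[j]) / len(members[j])
    return members, total_distance(points, members)
```

Given the first n keys of a chromosome and the instance data, `decode` returns the cluster membership together with the fitness value the genetic algorithm would use.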
Finally, the precise Iterative Neighborhood Local Search algorithm (PINLS) is introduced, and the difference between IINLS and PINLS is discussed.

Algorithm 1: A-BRKGA_INLS (A-BRKGA with iterative neighborhood local search).
Input: the number of generations max_gen, the population size p, the search interval K, and an empty solution S* whose objective function value is set to infinity.
Output: the local optimal solution S**.
(1) Randomly generate the initial population Pop with p vectors;
(2) for i = 1 to max_gen do ...

We use A-BRKGA to evolve the population. During the evolution, the elite individuals are clustered at certain generations, and an Iterative Neighborhood Local Search (IINLS) is performed on the individual with the highest fitness in each cluster. Another Iterative Neighborhood Local Search (PINLS) is performed once the population evolution has finished, to further improve the search quality. PINLS has the same framework as IINLS, but the criterion for accepting or rejecting a move differs. The main steps of A-BRKGA_INLS are listed in Algorithm 1. First, the initial population of size p is generated randomly (line 1). In each generation, the population is then updated by copying elite individuals, perturbing similar elite individuals, mutation, and crossover (line 3). Every K generations, the elite individuals are clustered by Label Propagation (line 5), and IINLS is applied to the individual with the highest fitness in each cluster to improve its quality (line 7; see Section III, Part B). The current best solution is stored in S* (line 8). When the maximum number of generations (max_gen) is reached, the population evolution stops, and the precise local search PINLS is applied to obtain the final best solution S** (line 12; see Section III, Part C). The algorithm then terminates.

Compared with A-BRKGA+CS [20], A-BRKGA_INLS differs in two respects: (1) CS searches for feasible shifts and swaps simultaneously, whereas IINLS searches the shift neighborhood and the swap neighborhood separately in order to explore the neighborhood space of a solution more thoroughly; (2) once the evolution is completed, A-BRKGA+CS performs no further search, while A-BRKGA_INLS applies PINLS to improve the quality of the individuals further.
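The control flow of Algorithm 1 can be summarized in a short skeleton. Since only the control flow is fixed by the description above, every problem-specific step (evolution, elite clustering, the two local searches, the objective) is injected as a callable, and all names are ours rather than the authors'.

```python
def a_brkga_inls(init_population, evolve, cluster_elite_leaders,
                 iinls, pinls, objective, p, max_gen, K):
    """Skeleton of Algorithm 1: evolve with A-BRKGA, run IINLS on the best
    individual of each elite cluster every K generations, keep the incumbent
    S*, and finish with one precise local search (PINLS)."""
    population = init_population(p)              # line 1: p random-key vectors
    best = None                                  # S*, objective treated as +infinity
    for gen in range(1, max_gen + 1):            # line 2
        population = evolve(population)          # copy elites, perturb, mutate, cross over
        if gen % K == 0:                         # only every K generations
            for leader in cluster_elite_leaders(population):
                candidate = iinls(leader)        # imprecise iterative neighborhood search
                if best is None or objective(candidate) < objective(best):
                    best = candidate             # update the incumbent S*
    return pinls(best)                           # precise search yields S**
```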
Figure 2 shows a simple example of the scenario our algorithm targets; lines connect points that belong to the same cluster. Nine points must be divided into 3 clusters under a capacity constraint of 4. Table I lists the coordinates and demands of the points, and f denotes the value of the objective function. Starting from the initial solution in Figure 2(a), we solve the example with the precise iterative neighborhood search (see (b) and (c) for the process) and with A-BRKGA_INLS combined with the imprecise iterative neighborhood search (see (d), (e), and (f) for the process). The initial solution has f = 66.3.

TABLE I. Coordinates and demands of the example points.
Point | x | y | Demand
1 | 8 | 20 | 1
2 | 10 | 10 | 1
3 | 11 | 36 | 1
4 | 13 | 21 | 1
5 | 16 | 27 | 1
6 | 19 | 32 | 1
7 | 20 | 14 | 1
8 | 28 | 29 | 1
9 | 32 | 13 | 1

Perform the precise iterative neighborhood search first. The shift neighborhood of the current solution is searched in increasing order of cluster labels. Traversing cluster 1, point 2 is moved to cluster 2 and f is reduced by 0.37; Figure 2(b) shows the updated solution. Continuing the shift-neighborhood search on Figure 2(b) and traversing cluster 1, point 1 is moved to cluster 2 and f is reduced by 0.55; the solution is updated as shown in Figure 2(c). Searching clusters 1, 2, and 3 of Figure 2(c) yields no better shift-neighborhood solution, so the shift neighborhood search ends. Because the clusters do not intersect, the swap neighborhood is not searched, and the current solution reaches a local optimum with f = 62.4.

If instead an imprecise iterative neighborhood search is performed, these first two moves are not carried out, because their evaluation gives Δf < 0 according to Equation (5). The search continues until cluster 3 is traversed, where point 4 of cluster 3 is moved to cluster 1 with Δf = 5.18. Searching the shift neighborhood of Figure 2(d) then moves point 3 of cluster 1 to cluster 3 with Δf = 2.24, as shown in Figure 2(e). The imprecise search reaches a local optimum with f = 52.2. In other words, for the case of Figure 2(a) the precise iterative neighborhood search yields only a slight improvement: due to capacity constraints or distance relationships, the subsequent moves that would lead to a more significant improvement cannot be performed. The imprecise search, by contrast, can skip over the moves that bring only a slight improvement, which benefits the final solution.

FIGURE 2. Example of neighborhood search in A-BRKGA_INLS.

A-BRKGA_INLS is therefore suited to situations in which a precise search chasing slight improvements tends to fall into a local optimum early. In A-BRKGA_INLS, the imprecise search is used to improve the solution substantially, and the precise search is then used to refine a solution that is already good. Through this combination, A-BRKGA_INLS explores the neighborhood space more fully as a whole. The following subsections detail the neighborhood iteration process of A-BRKGA_INLS.

B. IMPRECISE ITERATIVE NEIGHBORHOOD LOCAL SEARCH ALGORITHM (IINLS)

This paper uses the Label Propagation algorithm [32] to cluster the elite individuals, with the aim of grouping highly similar elite individuals in the same cluster. Only the individual with the highest fitness in each cluster is selected for local search, which speeds up the evolution process (individuals that have already been searched are not considered). Label Propagation uses the Pearson correlation coefficient [33] to measure whether two random-key vectors are likely to decode into similar solutions. Applying the clustering search in every generation usually does not lead to better solutions; on the contrary, it reduces the number of generations that can be completed and degrades performance. Therefore, the clustering search is performed only every K generations.

A point of cluster A may be transferred to cluster B when the capacity of cluster B is not less than its occupied demand plus the demand of the transferred point. This move is called a shift. A solution obtained by applying a shift to solution S is called a shift-neighborhood solution of S, and the shift neighborhood N1(S) comprises all such solutions. In an instance with n demand points and m clusters, each demand point can be shifted to the other m − 1 clusters, so the number of neighborhood solutions contained in N1(S) does not exceed n × m. A point of cluster A may also be transferred to cluster B while another point of cluster B is transferred to cluster A, provided the capacity of each of the two clusters is not less than its occupied demand plus the demand of the point swapped in minus the demand of the point swapped out. This move is called a swap. A solution obtained by applying a swap to S is called a swap-neighborhood solution of S, and the swap neighborhood N2(S) comprises all such solutions. A point can be swapped with any other point as long as the two points are not in the same cluster; therefore, the number of neighborhood solutions contained in N2(S) does not exceed n².
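Written as code, the capacity conditions for the two move types are compact. This is a self-contained Python check with our own argument names; it tests only feasibility, not whether a move is worthwhile.

```python
def shift_feasible(demand_u, load_to, capacity_to):
    """Point u may move into the target cluster only if its occupied demand
    plus u's demand does not exceed the target capacity."""
    return load_to + demand_u <= capacity_to

def swap_feasible(demand_u, demand_v, load_a, load_b, cap_a, cap_b):
    """u leaves cluster A and v leaves cluster B; each cluster must still fit
    the point swapped in once the point swapped out is removed."""
    return (load_a - demand_u + demand_v <= cap_a and
            load_b - demand_v + demand_u <= cap_b)
```

With n points and m clusters this yields at most n·(m − 1) candidate shifts and fewer than n² candidate swaps, consistent with the bounds given above.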
This paper proposes a new iterative neighborhood local search method, IINLS, which combines the shift neighborhood and the swap neighborhood. The specific search processes for N1 and N2 are described in Algorithm 2 and Algorithm 3. In these algorithms, whether a shift or swap is performed is not determined by the precise increment of the objective function. For swaps, we use the evaluation function of [17] (Equation (4)), in which u and v are the points to be swapped and the function involves both their original cluster centers and the new cluster centers obtained after the swap. For shifts, we only make a simple judgment (Equation (5)). An auxiliary data structure records the geometric centers of all clusters so that Δf can be calculated in O(1). A swap or shift is performed whenever Δf > 0 is satisfied.

The swap neighborhood is larger than the shift neighborhood. To reduce the search scope of the swap neighborhood, we only check points in the overlapping areas of clusters. Figure 3 describes an example with three clusters in which clusters A and C do not overlap, so swaps between these two clusters are not considered. Auxiliary data structures store the boundary coordinates of clusters A, B, and C, from which the minimum coordinate P1 (min_x, min_y) and the maximum coordinate P2 (max_x, max_y) of the overlapping rectangular area can be calculated. For instances with many points in a single cluster, checking only the points in the overlapping area speeds up the search of the swap neighborhood.

The shift neighborhood is searched first; when no improving shift remains, the swap neighborhood is searched until the current solution has no improved solution in N2(S). If the solution obtained from the shift neighborhood is further improved in the swap neighborhood, the search returns to the shift neighborhood. The stop criterion is that neither the shift neighborhood nor the swap neighborhood of the current solution contains a feasible improving move; at that point the solution has reached a local optimum, and its fitness is calculated (lines 8, 9). The algorithm uses an array 'Changed' to mark which clusters have been modified, so that the overlapping areas can be updated, and a Boolean flag records whether the solution has been improved. The search process does not change the random-key vector, so IINLS does not affect the evolution of the individual in A-BRKGA. Because IINLS uses the two evaluation functions to check feasible shifts and swaps, its judgment of moves may be imprecise; this search process is therefore called the imprecise search. The imprecise search tightens the conditions for accepting a move, which prevents significant gains from being missed because of small ones, explores better neighborhood solutions further, and effectively keeps the evolution from falling into a local optimum prematurely.

(1) … ; (2) for j ← 1 to m do  /* m is the number of clusters to be partitioned. */
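Since Equations (4) and (5) themselves did not survive extraction, the evaluation step is sketched here under stated assumptions: the shift test follows the reading used later with Figure 4 (compare the point's distance to its old centroid and to the target centroid), and the swap test compares the old centroids with the O(1)-updated centroids, which is one plausible form of the evaluation function of [17]. The bounding-box overlap used to prune swap candidates (Figure 3) is also shown.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def shift_delta(u, c_from, c_to):
    """Imprecise shift test (our reading of Eq. (5)): positive when point u is
    closer to the target centroid than to its current one."""
    return dist(u, c_from) - dist(u, c_to)

def swap_delta(u, v, c_a, c_b, n_a, n_b):
    """Imprecise swap test (an assumed form of Eq. (4)): the centroids are
    updated in O(1) from the running coordinate sums kept per cluster."""
    new_a = ((n_a * c_a[0] - u[0] + v[0]) / n_a, (n_a * c_a[1] - u[1] + v[1]) / n_a)
    new_b = ((n_b * c_b[0] - v[0] + u[0]) / n_b, (n_b * c_b[1] - v[1] + u[1]) / n_b)
    return (dist(u, c_a) + dist(v, c_b)) - (dist(v, new_a) + dist(u, new_b))

def overlap_box(box_a, box_b):
    """Overlapping rectangle of two cluster bounding boxes, each given as
    (min_x, min_y, max_x, max_y); returns None when the clusters do not
    overlap, in which case swaps between them are skipped."""
    p1 = (max(box_a[0], box_b[0]), max(box_a[1], box_b[1]))
    p2 = (min(box_a[2], box_b[2]), min(box_a[3], box_b[3]))
    return (p1, p2) if p1[0] <= p2[0] and p1[1] <= p2[1] else None
```

In IINLS a shift or swap is applied whenever the corresponding delta is positive; shifts are exhausted first, and any accepted swap sends the search back to the shift neighborhood, exactly as the stop criterion above describes.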
C. PRECISE ITERATIVE NEIGHBORHOOD LOCAL SEARCH ALGORITHM (PINLS)

At the end of the population evolution, a precise local search is executed on the current best solution (PINLS, Algorithm 1 line 12). In PINLS, the objective function increment is calculated exactly to decide whether a shift or swap should be performed; this search process is called the precise search. Because each cluster contains only a few points, a single shift or swap causes a significant change in the cluster center, whereas the evaluation functions consider only the point being moved and ignore the influence on the remaining points of the two clusters. Figure 4 shows an example in which the movement degrades the quality of the solution even though the condition Δf > 0 is satisfied; it illustrates the importance of calculating the objective function increment precisely in this last step. As shown in Figure 4(a), the light gray points belong to cluster 1 and the dark gray points to cluster 2. Before shifting, the centers of clusters 1 and 2 are c1 and c2 (triangle markers), respectively. The distance from point P to c1 is greater than its distance to c2, which satisfies the condition Δf > 0 (Algorithm 2, line 8). If the imprecise shift is performed, point P is moved to cluster 2 and the centers of clusters 1 and 2 become c1′ and c2′, as shown in Figure 4(b). As a result, the sum of distances within the two clusters increases, and so does the objective function. If the objective function increment is calculated exactly, point P is not shifted to cluster 2; the precise calculation thus avoids degrading the quality of the solution. PINLS keeps the same process as IINLS but calculates the objective function increment caused by swaps and shifts instead of using the evaluation functions (in Algorithm 2, lines 6-7 and Algorithm 3, lines 7-8, the estimated Δf is not used). This step detects feasible shifts and swaps that the imprecise search ignored because of its stricter move conditions, improving the quality of solutions at a low time cost. The solution is guaranteed to improve or remain unchanged, because PINLS computes the objective function exactly.
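PINLS replaces those estimates with the exact change of objective (1). A minimal way to do that, again with our own names and data layout, is to recompute the two affected centroids and re-price every member of both clusters:

```python
import math

def exact_shift_delta(points, members_from, members_to, u):
    """Exact objective change if point u leaves one cluster and joins the other.
    A negative value means the shift improves the solution."""
    def cluster_cost(ids):
        if not ids:
            return 0.0
        cx = sum(points[i][0] for i in ids) / len(ids)
        cy = sum(points[i][1] for i in ids) / len(ids)
        return sum(math.hypot(points[i][0] - cx, points[i][1] - cy) for i in ids)

    before = cluster_cost(members_from) + cluster_cost(members_to)
    after = (cluster_cost([i for i in members_from if i != u])
             + cluster_cost(members_to + [u]))
    return after - before
```

The point P of Figure 4 illustrates why this matters: the imprecise test accepts the move, while the exact delta above would come out positive and reject it.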
IV. EXPERIMENTAL DATA SETS

A-BRKGA_INLS is coded in C++ and tested on an Intel(R) Xeon(R) Platinum 8269CY 2.5 GHz processor with 8 GB of RAM. The parameter setting of A-BRKGA is the same as in [20]. We use the nine CCCP data sets of [20], comprising 53 instances, to compare the proposed method with A-BRKGA+CS [20]. Table II covers three data sets named TA, doni [5], and sjc [34] (20 instances). Table III covers the data set SJC-n proposed by Pereira, Lorena, and Senne [35] for the Maximum Coverage Location Problem (8 instances) and five data sets (lin318-m, u724-m, rl1304-m, pr2392-m, and fnl4461-m) for the CPMP generated by Stefanello, de Araújo, and Müller [36] (25 instances). All of these instances are available at https://sites.google.com/site/antoniochaves/publications/data. We do not report results on the x-n-m data set [36]: it is based on the Capacitated Vehicle Routing Problem and its upper limit on cluster capacity is tight, so in our experiments many infeasible solutions violating the capacity constraints were generated, and iterating a large-scale population takes too much time; this data set was therefore abandoned. Tables II and III show the essential characteristics of the instances: the number of points (n), the number of clusters (m), the cluster capacity (Q), the average point demand, and the standard deviation of the point demand. In Table III, each row represents several instances with the same number of points, and the number of clusters in column m corresponds to the cluster capacity in column Q. In the SJC-n data set, a few points have demands exceeding the cluster capacity; these demand points are deleted and the number of clusters of the instance is reduced accordingly.

V. COMPARISON OF EXPERIMENTAL RESULTS

First, the experimental results of A-BRKGA_INLS are compared with those of A-BRKGA+CS from the recent literature [20]. Then, comparative experiments are carried out from two perspectives to illustrate the advantages of our algorithm: the combination of imprecise and precise search, and the iterative neighborhood search. Tables IV and V show the results of A-BRKGA+CS and A-BRKGA_INLS. The computational tests limit the running time to 1000 s, and each instance was run 20 consecutive times with different random seeds. The entries in the tables are the best-known solution (best-known), the best solution found (f*), the average solution over the 20 runs (f_avg), the average running time to find the best solution (t*), the average running time per instance (t) in seconds, the difference between the best solution and the best-known solution (gap), the deviation between the best solution and the average solution (dev), and the difference between the best solutions of the two algorithms (imp). The value of gap represents how far the current algorithm is from the best-known solution, and dev reflects the stability of the algorithm. Solutions better than the best-known solutions are marked in bold, and the data of each entry are averaged in the last row to compare overall performance.

A. ALGORITHM COMPARISON

In Table IV, A-BRKGA_INLS and A-BRKGA+CS show the same ability to obtain the best solutions (imp = 0), and the average gap between the best solutions and the best-known solutions is 0.5% for both algorithms (gap = 0.5). Our method obtains equal or better average solutions in 17 of the 20 instances, and compared with A-BRKGA+CS (dev = 0.29) it provides better robustness (dev = 0.20). In Table V, A-BRKGA_INLS improves 23 best-known solutions and matches four of them. The quality of the A-BRKGA_INLS solutions is 2.16% higher than that of the A-BRKGA+CS solutions (imp = 2.16), and the gap with respect to the best-known solutions decreases from gap = 1.36 to gap = −0.75. On the data sets pr2392 and fnl4461 with thousands of points in particular, the quality of our solutions improves significantly, by more than 20% at most. Moreover, our method obtains equal or better average solutions in 27 instances, and in Table V A-BRKGA_INLS (dev = 1.06) shows stronger robustness than A-BRKGA+CS (dev = 1.12). We also apply the Wilcoxon signed-rank test (WSR) to analyze whether a significant difference exists between the solutions of A-BRKGA_INLS and A-BRKGA+CS. The WSR yields a p-value of 0.001, indicating that there is a significant difference between A-BRKGA_INLS and A-BRKGA+CS and that INLS provides better solutions than CS. Tables IV and V also show that A-BRKGA_INLS has no significant increase in running time compared with A-BRKGA+CS. In summary, A-BRKGA_INLS provides stronger robustness and better results than A-BRKGA+CS; it scans the neighborhood better and therefore produces high-quality solutions. We also compare A-BRKGA_INLS with plain A-BRKGA to show the contribution of the local search component INLS; the results are listed in Table VI.
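The table statistics and the significance test can be reproduced in a few lines. The percentage formulas below are one common definition, since the paper's exact expressions are not reproduced in the extracted text, and `scipy.stats.wilcoxon` implements the paired signed-rank test used here.

```python
from scipy.stats import wilcoxon

def gap(best, best_known):
    """Percentage gap of the best found solution to the best-known value."""
    return 100.0 * (best - best_known) / best_known

def dev(best, average):
    """Spread between the best and the average solution over the runs."""
    return 100.0 * (average - best) / best

# inls_best and cs_best would be equal-length lists of per-instance best values.
# statistic, p_value = wilcoxon(inls_best, cs_best)
```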
Random seeds affect the solutions reached by the evolutionary process. To study the effect of the random seed on the best solution, we increased the number of consecutive runs to 50 and recorded the current best solution every five runs. As shown in Figure 5, each recorded value is compared with the best result of 20 consecutive runs (f* of A-BRKGA_INLS in Table V). In the u724_010 and u724_030 instances, A-BRKGA_INLS obtained the best solution within the first five runs and within runs 10-15, respectively. In the u724_075, u724_125, and u724_200 instances, A-BRKGA_INLS obtained better solutions than those reported in Table V within runs 10-15, 20-25, and 35-40, respectively. It can be seen that, influenced by the random seed, A-BRKGA_INLS can perform better as the number of runs increases. In addition, reference [11] applied its proposed method GB21_MH to CCCP and reported experimental results; Table VII compares A-BRKGA_INLS with GB21_MH. The experiments show that our method obtains a better solution in eight instances, although GB21_MH improves the solution greatly in some other instances.

B. EFFECTIVENESS ANALYSIS OF IMPRECISE SEARCH

The final precise search can significantly improve the quality of solutions, but this does not mean that applying the precise search during evolution would also produce excellent solutions. To verify this, we compare A-BRKGA_INLS with a variant that does not use the imprecise search. In Table VIII, the second set of data in each column shows the experimental results of this variant on 33 CCCP benchmark instances. (In each column, the three sets of data represent, in turn, A-BRKGA_INLS, the variant without imprecise search, and the variant without iterative neighborhood. Values in bold are better than the best-known solutions, and values marked with * are better than those of A-BRKGA_INLS.) Each instance was run 20 consecutive times. As shown in Table VIII, the variant without imprecise search outperforms the best-known solutions in 15 instances but outperforms the A-BRKGA_INLS solutions in only six; in most instances its best solutions are worse than those of A-BRKGA_INLS. It can still obtain some excellent solutions thanks to the iterative neighborhood, but overall it is not as good as A-BRKGA_INLS. Moreover, its average running time to find the best solution (t* = 65.4 s) is much lower than that of A-BRKGA_INLS (t* = 562.15 s): the variant without imprecise search reaches a local optimum within tens of seconds in most instances, and its results are not improved in the later stage of the evolution. These results support the claim that the imprecise search effectively prevents the algorithm from falling into a local optimum prematurely and helps it yield high-quality solutions.

C. EFFECTIVENESS ANALYSIS OF ITERATIVE NEIGHBORHOOD

Both IINLS and PINLS search the shift neighborhood and the swap neighborhood iteratively, instead of searching for shifts and swaps simultaneously. In other words, IINLS and PINLS search the shift neighborhood until no feasible shift remains and only then move to the swap neighborhood to perform feasible swaps. Since CCCP is strictly limited by capacity, the order of shifting and swapping is very important. After a large number of shifts, INLS searches for swaps that cannot be decomposed into two shifts because of capacity constraints; starting from the best shift-neighborhood solution, INLS then makes the swap neighborhood optimal, and the swap neighborhood does not affect the shift neighborhood. The shift neighborhood has more room for adjustment and can steer the solution in a better direction. If, on the other hand, shifts and swaps are alternated frequently, the solution is not optimal in any neighborhood at any point of the process except at the very end. To show that the iterative neighborhood search provides better results than the simultaneous search, we conducted a comparative experiment.
The third set of data in each column of Table VIII shows the experimental results of the variant without iterative neighborhood; again, each instance was run 20 consecutive times. This variant uses A-BRKGA to evolve the population but performs two local searches that look for swaps and shifts simultaneously; like A-BRKGA_INLS, it combines the imprecise and the precise search. In 13 instances the variant without iterative neighborhood finds solutions better than the best-known ones, but these solutions are not as good as those of A-BRKGA_INLS. In one instance its solution outperforms A-BRKGA_INLS but is still worse than the best-known solution, and in four instances (bold and marked with *) it outperforms both the best-known solutions and the A-BRKGA_INLS solutions. The solutions of the remaining instances are poor. As shown in the last row of Table VIII, the average of the best solutions of A-BRKGA_INLS (391,639.35) is much smaller than that of the variant without iterative neighborhood (414,047.50). In summary, A-BRKGA_INLS searches the neighborhood space of solutions iteratively, which is advantageous compared with the non-iterative variant on these data sets; the iterative neighborhood search is therefore effective.

VI. CONCLUSION

This paper presents an optimized local search algorithm based on A-BRKGA for solving the CCCP. Unlike the local search procedures in the previous literature, A-BRKGA_INLS searches the shift neighborhood and the swap neighborhood iteratively: the swap neighborhood is searched only once no viable shift remains in the shift neighborhood. In this way, A-BRKGA_INLS explores the neighborhood of solutions more fully. During the population evolution, evaluation functions are used to perform an imprecise local search, which keeps the search from falling into a local optimum prematurely; when the evolution is completed, precise calculations are adopted to further improve the quality of the solutions. The performance of the algorithm was tested on a general benchmark containing 53 instances, and the experimental results show that it is effective for solving CCCP: 23 new best-known solutions are provided and 15 instances match the current best-known solutions. Compared with A-BRKGA+CS, the difference in time cost between the two algorithms is small. Evaluating the contribution of the overlapping-area strategy is our next research issue. Future research can also focus on heuristic algorithms for data sets with tight cluster capacity limits, such as x-n-m, so that the algorithm can fully explore the neighborhood space while constructing only a small number of feasible solutions. Second, multi-objective CCCP is another research direction that would extend the CCCP model to a wider range of applications.
9,074.8
2022-01-01T00:00:00.000
[ "Computer Science" ]
W7Ni3Fe-Ti6Al4V bimetallic layered structures via directed energy deposition ABSTRACT Bimetallic structures of Ti6Al4V-W7Ni3Fe were fabricated via directed energy deposition (DED)-based additive manufacturing (AM). Our research demonstrates the ability of DED-based AM to control Ti6Al4V-W7Ni3Fe bimetallic structures with tailorable mechanical and thermal performance. The thermal conductivity of the bimetallic structures was three times higher than Ti6Al4V at 300°C. Uniaxial compression along the transverse direction showed a failure strain of 63% compared to pure Ti6Al4V, while the longitudinal direction showed a failure strain of only 37% of Ti6Al4V. Variable hardness was observed throughout the sample due to diffusion of elements and intermetallic phase formations. Scanning electron microscopy revealed that the interfaces in the as-printed samples were crack-free with elemental gradients. Introduction Multi-material structures with varying functionality can offer a unique solution to engineering problems compared to single-material structures.One common type of multi-material structure is metal-ceramic composites, such as SS316L + TiB 2 , which have been shown to improve the mechanical properties of the matrix material (Ang, Sing, and Lim 2022).Another type of multi-material structure is bimetallic layered structures, composites with unique and tailorable properties for various applications such as ballistics and aerospace.One type of alloy commonly used in these applications is tungsten heavy alloys (WHAs), a material composed of 90-97 wt% W with lower melting point elements such as Cu, Fe, Ni, or Co (Bose et al. 2018).In physical terms, these alloys consist of unmelted W particles held together by a lighter and ductile element matrix (Şahin 2014).Performance-wise, the combination of the body-centered cubic (BCC) tungsten phase and the face-centered cubic (FCC) metallic matrix gives the WHAs outstanding properties such as high density (16-18 g/cm 3 ), high strength (1000-1700 MPa), high ductility (10-30%), high thermal conductivity, and good corrosion resistance (Srikanth and Upadhyaya 1984).Due to these properties, WHAs have been widely applied in nuclear plants for radiation shields, defense, vibration dampers, and various aerospace applications (Cai et al. 1995;Park et al. 2001;Ryu and Hong 2003;Sunwoo et al. 2006;Liu et al. 2008).Liquidphase sintering (LPS) is the most common method to process the WHAs.The WHA processed via LPS showed higher ductility than that processed by solidstate sintering (SSS) (Gurwell et al. 1984).However, there are still some disadvantages of the WHAs processed via LPS.The main disadvantage is the weak particle-particle interaction between each tungsten particle which could be the source of crack initiation even in well-sintered WHA materials (Gurwell et al. 
1984).Other disadvantages of using LPS to process WHAs are the lack of flexibility and the cost.It has been reported that using LPS to process WHAs requires a long sintering time at high temperatures (typically ∼ 60 min at ∼ 1500°C ) to keep melting the feedstock materials to promote diffusion and densification.The sintering time can even be longer for larger components (German, Suri, and Park 2009).Although some advanced sintering techniques, such as spark plasma sintering (SPS) and microwave sintering (MS), have shown time-related cost reduction in processing the WHAs, the density of the processed WHAs by SPS and MS is lower than the one processed via the LPS method.Furthermore, preparing samples with homogeneous microstructure for large parts with complex geometry is challenging using these sintering techniques (Zhou et al. 2021).Due to these drawbacks, researchers have been gravitating toward using various additive manufacturing (AM) techniques to create dense WHA parts in a time-efficient manner (Li et al. 2020). Directed energy deposition (DED) is an AM method popular for large structures and parts with compositional variations.The laser-based DED uses a laser as energy input to melt the powder feedstocks on a metallic substrate.The raster scan motion allows the previous molten region to experience rapid solidification.The final structure processed by the DED method could have a near-net shape from the digital design and is achieved by repeating the laser raster scan and layer-by-layer processing.Moreover, many DED systems have multiple powder feeders and real-time control over the processing parameters such as laser power, laser scan speed, and powder feed rate, enabling multi-material fabrication in one operation.The coaxial powder deposition feature also improves the powder deposition efficiency.These advantages make DED one of the optimal ways for manufacturing WHAs, particularly with W7Ni3Fe (Wei et al. 2021;Zhou et al. 2021;Ye et al. 2022).Furthermore, these advantages allow for the capability to process multi-material structures such as Inconel 718/ GRCop 84 copper alloy, Ti6Al4V/ SS 410, Ti6Al4V/Al12Si, Ni-Ti, SS 316L/Al12Si, Inconel 718/Ti6Al4V, and Inconel 718-W7Ni3Fe (Onuike and Bandyopadhyay 2018, 2019, 2020;Zhang andBandyopadhyay 2019, 2021;Afrouzian et al. 2022;Farzaneh et al. 2022;Groden et al. 2022).Due to the powder-based DED system, there are virtually endless bimetallic compositions in addition to those listed above that can be made with any custom alloy (Bandyopadhyay et al. 2022).The DED-processed multi-material structures showed enhanced mechanical properties compared to the single-material structures.Further, the DED's processing time is lower than traditional sintering methods.The materials processed via the DED also showed homogenous microstructure across the parts. One alloy that would greatly benefit from the thermal properties of WHAs is Ti6Al4V, which is one of the primary alloys used in aerospace applications due to its high strength-to-weight ratio, temperature resistance, corrosion resistance, and fatigue strength (Rodriguez et al. 2015;Denti et al. 
2019;Liu and Shin 2019).Furthermore, the high weldability of Ti6Al4V makes it one of the optimal alloys to be used for metal AM (Liu and Shin 2019).In this work, a bimetallic system of Ti6Al4V and W7Ni3Fe was manufactured via DED and then underwent mechanical and thermal testing.The microstructure of the DED processed Ti6Al4V/W7Ni3Fe alloy bimetallic layered structure was characterised by the scanning electron microscope (SEM) and energy-dispersive spectroscopy (EDS).Mechanical and thermal properties such as microhardness, compressive strengths along longitudinal and transverse directions, and thermal diffusivity were measured on the DED-processed bimetallic layered structures.Each section was processed with different number of layers, to prove the consistency of DED method in processing materials at different scales. Materials and methods 2.1.DED-based AM processing of Ti6Al4V-W7Ni3Fe bimetallic layered structures Spherical Ti6Al4V (ASTM B348-13, Grade 23, Oxygen < 0.10%, TEKNA TM , Québec, Canada) and W7Ni3Fe alloy powders (TUNGSTEN PARTS WYOMING, WY, USA) were used as the feedstock materials.The W7Ni3Fe was an agglomerated powder consisting of spherical particles composed of fused W, Ni, and Fe particles.This morphology is shown in Figure 1.A mechanical sieve shaker was utilised to sieve the powders for 15 min, and the powders with particle sizes ranging from 45 to 150 µm (−100/+325 mesh) for Ti6Al4V and 15-45 µm for W7Ni3Fe were collected for the best printing results.A DED system (FormAlloy, Spring Valley, CA) equipped with an optic fiber laser source (500 W maximum) was utilised to fabricate the Ti6Al4V/W7Ni3Fe alloy bimetallic layered structure.The DED system had two powder feeders, allowing the process to produce a structure with multiple materials.In this case, the W7Ni3Fe alloy powder was loaded into powder feeder 1, and the Ti6Al4V powder was loaded into powder feeder 2. Additionally, a Ti6Al4V metal substrate (Tiger Metals Group, Los Angeles, CA) with a thickness of 3 mm was used.In the preparation stage, the working chamber of the DED system was purged with Ar gas to reduce the O 2 level below 20 ppm to prevent oxidation during the laser processing.The H 2 O level was controlled below 10 ppm.Prior to the processing of the bimetallic structures, process optimisation was done with each composition to identify optimum processing parameters with minimum defects while maintaining a layer height that is in line with the layer thickness based on the tool-path. Figure 2a illustrates the design of the DED processed Ti6Al4V-W7Ni3Fe bimetallic layered structure.The designed structure was composed of the W7Ni3Fe alloy section and Ti6Al4V section alternating.The in-fill laser scan orientation was set as 0 and 90 degrees (Figure 2b). 
Figure 2c demonstrates the schematic of the DED processing of the Ti6Al4V-W7Ni3Fe bimetallic layered structure. Both cylindrical (12.7 mm diameter) and square (13 mm × 13 mm, length × width) Ti6Al4V-W7Ni3Fe structures were fabricated. Additionally, Ti6Al4V-W7Ni3Fe structures with different numbers of layers in each section, such as 5 layers of W7Ni3Fe + 5 layers of Ti6Al4V alternating, 5 layers of W7Ni3Fe + 10 layers of Ti6Al4V alternating, 10 layers of W7Ni3Fe + 10 layers of Ti6Al4V alternating, and 20 layers of W7Ni3Fe + 20 layers of Ti6Al4V alternating, were built to verify the performance of the processing parameters. Table 1 shows the details of the optimised processing parameters used to build each section of the Ti6Al4V-W7Ni3Fe bimetallic. Specifically, a W7Ni3Fe alloy section ((W7Ni3Fe) sec. 1) was initially deposited using a laser power of 270 W, a powder feed rate of 48.6 g/min, and a laser scan speed of 600 mm/min. The layer thickness of the first (W7Ni3Fe) section was set as 0.22 mm. After the first W7Ni3Fe alloy section was done, a Ti6Al4V section (sec. 1) was deposited on top of the (W7Ni3Fe) sec. 1 section using a 300 W laser power, a 17.3 g/min powder feed rate, and an 800 mm/min laser scan speed. The layer thickness of the Ti6Al4V section (sec. 1) was set as 0.35 mm. When the deposition of the Ti6Al4V section (sec. 1) was finished, another set of W7Ni3Fe and Ti6Al4V sections, noted as W7Ni3Fe sec. 2 and Ti6Al4V sec. 2, was deposited on top using the corresponding processing parameters. The hatch distance was set as 0.417 mm for all sections.

Characterisation
A low-speed diamond saw (MTI, Richmond, CA) was utilised to cut the as-fabricated samples and expose the cross-section. The sectioned samples were mounted in phenolic molding powder, and surface finishing was then applied to the exposed cross-sections: the samples were ground with sandpapers of grits from 200 to 2000 and then polished with Al2O3 suspensions with particle sizes from 1 µm to 0.05 µm. Samples were immersed in a 100% ethanol solution and cleaned in an ultrasonicator for 20 min.

Microstructure and phase analyses
Kroll's reagent (92 mL of DI water, 6 mL of HNO3, and 2 mL of HF) was used to reveal the microstructure of the Ti6Al4V-W7Ni3Fe bimetallic structure; the sample was fully immersed in Kroll's reagent for 30 s. The microstructures at the cross-section of the fabricated bimetallic layered sample were obtained by SEM with both a secondary electron (SE) and a backscattered electron (BSE) detector. In addition, the elemental distribution at the samples' cross-section was analyzed using energy-dispersive spectroscopy (EDS). X-ray diffraction (XRD) analysis was performed on the cross-section of the DED fabricated Ti6Al4V-W7Ni3Fe bimetallic structures using a Siemens D500 Kristalloflex diffractometer; Cu-Kα was selected as the radiation source, and a 2θ range from 30 to 80 degrees with a 0.05-degree step size was applied for the analysis.

Microhardness test
The microhardness data at the cross-section were obtained with a microhardness tester (Phase II, NJ). At least ten indentations were applied across the interface at each depth, and the hardness value at each depth was averaged over the resulting measurements. The hardness test used a testing load of 1.96 N (HV 0.2) and a dwell time of 15 s.
Thermal diffusivity test
The thermal diffusivity measurements of the DED processed Ti6Al4V-W7Ni3Fe bimetallic structures were performed using a Netzsch LFA 447 NanoFlash® thermal diffusivity system. In addition, specimens of the Ti6Al4V substrate, DED processed pure W7Ni3Fe alloy, and DED processed pure Ti6Al4V were tested to compare their results with those of the Ti6Al4V-W7Ni3Fe bimetallic structures. For all specimens, thermal diffusivity measurements were obtained at 25°C and from 50 to 300°C in 50°C increments, with three measurements completed at each temperature point.

Compression test
The Ti6Al4V-W7Ni3Fe bimetallic layered structure was tested using a SHIMADZU AG-1S (50 kN) screw-driven universal testing machine. The bimetallic specimens for the compression tests were prepared with a square base (6 mm × 6 mm, length × width) and a height of approximately 13 mm. The dimensions of the pure specimens varied slightly, with a height of ∼9 mm and side lengths of ∼5 mm, to ensure that the tester did not exceed the 50 kN limit. The displacement rate for all samples was 0.33 mm/min, at least 3 samples were tested under each condition, and specimens in both longitudinal and transverse orientations were tested.

Results and discussion
Figure 3 shows images of the as-fabricated Ti6Al4V-W7Ni3Fe bimetallic samples and their cross-sections processed via DED with different designs in the number of layers per section. Specifically, Figure 3(a1) and (a2) show the as-fabricated sample and cross-section of the design having 5 layers of W7Ni3Fe + 5 layers of Ti6Al4V alternating. Figure 3(b1) and (b2) show the as-fabricated sample and cross-section of the design with 10 layers of W7Ni3Fe + 5 layers of Ti6Al4V alternating. Figure 3(c1) and (c2) show the as-fabricated sample and cross-section of the design with 10 layers of W7Ni3Fe + 10 layers of Ti6Al4V alternating. Figure 3(d1) and (d2) show the as-fabricated sample and cross-section of the design with 20 layers of W7Ni3Fe + 20 layers of Ti6Al4V alternating. No visible defects or delamination were found in any of the samples, which demonstrates the reliability of processing the Ti6Al4V-W7Ni3Fe bimetallic samples at different scales via the DED technique. The processing parameters selected for the W7Ni3Fe section and the Ti6Al4V section were optimised through a series of experiments. To optimise the processing parameters for the W7Ni3Fe section, laser powers from 250 W to 300 W were tested with a fixed powder feed rate of 48.6 g/min, a fixed laser scan speed of 600 mm/min, and a fixed layer thickness of 0.25 mm. The results showed that the deposition could not form a W7Ni3Fe structure when the laser power was too low (∼250 W) or too high (∼300 W); a laser power of 270 W gave the best print result for building pure W7Ni3Fe. However, with a laser power of 270 W the laser went out of focus after a certain number of layers had been deposited, indicating an incorrect layer thickness; after the layer thickness was adjusted to 0.22 mm, a tall W7Ni3Fe structure could be fabricated without laser focusing issues. The processing parameters for the Ti6Al4V section were obtained from the previous study (Afrouzian et al.
2022).The DED processed Ti6Al4V-W7Ni3Fe bimetallic samples were fabricated by applying the 'direct deposition' build strategy, which means that there is no functionally graded zone and only a small transition interface between the two materials as a result of the melting of the previous layers (Bandyopadhyay, Zhang, and Onuike 2022). Figures 4 and 5 reveal the microstructures of the DED processed Ti6Al4V-W7Ni3Fe bimetallic structure across each section.Specifically, Figure 4a shows the microstructure at the bulk Ti6Al4V substrate.Coarse equiaxed grains (α-Ti) and discontinuous β-Ti can be seen.Figure 4b and c are the SE and BSE-SEM images at the interface between the Ti6Al4V substrate and the W7Ni3Fe (sec. 1) section.Based on Figure 4b, acicular Widmanstätten α-Ti laths were seen near the interface region of the Ti6Al4V substrate.The formation of acicular Widmanstätten α-Ti laths was caused by reheating and the rapid cooling rate by the laser deposition and scan motion on top of the substrate (Zhang et al. 2009).In Figure 4c, the bright spherical microstructures were W composition.The spherical W particles were embedded in the Ni-Fe matrix, typically found in the LPS processed W7Ni3Fe (Gurwell et al. 1984;Srikanth and Upadhyaya 1984;Şahin 2014;Zhou et al. 2021).Figure 4b and c show that no visible defects occurred at this interface.Figure 4d illustrates the microstructure at the (W7Ni3Fe) sec. 1 section.A significantly higher volume fraction of W composition over the Ni-Fe composition was found in this section.Compared to Figure 4c, the W particles in Figure 4d coalesced and formed large W grains.Since the DED processing involves rapid solidification from the molten phase, the duration of a solution-reprecipitation in the W composition was extremely short.The W particle morphology results were smaller and less spheroidal than the W particle processed by conventional LPS methods due to the breaking of the agglomerated particles (Zhou et al. 2021).Figure 5a and b show the BSE-SEM images at the interface between the (W7Ni3Fe) sec. 1 section and (Ti6Al4V) sec. 1 section.Micropores were found in this region, suggesting further optimisation of the processing parameters; however, some studies have developed efficient methods of optimising the parameters to obtain the lowest possible porosity (Rao et al. 2022).Although the DED system used in this study has two powder feeders, both powder feeders share the same powder feeding line.The W7Ni3Fe in the (Ti6Al4V) sec. 1 likely came from W7Ni3Fe particles that remained in the powder feeding line.No micro cracks could be found at this interface.Figure 5c and d illustrate the morphology at the interface between the (Ti6Al4V) sec. 1 section and the (W7Ni3Fe) sec. 2 section.Micropores were again seen in the (W7Ni3Fe) sec. 2 sections near the interface.Figure 5d is a magnified BSE-SEM image of the selected area from Figure 5c.The spherical W particles were embedded in the polygon-shaped grains of the Ti6Al4V matrix.Figure 5e shows the microstructure at the (W7Ni3Fe) sec. 2 section, similar to the morphology found in the (W7Ni3Fe) sec. 1 section (Figure 4d). Figure 5f demonstrates the microstructure at the (Ti6Al4V) sec. 2 sections, similar to the microstructure shown in Figure 5d. 
The evolution of the W7Ni3Fe microstructure during the DED processing can be concluded in three stages: (a) rearrangement stage, (b) solution-reprecipitation, and (c) solid-state.The Ni-Fe and Ti6Al4V compositions are melted in the rearrangement stage due to the laser energy input, forming a small molten pool with W particles.The W particles are rearranged under the capillary force as in the DED process, and other forces, such as Marangoni force, buoyancy force, and the impact force from the powder carrier gas (Zhang et al. 2009;DebRoy et al. 2018).After the laser moves away, the dissolved W begins to reprecipitate at the solution-reprecipitation stage.The W can be reprecipitated either on residual W particles or forming individual nuclei in the liquid.Due to the rapid cooling rate of the DED process, the reprecipitated W grains could grow into irregular or dendritic shapes.Due to the rapid movement of the solidification front, the Ti6Al4V and Ni-Fe matrices grow into polygon shape grains.The powder carrier gas delivering the powders gathers and escapes from the molten pool.The powder carrier gas is trapped in the solidified molten pool during the solidstate stage due to the rapid solidification, resulting in micropore formation (Zhang et al. 2009). Figure 6 shows the EDS mapping of the Ti and W elements at the interfaces of the DED processed Ti6Al4V/W7Ni3Fe alloy bimetallic layered structure.Specifically, Figure 6a shows the Ti and W elemental distributions at the interface between the Ti6Al4V substrate and the (W7Ni3Fe) sec. 1 section.Figure 6b-d illustrate the EDS mapping at the interfaces between the (W7Ni3Fe) x and (Ti6Al4V) y sections.The remaining W particles could be seen in the (Ti6Al4V) sections.Moreover, the diffusion behaviour of the Ti/W elements at the interface was found.The mechanism of W diffusing into Ti was mainly due to the density difference between W and Ti, which causes W to sink into the Ti region (Zhang and Bandyopadhyay 2021).Furthermore, Ti diffuses into W through the defects of W, such as at dislocations and grain boundaries (Wang et al. 2021).The phase analysis was performed on the DED processed Ti6Al4V, W7Ni3Fe, Ti6Al4V/W7Ni3Fe bimetallic materials, shown in Figure 7.Only Ti and W phases were detected in the DED processed Ti6Al4V-W7Ni3Fe bimetallic structures using XRD. Figure 8a shows the hardness profiles across all the sections of the DED processed Ti6Al4V-W7Ni3Fe bimetallic layered structures.The Ti6Al4V substrate had a hardness of 315.2 ± 11.6 HV 0.2 .The microhardness value increased to 412.3 ± 43.6 HV 0.2 near the interface between the Ti6Al4V substrate and (W7Ni3Fe) sec. 1 section.The increase in hardness was caused due to microstructural refinement.The hardness value of (W7Ni3Fe) sec. 1 near the interface between the Ti6Al4V substrate and (W7Ni3Fe) sec. 1 was 596.7 ± 34.6 HV 0.2 , and then decreased to 528.2 ± 27.5 HV 0.2 . According to Figure 4c, spherical W particles were embedded in the Ni-Fe matrix at the first couple of deposited layers.The embedded W particles served as particle reinforcement, leading to higher hardness, but due to the hardness being higher than W7Ni3Fe, it suggests the formation of intermetallics despite the phases not showing up in the XRD analysis.These intermetallics are likely Ni-Ti compounds, as Ni and Ti are known to react with each other, especially in the case of bimetallic structures (Afrouzian et al. 2022;Farzaneh et al. 
2022). In Figure 4d, the W particles coalesced and formed large W grains. According to the Hall-Petch relation, hardness decreases as the grain size increases, which caused the drop in hardness. The microhardness value of the (W7Ni3Fe) sec. 1 section near the interface between (W7Ni3Fe) sec. 1 and (Ti6Al4V) sec. 1 was 517.5 ± 51.5 HV 0.2, similar to the previously measured region. The hardness of the (Ti6Al4V) sec. 1 section at the interface between (W7Ni3Fe) sec. 1 and (Ti6Al4V) sec. 1 was 544.5 ± 22.8 HV 0.2, which is higher than the hardness of the Ti6Al4V substrate. Figure 5a and b show that remaining W particles were embedded in the Ti6Al4V matrix; the increased hardness was caused by W particle reinforcement and intermetallic phase formation. The hardness further decreased to 475.4 ± 6.9 HV 0.2, owing to less W particle reinforcement. The hardness of the (Ti6Al4V) sec. 1 section at the interface between (Ti6Al4V) sec. 1 and (W7Ni3Fe) sec. 2 increased to 557.5 ± 30.0 HV 0.2; the diffusion between Ti and W near this region increased the hardness. The hardness of the (W7Ni3Fe) sec. 2 section followed the same trend as that of the (W7Ni3Fe) sec. 1 section: near the interface between (Ti6Al4V) sec. 1 and (W7Ni3Fe) sec. 2 it was 615.1 ± 41.7 HV 0.2, and it then dropped to 548.9 ± 20.7 HV 0.2. The higher hardness near the interface region was caused by W/Ti diffusion and fine W particle reinforcement, while the decrease in hardness resulted from the formation of large grains (Figure 5e). The hardness of the (W7Ni3Fe) sec. 2 near the interface between (W7Ni3Fe) sec. 2 and (Ti6Al4V) sec. 2 was 563.6 ± 37.8 HV 0.2. The hardness of the (Ti6Al4V) sec. 2 section showed a trend similar to that of the (Ti6Al4V) sec. 1 section: near the interface between (W7Ni3Fe) sec. 2 and (Ti6Al4V) sec. 2 it was 546.1 ± 18.6 HV 0.2, and it then decreased to 504.3 ± 23.3 HV 0.2.

Figure 8b shows the results of the thermal diffusivity measurements plotted as a function of temperature. The DED processed Ti6Al4V had the lowest thermal diffusivity values, ranging from 2.881 ± 0.002 mm²/s to 3.702 ± 0.014 mm²/s between 25 and 300°C. The DED processed pure W7Ni3Fe had the highest values, ranging from 18.853 ± 0.063 mm²/s to 20.258 ± 0.066 mm²/s between 25 and 300°C. The DED processed Ti6Al4V-W7Ni3Fe bimetallic layered structure showed thermal diffusivity values between those of the two base materials, ranging from 9.47 ± 0.015 mm²/s to 11.272 ± 0.007 mm²/s between 25 and 300°C. Based on these results, the thermal diffusivity of the Ti6Al4V-W7Ni3Fe bimetallic structure was approximately 3 times higher than that of the DED processed Ti6Al4V at 300°C, and about half that of the DED processed pure W7Ni3Fe at 300°C.
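The abstract quotes thermal conductivity whereas the quantity measured here is diffusivity; the two are linked by the standard relation below, where ρ is the density and c_p the specific heat of the material, so the factor-of-three statement carries over to conductivity only insofar as ρ·c_p of the bimetallic stack is comparable to that of Ti6Al4V.

κ = α · ρ · c_p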
Figure 9 illustrates the results of the compression tests of the Ti6Al4V-W7Ni3Fe bimetallic structures along the transverse and longitudinal directions, together with the base Ti6Al4V and W7Ni3Fe alloys. The base Ti6Al4V samples performed best of all samples in terms of failure strain, with a yield strength (884 ± 4.5 MPa) very similar to that of the longitudinal bimetallic samples. For the W7Ni3Fe samples, the failure strain was ∼60% of that of Ti6Al4V, and the yield strength (568 ± 60.7 MPa) was about 64% of it. For the longitudinal samples, the formation of brittle intermetallics at the interfaces limited the failure strain to only about 37% of that of Ti6Al4V, while the yield strength was about the same as for Ti6Al4V. For the transverse samples, the failure strain was about 63% of that of Ti6Al4V, so it fell between Ti6Al4V and W7Ni3Fe. Due to the intermetallic phase formation at the interfaces, the compression behaviour was unique, with multiple stages experienced during the deformation of the Ti6Al4V-W7Ni3Fe bimetallic structures in the transverse direction (Figure 10a-c). The first stage was the compression of the W7Ni3Fe section: cracks initiated and propagated in the horizontal direction of the W7Ni3Fe section owing to cracks forming in the ductile matrix. Figure 11a and b show the fractography of the transverse sample; after the compression test, brittle fracture and cleavage could be observed in the W7Ni3Fe region. The second stage was the compression of the Ti6Al4V: micropores in the Ti6Al4V sections were compressed, and vertical cracks formed and penetrated the interfaces between the Ti6Al4V and W7Ni3Fe (Figure 11a). These multi-stage compression behaviours could also be observed in the stress-strain curve of the transverse samples.

Conclusions
This study aimed to fabricate Ti6Al4V-W7Ni3Fe bimetallic, layered structures via DED-based metal AM. Bimetallic structures with multiple designs were fabricated with no visible defects at the interfaces between the Ti6Al4V and the W7Ni3Fe sections. Microstructural characterisation revealed a unique morphology in each section. Specifically, coarse equiaxed grains with discontinuous β-Ti were found in the Ti6Al4V substrate. The microstructure of the W7Ni3Fe section near the interface with the Ti6Al4V substrate showed spherical W particles embedded in the Ni-Fe matrix, and the W7Ni3Fe microstructure changed to large W grains as the deposition proceeded. The increased volume fraction of W was caused by the evaporation of Ni and Fe during laser processing. Remaining W7Ni3Fe was found in the Ti6Al4V sections, whose microstructure showed W particles embedded in the polygonal Ti6Al4V matrix. The EDS mappings at the interfaces of the DED processed Ti6Al4V-W7Ni3Fe structure demonstrated Ti/W diffusion, and the XRD results showed no intermetallic phases formed in the DED processed Ti6Al4V-W7Ni3Fe bimetallic materials. According to the hardness measurements, the hardness value of the (W7Ni3Fe) sec. 2 section near the interface between the (Ti6Al4V) sec. 1 and the (W7Ni3Fe) sec.
2 showed the highest, 615.1 ± 41.7 HV 0.2 .The high hardness value in this region was caused by both Ti/W diffusion and W particle reinforcement and intermetallic phase formation.Based on the thermal diffusivity measurements, the DED processed Ti6Al4V-W7Ni3Fe structure showed three times higher thermal diffusivity than the DED processed Ti6Al4V at 300°C; and was about half of the thermal diffusivity value compared to the DED processed pure W7Ni3Fe at 300°C.The compression results showed multiple-stage deformation in the transverse sample and single-stage behaviour in the longitudinal sample.Additionally, both transverse and longitudinal fractography samples resulted in brittle cleavage at the W7Ni3Fe section.Our results show that the DED process can be used to manufacture layered bimetallic Ti6Al4V-W7Ni3Fe alloy structures with unique and tailorable mechanical and thermal properties.Innovation (JCATI, Seattle, WA) grant in collaboration with the Boeing Company (Seattle, WA).The authors would also like to acknowledge financial support from JCDREAM (Seattle, WA) towards purchasing metal additive manufacturing facilities at WSU.The authors would also like to acknowledge Tungsten Parts Wyoming, Inc., for providing the tungsten powders used in this study. Notes on contributors Yanning Zhang received his BS in Materials Science and Engineering, MS in Mechanical Engineering, and Ph.D. in Mechanical Engineering (2021) from Washington State University (WSU).During his graduate studies at WSU, he worked on bimetallic structures using directed energy deposition-based metal additive manufacturing.He has published ten journal papers and is the inventor of one issued US patent.His work has been cited over 330 times, and the current "h" index is 8. Cory Groden is a MS candidate in the School of Mechanical Engineering at Washington State University.His research focuses mainly on the additive manufacturing of metals and bimetallic system using Directed Energy Deposition-based additive manufacturing, and the creation of mechanically efficient lattice structures using Powder Bed Fusion-based additive manufacturing.times, and the current "h" index is 91 (Google Scholar as of October 20, 2022).He is a Fellow of the Society of Manufacturing Engineers, American Ceramic Society, ASM International, American Institute for Medical and Biological Engineering, American Association of Advancement of Science, National Academy of Inventors, and an elected member at the Washington State Academy of Science. Figure 1 . Figure 1.Morphology of the W7Ni3Fe Powder.(Top) Low magnification image of the agglomerated W7Ni3Fe powders.(Bottom) High magnification image of agglomerated W7Ni3Fe particles. Figure 2 . Figure 2. (a) Design of the DED processed Ti6Al4V/W7Ni3Fe alloy bimetallic layered structure.(b) Demonstration of the laser scanning orientation (0°, 90°).(c) Schematic of the laser fabrication process of the Ti6Al4V/W7Ni3Fe alloy layered structure by the DED method. Figure 8 . Figure 8.(a) Hardness profiles of the DED processed Ti6Al4V/W7Ni3Fe alloy bimetallic, layered structures.(b) Thermal diffusivity as a function of temperature for the DED processed pure W7Ni3Fe alloy, Ti6Al4V/W7Ni3Fe bimetallic layered structure, and the DED processed pure Ti6Al4V. Figure 9 . Figure 9. Stress (in MPa) and strain (in mm/mm) plots from the compression tests of the DED processed bimetallic Ti6Al4V-W7Ni3Fe structures in both transverse and longitudinal directions along with pure Ti6Al4V and W7Ni3Fe samples. Figure 10 . 
Figure 10. The DED-processed Ti6Al4V-W7Ni3Fe bimetallic layered specimens for the compression tests: (a) transverse sample (before compression testing); (b) transverse sample (during compression testing); (c) transverse sample (after compression testing); (d) longitudinal sample (before compression testing); (e) longitudinal sample (during compression testing); (f) longitudinal sample (after compression testing). The red lines denote the visible interfaces on the samples.
Figure 11. Fractography of the DED-processed Ti6Al4V-W7Ni3Fe bimetallic, layered specimens: (a) transverse sample; (b) zoomed image of the selected region from (a); (c) longitudinal sample; (d) zoomed image of the selected region from (c).

E. Nyberg is a Metallurgist and Product Engineer at Kaiser Aluminum. Prior to this, Mr. Nyberg was the Technology Development Manager at Tungsten Parts Wyoming, overseeing processing and materials development and the Quality Management System. Previously he was the Director of Programs at Brunel University, London, for the Brunel Centre for Advanced Solidification Technology (BCAST), leading the development of research programs with international partners. For 24 years Mr. Nyberg held a variety of roles at the Pacific Northwest National Laboratory (PNNL), most recently as Chief Engineer involved in materials research and development. Mr. Nyberg received his M.S. and B.S. degrees in Materials Science and Engineering from Washington State University.
A. Bandyopadhyay is a Boeing Distinguished Professor in the School of Mechanical and Materials Engineering at Washington State University. He has worked with 22 Ph.D. and 30 MS students, is an inventor of 21 issued patents, and has published over 365 technical articles. His work has been cited over 30,000 times, and his current h-index is 91 (Google Scholar as of October 20, 2022). He is a Fellow of the Society of Manufacturing Engineers, the American Ceramic Society, ASM International, the American Institute for Medical and Biological Engineering, the American Association for the Advancement of Science, and the National Academy of Inventors, and an elected member of the Washington State Academy of Science.

Table 1. Processing parameters of the DED-processed Ti6Al4V/W7Ni3Fe alloy bimetallic layered structure.
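As a quick arithmetic check on the relative compression properties quoted in the results above, the ~64% figure for the W7Ni3Fe yield strength is simply the ratio of the two reported yield strengths. The short sketch below uses only values stated in the text and is not part of the original analysis.

```python
# Yield strengths (MPa) as reported in the compression results; uncertainties ignored.
ti6al4v_yield = 884.0    # base Ti6Al4V
w7ni3fe_yield = 568.0    # base W7Ni3Fe

print(f"W7Ni3Fe / Ti6Al4V yield strength ratio: {w7ni3fe_yield / ti6al4v_yield:.0%}")
# -> 64%, matching the value quoted in the text
```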
7,110.6
2022-11-07T00:00:00.000
[ "Materials Science" ]
The Interplay of Notch Signaling and STAT3 in TLR-Activated Human Primary Monocytes

The highly conserved Notch signaling pathway essentially participates in immunity through regulation of developmental processes and immune cell activity. In the adaptive immune system, the impact of the Notch cascade in T and B differentiation is well studied. In contrast, the function and regulation of Notch signaling in the myeloid lineage during infection is poorly understood. Here we show that TLR signaling, triggered through LPS stimulation or in vitro infection with various Gram-negative and -positive bacteria, stimulates Notch receptor ligand Delta-like 1 (DLL1) expression and Notch signaling in human blood-derived monocytes. TLR activation induces DLL1 indirectly, through stimulated cytokine expression and autocrine cytokine receptor-mediated signal transducer and activator of transcription 3 (STAT3). Furthermore, we reveal a positive feedback loop between Notch signaling and the Janus kinase (JAK)/STAT3 pathway during in vitro infection that involves Notch-boosted IL-6. Inhibition of Notch signaling by the γ-secretase inhibitor DAPT impairs TLR4-stimulated accumulation of the NF-κB subunit p65 in the nucleus and subsequently reduces LPS- and infection-mediated IL-6 production. The reduced IL-6 release correlates with a diminished STAT3 phosphorylation and reduced expression of the STAT3-dependent target gene programmed death-ligand 1 (PD-L1). In corroboration, recombinant soluble DLL1 and the Notch activator oxaliplatin stimulate STAT3 phosphorylation and expression of immune-suppressive PD-L1. Therefore we propose a bidirectional interaction between Notch signaling and STAT3 that stabilizes activation of the transcription factor and supports STAT3-dependent remodeling of myeloid cells toward an immuno-suppressive phenotype. In summary, the study provides new insights into the complex network of Notch regulation in myeloid cells during in vitro infection. Moreover, it points to a participation of Notch in stabilizing TLR-mediated STAT3 activation and STAT3-mediated modulation of the myeloid functional phenotype through induction of immune-suppressive PD-L1.
INTRODUCTION
The highly conserved Notch signaling pathway has an essentially simple composition and depends on Notch ligands binding to Notch receptors on neighboring cells. In mammals, there are four Notch receptors (Notch1-4) and five Notch ligands [Jagged-1, Jagged-2, Delta-like 1 (DLL1), DLL3, and DLL4] (Radtke et al., 2010). Both Notch receptors and ligands are transmembrane proteins with large extracellular domains. Upon ligand binding, a conformational change in the receptor enables two consecutive proteolytic processing events. The first cleavage, which results in shedding of the extracellular domain, is mediated by the ADAM (a disintegrin and metalloprotease) family. The following cleavage inside the transmembrane domain is catalyzed by a multicomponent γ-secretase complex that releases the Notch intracellular domain (NICD), which translocates to the nucleus. In the nucleus, the NICD interacts with the DNA-binding protein CSL (also termed RBP-J), which in its unbound state probably functions as a transcriptional repressor. Binding of NICD displaces co-repressor complexes, recruits co-activators (mastermind proteins), and finally induces transcription of Notch target genes (Kopan and Ilagan, 2009; Kovall and Blacklow, 2010; Bray, 2016). In mammals, the most common Notch target genes are members of the basic helix-loop-helix transcription factors belonging to the hairy and enhancer of split (HES) and HES with YRPW motif (HEY) families (Iso et al., 2003). Similar to their receptors, Notch ligands also undergo ADAM- and γ-secretase-mediated cleavage upon receptor binding (Zolkiewska, 2008). The induction of Notch target genes contributes to a wide array of developmental processes in different organ systems. Notch plays a central role in the hematopoietic as well as the immune system through regulating multiple lineage decisions of developing immune cells (Radtke et al., 2010). Particularly in the adaptive immune system, the impact of the Notch cascade in T and B differentiation is well accepted (Radtke et al., 2004). For a long time, myeloid cells such as monocytes, macrophages and dendritic cells (DCs) were mostly considered as signal-sending cells that express Notch ligands and activate the cascade in receptor-expressing, signal-receiving lymphocytes. Nevertheless, myeloid cells also express Notch receptors, and the influence of Notch on myeloid cell differentiation is becoming more and more apparent (Radtke et al., 2010). Besides its role in differentiation, increasing evidence suggests an involvement of Notch signaling in mature immune cell activation and function during viral and bacterial infection (Shang et al., 2016). During infection, myeloid cells recognize pathogen-associated molecular patterns (PAMPs) via a variety of pattern recognition receptors (PRRs), including Toll-like receptors (TLRs).
In terms of Gram-negative bacteria, the cell wall component lipopolysaccharide (LPS) activates TLR4 that mediates expression of inflammatory cytokines through MAPKinase and Nuclear factor κB (NF-κB) signaling. Subsequently, the released cytokines bind the corresponding receptors in an autocrine and paracrine fashion, activate signaling cascades such as the JAK/STAT pathway and thereby modulate the further direction of the immune response. As an inflammatory environment is associated with Notch signaling a modulation of the cascade through TLR-mediated signaling seems likely but is far from being elucidated. Activated TLR4-signaling might modulate the Notch cascade directly through histones modifications at the Notch target gene loci (Hu et al., 2008) and indirectly through induction of Notch receptors and ligands (Monsalve et al., 2006;Palaga et al., 2008). On the other hand, activated Notch signaling might modulate TLR signaling. However, whether the modulation is supportive or inhibitory is controversially discussed (Monsalve et al., 2006(Monsalve et al., , 2009Zhang et al., 2012). In this study, we set out to investigate the regulation of Notch signaling in TLR-activated primary monocytes and clarify the interaction between both cascades. Our data show that LPS-stimulation and in vitro infection with Gramnegative and Gram-positive bacteria stimulates expression of Notch receptor ligand DLL1 and induction of Notch target genes in primary human monocytes. Furthermore, DLL1 is strongly upregulated in LPS-stimulated systemic inflammation in mice. The TLR4-stimulated production of DLL1 seems to be an indirect effect, provoked by TLR-stimulated cytokines and the subsequent activation of the transcription factor STAT3. Our data support the hypothesis that Notch signaling increases TLR4-stimulated inflammatory responses of myeloid cells as inhibiting NICD through γ-secretase inhibitor (GSI) DAPT impaired IL-6 expression of activated monocytes. Thereby infection stimulated activation would provoke a TLRsignaling primed positive feedback loop between the Notch cascade and STAT3 that sustains the activity of the key transcription factor. Finally, we propose that Notch-mediated stabilization of STAT3 activity participates in remodeling the functional phenotype of monocytes through facilitating STAT3-dependent target genes, such as immuno-suppressive PD-L1. TLR-Activation Stimulates Notch Ligand DLL1 Expression in Human Primary Monocytes and Mice To investigate the regulation of Notch signaling in TLRactivated myeloid cells, primary monocytes isolated from blood of healthy donors were infected with Gram-positive Enterococcus faecalis or different Gram-negative bacteria (GN), namely Escherichia coli, Klebsiella pneumonia, and Pseudomonas aeruginosa. Furthermore, cells were stimulated with LPS, the main component of GN outer membrane and virulence factor that activates TLR4 signaling (Chow et al., 1999). Bacteria were killed by gentamicin 2 h after infection. The next day supernatant and infected/LPS-treated cells were analyzed. Initially, we checked the induction of Delta-like (DLL) and Jagged Notch ligands after TLR activation. Although the regulation of the Notch cascade is complex, induction of Notch ligands is one essential part of the control network and therefore important to determine. The qRT PCR data in Figure 1A show that Delta-like and Jagged ligands seem to FIGURE 1 | TLR signaling induces DLL1 in primary human monocytes. 
CD14 + monocytes, isolated from blood of healthy donors were stimulated with 100 ng/ml LPS or infected with Escherichia (E.) coli, Enterococcus (E.) faecalis, Klesiella (K.) pneumoniae, Pseudomonas (P.) aeruginosa in a concentration of 10 6 bacteria per 10 6 monocytes/ml. After 2 h bacteria were killed by gentamicin. The next day cells and supernatant were analyzed. (A) RNA was isolated and cDNA produced. Induction of gene expression was analyzed by qRT PCR using sequence-specific primer for DLL1 and JAG1 (gene encoding Jagged-1) and SYBR Green Master mix. Actin was detected as endogenous control for normalization. (B) For western blot analysis equal amounts of protein lysates were blotted and probed with antibodies against DLL1 or GAPDH (loading control). Shown is one representative blot and the associated quantification. Quantification was performed using the Image Analysis System Bioprofil (Fröbel, Germany). The intensity of signals were calculated against loading control and presented as percent of untreated samples. (C) Supernatants were used for ELISA analysis to quantify shedded DLL1 extracellular domain. (A,C) depict the mean and standard deviation of at least three donors. Statistics: *p ≤ 0.05 by Mann-Whitney U-test. be differentially regulated. In comparison to DLL1, the basal expression of JAG1 (gene encoding Jagged-1) is elevated in untreated monocytes. Twenty-four hours after TLR activation through in vitro infection or LPS stimulation, the expression of JAG1 decreases, whereas DLL1 gene expression is highly induced ( Figure 1A) and translated into the protein DLL1 ( Figure 1B). Additionally performed ELISA analyses confirmed shedding of the DLL1 extracellular domain into the culture supernatant ( Figure 1C). The release of cleaved DLL1 occurs upon Notch receptor binding. Therefore one can conclude binding of the ligand to the respective DLL1 receptor. Figure 2A confirms this presumption and reveals a significant induction of Notch signaling target genes HES and HEY after TLR activation. Western blot analyses ( Figure 2B) and the associated quantifications further show that HES and HEY mRNAs are translated into the respective proteins. In order to verify, whether TLR signaling stimulates DLL1 expression and shedding in an in vivo situation, we induced systemic inflammation in mice by injecting LPS (endotoxin mouse model). In our model, 12 week old male C57BL/6 mice were injected intraperitoneally with LPS or NaCl (control group) (n = 16 each group). After 24 h, blood was taken and analyzed for DLL1 by ELISA. Figure 3 compares the DLL1 plasma level of LPS-injected and control mice and reveals a pronounced and significant higher concentration of the Notch ligand in mice with systemic inflammation. TLR-Signaling Promotes DLL1 Expression Indirectly Through Cytokine Receptor-Triggered STAT3 Activation As we had observed a pronounced TLR ligand-stimulated expression of DLL1 and Notch target genes, we aimed to discover the underlying mechanism. Whereas several publications show that TLR ligands can induce Notch components, the actual signaling behind that induction has not been clarified yet. For TLR4-induced DLL1 expression an indirect, not further specified mechanism was suggested (Foldi et al., 2010). Therefore we screened for predicted transcription factor binding sites in the DLL1 promotor region and identified STAT3 as a promising candidate (www.genecards.org/cgibin/carddisp.pl?gene$=$DLL1). 
The putative STAT3 binding sites in the promoter region of DLL1 are depicted in Supplementary Figure 1. The western blot in Figure 4A and the associated quantification ( Figure 4B) show that STAT3 is highly phosphorylated in LPS-stimulated as well as in vitro infected monocytes. Pretreatment of cells with the STAT3 specific inhibitor JSI-124 for 2 h effectively suppressed the activation of STAT3 and the diminished activation correlates with a reduction of DLL1 expression in the activated monocytes (Figures 4A,B). Also, levels of shedded DLL1 in the supernatant were significantly reduced by inhibiting STAT3 in cells stimulated by different bacteria (Figure 4C). The expression of JAG1 was not affected by the inhibitor (Supplementary Figure 2). Till this point, we did not observe any significant differences within the group of Gram-negative bacteria or between Gram-negative and -positive bacteria. Therefore, we concentrated on E. coli-infection and TLR4-mediated signaling for the subsequent experiments. In line with the previous results, E. coli-stimulated Notch target genes HES and HEY were also affected by STAT3 inhibitor treatment and additional stimulation with oxaliplatin that activates Notch signaling via activation of the γ-secretase rescued HES and HEY induction ( Figure 4D). This supports the hypothesis of an indirect TLR-stimulated activation of the Notch target genes and points to STAT3-dependent DLL1 gene induction. Notch Signaling Augments TLR4-Stimulated IL-6 Expression Several publications propose a bidirectional crosstalk of TLR and Notch signaling pathways in macrophages. However, whether the Notch cascade impairs or augments TLR-mediated activation and production of pro-inflammatory cytokines is controversially discussed (Palaga et al., 2008;Monsalve et al., 2009;Zhang et al., 2012). To check an influence of Notch in LPS-and E. colistimulated cytokine production in primary human monocytes, cells were treated with the γ-secretase inhibitor (GSI) DAPT. Figure 5A confirms the abrogation of Notch signaling through DAPT as HES and HEY gene expression was suppressed by the inhibitor. The further performed ELISA analysis revealed that GSI treatment had no effect on TLR4-stimulated production of IL-12p40, slightly reduced TNFα expression but FIGURE 5 | Notch signaling augments TLR4-stimulated IL-6 and TNFα expression. Monocytes were pretreated with DAPT (2.5 µM) for 1 h before LPS stimulation and E. coli infection. 2 h after infection, medium was changed and DAPT was added again. (A) The next day RNA was isolated, cDNA produced and gene expression was analyzed by qRT PCR using sequence-specific primer for HES and Hey and SYBR Green. Results were normalized against actin. (B) Supernatants were used for ELISA analysis to quantify released IL-6, IL-12p40, and TNFα. (C) For western blot analysis cells were harvested 2 h after infection. Nuclear lysates were produced and equal amounts of lysates were blotted and probed with antibodies against p65 or B23 (control). Shown is one representative experiment out of three and the associated quantification of three experiments. (A-C) mean ± std n = 3, *p ≤ = 0.05 by Mann-Whitney U-test. TLR4-activation stimulates cytokine expression amongst other cascades through NF-κB signaling. As a Notch-mediated modulation of NF-κB was reported previously in T cells (Shin et al., 2006;Vacca et al., 2006;Vilimas et al., 2007), we set out to investigate a potential involvement of NF-κB in the DAPTmediated reduction of cytokine expression. 
Therefore we treated the cells once more with DAPT and stimulated TLR4 signaling for 2 h with either LPS or E. coli. The subsequently produced nuclear fraction of lysed cells we analyzed for the amount of NF-κB subunit p65 which is known to induce IL-6 gene expression. The western blot in Figure 5C and the associated quantification of results clearly reveal that inhibiting Notch signaling reduces the level of p65 in the nucleus of LPS-and E. coli-stimulated monocytes. This strongly suggests that NICD-modulated NFκB signaling accounts for the decreased IL-6 production in Notch inhibited monocytes. Notch Signaling Activates STAT3 and Mediates STAT3 Target Gene Expression According to our results, STAT3 induces DLL1 and DLL1mediated Notch signaling boosts TLRL-stimulated IL-6. As binding of IL-6 to the respective receptor mediates activation of the JAK2/STAT3 cascade a positive feedback loop between the Notch pathway and STAT3 seemed likely. To test this hypothesis, we activated Notch signaling in primary monocytes and analyzed activation of STAT3. The western blot in Figure 6A confirms that treatment with recombinant soluble DLL1 (extracellular domain) stimulates STAT3 phosphorylation and reveals that the additional boost of Notch signaling through oxaliplatin ("oxa") further slightly enhanced the activation. As additional hint for a positive feedback loop, inhibition of Notch signaling with DAPT resulted in reduced E. colistimulated phosphorylation of STAT3 ( Figure 6A). Finally, blocking IL-6 receptor signaling through an anti-IL6 blocking FIGURE 6 | Notch signaling modulates functional myeloid phenotype. Primary blood-derived monocytes were stimulated with 3 µg/ml recombinant soluble DLL1 ± 1 µM oxaliplatin (oxa). Aside from that cells were treated with DAPT (1 µM) 1 h before infection with E. coli. (A) Cell lysates were analyzed by western blots for STAT3 phosphorylation. GAPDH was detected as loading control. Results were quantified. (B) Monocytes were treated with an anti-IL6 specific antibody (Thermo Fisher Scientific) in parallel to LPS (100 ng/ml) stimulation. The next day lysates were produced and used in western blot analyses for detection of p(y701)STAT3 and DLL1. Loading control: GAPDH. Experiment was repeated two times with comparable results. (C) After 3 days cells were analyzed with fluorescently-labeled antibodies by flow cytometry for surface expression of PD-L1. Shown are FACS histogram overlays (Weasel.jar software) and the associated quantification of three experiments. Mean ± std n = 3, *p ≤ = 0.05 by Mann-Whitney U-test. STAT3 is known to be one main regulator of the functional phenotype of myeloid cells (Cheng et al., 2003;Melillo et al., 2010;Dufait et al., 2016;Giesbrecht et al., 2017). As inducer of several T cell-inhibiting factors such as PD-L1, the transcription factor is considered as a mediator of an immuno-suppressive APC phenotype. Recently we showed that PRR-mediated pro-inflammatory cytokines activate the key transcription factor STAT3 that ultimately induces a shift in gene expression and induction of several T cell suppressive factors (Wolfle et al., 2011;Giesbrecht et al., 2017). Therefore, we addressed the question, whether Notch signaling boosted IL-6 and stabilization of STAT3 activity influences the functional phenotype of monocytes. We evaluated whether activation of Notch signaling results in expression of STAT3-dependent PD-L1. By flow cytometry analysis the surface expression of PD-L1 was quantified on GSI pretreated and E. 
coli-infected monocytes. Additionally, PD-L1 expression was quantified after sDLL1 ± oxaliplatin-stimulation that activates Notch signaling. The flow cytometry data (overlay and associated quantification) of Figure 6C illustrates that E. coliinfection stimulates a pronounced upregulation of PD-L1 that is significantly diminished through Notch inhibition by DAPT. Furthermore, treatment with sDLL1 stimulated upregulation of the immuno-checkpoint which further increases after boosting of Notch cascade through oxaliplatin ( Figure 6C). Figure 7 summarizes our so far presented results and depicts our proposed mechanism of TLR-stimulated DLL1 and the positive feedback interaction between Notch signaling and STAT3 that results in upregulation of immuno-suppressive PD-L1. DISCUSSION The regulation and function of Notch signaling in myeloid cells involved in innate immunity is poorly understood. Nevertheless, recent findings elucidate a role for the Notch pathway during TLR-associated and inflammation-induced monocyte differentiation and macrophage activation (Radtke et al., 2010;Nakano et al., 2016;Singla et al., 2017). Cells of the myeloid lineage do express a broad range of TLRs as well as Notch receptors and ligands. As both pathways are connected to inflammation, simultaneous activation, and a bidirectional modulation seem likely. However, the reciprocal modulation is far from being understood. In terms of TLR-mediated regulation of Notch signaling it was shown that the expression of Notch receptors and ligands on myeloid cells of both human and mouse origin can be enhanced in response to TLR ligands (Foldi et al., 2010;Radtke et al., 2010) and infection (Narayana and Balaji, 2008;Ito et al., 2009). Nevertheless, the mechanism of TLR-stimulated induction of Notch ligands remains largely elusive and direct and indirect ways are suggested. A study from Foldi et al. proposes an indirect TLR-mediated induction of Delta-like ligands that seems to be independent of Notch signaling components (Foldi et al., 2010). Our study, in human primary monocytes, confirms the indirect induction of DLL1 and extends the knowledge of the underlying mechanism. According to our experiments with a STAT3-specific inhibitor, the TLR-stimulated expression of DLL1 is dependent on the cytokine receptor-activated transcription factor. As the DLL1 promoter contains a binding site for STAT3 we assume a direct induction. A dependency of DLL1 transcription on STAT family members was reported previously in Influenza A Virus (H1N1) infection. The study in murine bone marrowderived macrophages shows that H1N1 infection stimulates the autocrine activation of INFαR-signaling, that subsequently mediates STAT1/2-controlled transcription of DLL1 (Ito et al., 2011). For bacterial infection, we propose for the first time a dependency of DLL1 transcription on STAT3 that is activated through TLR-induced cytokines and autocrine cytokine receptor signaling. In addition to TLR-stimulated Notch pathway modulation, our study highlights the positive regulation of TLR-activated NF-κB signaling and cytokine expression through Notch. In the literature, the Notch-mediated modulation of NF-κB in the myeloid system is discussed controversially. Constitutively active NICD was shown to increase TLR-associated inflammatory response in the murine macrophage cell line Raw 647 and DLL4 was reported to increase LPS-stimulated cytokines by enhancing NF-κB activation (Monsalve et al., 2006(Monsalve et al., , 2009. 
Contrary to these findings, other studies propose that TLR-mediated proinflammatory cytokines are reduced upon over-expression of NICD in mouse peritoneal macrophages (Zhang et al., 2012). Our study supports the hypothesis of a Notch-transduced amplification of inflammation during infection as it reveals a gain of TLR4-primed and NF-κB -mediated expression of IL-6 and TNFα through Notch activation. Furthermore, we state that Notch-boosted IL-6 and the subsequent autocrine IL-6 receptor signaling, that is known to activate JAK/STAT pathway (Heinrich et al., 2003) stabilize STAT3 activation. In our hands, treatment with recombinant soluble DLL1 stimulates STAT3 phosphorylation in monocytes and the GSI-induced decrease in IL-6 production correlates with a diminished STAT3 phosphorylation. An influence of Notch signaling on STAT3 was observed in the absence of infection in breast cancer cells (Jin et al., 2013) with hyperactivated Notch signaling. According to the data the Notch-induced increase in IL-6 results in autocrine and paracrine activation of JAK/STAT signaling and an upregulation of STAT3-target genes (Jin et al., 2013). Here we propose a positive feedback loop between STAT3 and DLL1-activated Notch signaling after TLR-mediated inflammation. In our hypothesis, TLR-mediated NF-κB signaling stimulates production of IL-6, which binds to the IL-6 receptor and activates STAT3. STAT3 induces DLL1 expression that activates Notch signaling and boosts NF-κB-induced IL-6 which transduces stabilization of STAT3 activation. STAT3 is known to be a key transcription factor in immunosuppression through the direct induction of immunosuppressive factors such as PD-L1 (Wolfle et al., 2011;Giesbrecht et al., 2017). From our previous studies it is known that PRRmediated inflammatory cytokines remodel myeloid cells toward an immune suppressive phenotype through prolonged activation of STAT3 (Wolfle et al., 2011;Giesbrecht et al., 2017). Here we propose that the positive feedback loop between Notch and STAT3 after TLR activation facilitates activation of the key transcription factor and thereby induction of PD-L1. We show that inhibition of Notch decreases TLR-stimulated PD-L1 surface expression and that recombinant soluble DLL1 increases PD-L1 on monocytes significantly. PD-L1 induces differentiation of regulatory T cells (Tregs) through binding PD-1 on T cells (Francisco et al., 2009). In terms of Treg differentiation, it was shown that DLL1 enhances the conversion of human memory CD4 T cells into Tregs and that the Notch ligand expands FOXP3 positive T cells (Mota et al., 2014). Here we hypothesize that DLL1 and Notch signaling influence adaptive immunity not only directly but also indirectly through STAT3-driven remodeling of myeloid cells toward an immunesuppressive myeloid phenotype that eventually facilitates Treg formation. In summary our study extends the knowledge of TLR4mediated regulation of Notch signaling, reveal a bidirectional interaction between Notch signaling and STAT3 and gives first hints to an involvement of Notch in the functional phenotype of monocytes. In Vitro Infection Escherichia coli (ATCC25922), Klebsiella pneumonia (ATCC700603), P. aeruginosa (PA01) and E. faecalis (ATCC29212) were cultured overnight on Columbia blood sheep agar at 37 • C at 5% CO 2 in a humidified atmosphere. The next day 1 colony of each culture was transferred into TSB (Tryptic Soy Broth) media and cultured at constant shaking at 200 rpm/37 • C until mid-log phase. 
Then bacterial suspension was adjusted by absorption measurement to a concentration of 10 8 /ml RPMI. 1 × 10 6 sorted CD14 + monocytes were plated in 24-well plate format in 1 ml RPMI/10% FCS. Cells were infected with 1 × 10 6 bacteria/ ml. After 2 h gentamicin (PAA Laboratories, Inc.) was added to a final concentration of 100 ng/ml. The next day cells were analyzed. Mouse Model Systemic Inflammation Twelve week old, male C57BL/6 mice were injected intraperitoneally with 1 mg/kg body weight LPS (Invivogen, San Diego, USA) or NaCl (control group). After 24 h, animals were euthanized and blood was taken and analyzed for DLL1 by ELISA assay (Abcam, Cambridge, UK). Flow Cytometry Monocytes were stimulated with 3 µg/ml recombinant soluble DLL1 (PeproTech, Hamburg, Germany) ±1 µM Oxaliplatin for 3 days. Additionally E. coli infected monocytes were treated with 2.5 µM DAPT on hour before infection. After killing bacteria by gentamicin, medium was changed and DAPT was added, again. Three days later monocytes were analyzed for surface expression of PD-L1 with antibody staining: α-PD-L1 (BD Biosciences, Heidelberg, Germany). Mean fluorescence was recorded using the FACS DIVA V 4.12 software on a FACS Canto (BD Biosciences). Overlays were performed with the Weasel v2.5 software (WEHI, Melbourne, VIC, Australia). Quantification Quantification of immunoblots was performed using the Image Analysis System Bioprofil (Fröbel, Germany) Bio ID software version 12.06. The intensity of signals were calculated against loading control and presented as percent of untreated samples. Statistical Analysis The comparison of two data groups were analyzed by Mann-Whitney U test with * p ≤ 0.05. Ethics Statement This study (taking of blood samples from healthy donors and treatment of blood leukocytes with microbial stimuli) was carried out in accordance with the recommendations of the ethics committee of the Medizinische Fakultät Heidelberg with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The animal experiments were approved by the governmental animal ethics committee (Regierungspraesidium Karlsruhe, file number: 35-9185.81/G-132/15) and conducted according to international FELASA recommendations. The study was reviewed and approved by the ethics committee of Medizinische Fakultät Heidelberg. AUTHOR CONTRIBUTIONS KH and DH designed the study with essential contribution from FU and MAW. DS, DH and UK performed the experiments. KH, DH, and FU prepared the manuscript. All authors discussed the results and implications and approved the manuscript. ACKNOWLEDGMENTS We acknowledge financial support by Deutsche Forschungsgemeinschaft within the funding programme Open Access Publishing, by the Baden-Württemberg Ministry of Science, Research and the Arts and by Ruprecht-Karls-Universität Heidelberg. Furthermore we thank Dennis Nurjadi and Sabrina Klein for the helpful discussion of experimental design.
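Referring back to the immunoblot quantification described in the Methods above (band intensity normalised to the loading control and expressed as percent of the untreated sample), the calculation reduces to a two-step ratio. The sketch below uses hypothetical densitometry values; the actual analysis was performed with the Bioprofil/Bio ID software.

```python
def percent_of_untreated(target, loading, target_untreated, loading_untreated):
    """Normalise a band intensity to its loading control and express it
    relative to the untreated sample (in percent)."""
    normalised = target / loading
    normalised_untreated = target_untreated / loading_untreated
    return 100.0 * normalised / normalised_untreated

# Hypothetical densitometry values (arbitrary units)
print(percent_of_untreated(target=5200, loading=1800,
                           target_untreated=1500, loading_untreated=1700))
# -> ~327 % of the untreated control
```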
6,216
2018-07-10T00:00:00.000
[ "Biology", "Medicine" ]
Colonization of long-term care facility residents in three Italian Provinces by multidrug-resistant bacteria Rationale and aims of the study were to compare colonization frequencies with MDR bacteria isolated from LTCF residents in three different Northern Italian regions, to investigate risk factors for colonization and the genotypic characteristics of isolates. The screening included Enterobacteriaceae expressing extended-spectrum β-lactamases (ESβLs) and high-level AmpC cephalosporinases, carbapenemase-producing Enterobacteriaceae, Pseudomonas aeruginosa or Acinetobacter baumannii, methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE). Urine samples and rectal, inguinal, oropharyngeal and nasal swabs were plated on selective agar; resistance genes were sought by PCR and sequencing. Demographic and clinical data were collected. Among the LTCF residents, 75.0% (78/104), 69.4% (84/121) and 66.1% (76/115) were colonized with at least one of the target organisms in LTCFs located in Milan, Piacenza and Bolzano, respectively. ESβL producers (60.5, 66.1 and 53.0%) were highly predominant, mainly belonging to Escherichia coli expressing CTX-M group-1 enzymes. Carbapenemase-producing enterobacteria were found in 7.6, 0.0 and 1.6% of residents; carbapemenase-producing P. aeruginosa and A. baumannii were also detected. Colonization by MRSA (24.0, 5.7 and 14.8%) and VRE (20.2, 0.8 and 0.8%) was highly variable. Several risk factors for colonization by ESβL-producing Enterobacteriaceae and MRSA were found and compared among LTCFs in the three Provinces. Colonization differences among the enrolled LTCFs can be partially explained by variation in risk factors, resident populations and staff/resident ratios, applied hygiene measures and especially the local antibiotic resistance epidemiology. The widespread diffusion of MDR bacteria in LTCFs within three Italian Provinces confirms that LTCFs are an important reservoir of MDR organisms in Italy and suggests that future efforts should focus on MDR screening, improved implementation of infection control strategies and antibiotic stewardship programs targeting the complex aspects of LTCFs. Background Life expectancy in Italy is rapidly increasing, with present values of 80.1 years for males and 84.7 for females [1]. Due to the ageing population, long-term care facilities (LTCFs), which provide ongoing skilled nursing care to residents and help meet both the medical and non-medical needs of elderly individuals with a chronic illness or disability, play an important role in the Italian healthcare system. Residents in LTCFs have a variety of risk factors for colonization with multidrug-resistant (MDR) bacteria; therefore, these facilities represent reservoirs of: i) Enterobacteriaceae expressing extended-spectrum β-lactamases (ESβLs), derepressed/acquired high-level AmpC cephalosporinases or carbapenemases, ii) Pseudomonas aeruginosa or Acinetobacter baumannii producing carbapenemases and iii) methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE) [2][3][4]. 
To promote detailed studies of various microbiological aspects related to LTCFs in Italy, the Association of Italian Clinical Microbiologists (Associazione Microbiologi Clinici Italiani; AMCLI) in 2016 has set up a new working group consisting of Clinical Microbiologists (Gruppo di Lavoro per lo Studio delle Infezioni nelle Residenze Sanitarie Assistite e Strutture assimilabili; GLISTer); one of the main objectives of this working group is the study of the distribution and prevalence of MDR organisms in Italian LTCFs and therefore a multicenter point-prevalence survey, including the main MDR bacteria as described above, was performed in 2016 on residents of LTCFs, located in three Northern Italian cities. The aim Rationale and aims of the study were to compare colonization frequencies with MDR bacteria of LTCF residents in three different Northern Italian cities, located in different Italian regions, and to investigate their genotypic characteristics. Moreover, risk factors for colonization were compared between LTCFs and colonization prevalence was correlated with the local epidemiology of invasive MDR isolates. Facilities, patient characteristics and survey design In October-November 2016, a multicenter pointprevalence screening study was conducted in four LTCFs concerning i) Enterobacteriaceae with ESβLs, carbapenemases or high-level AmpCs, ii) P. aeruginosa or A. baumannii with carbapenemases, iii) MRSA and VRE. The four facilities, located in the Northern Italian Provinces of Milan (n = 1), Piacenza (n = 2) and Bolzano (n = 1), offer high skilled 24 h nursing care. Although the overall study was performed over a period of 2 months, the sampling interval in each facility lasted for a maximum of 1 week. All residents of the four LTCFs were eligible to participate, and the study was approved by the Ethics Committees of the three referring hospitals; informed written consent was obtained from the residents or, if they were unable to consent, from their relatives. Microbiological methods Sample processing, microbial identification and antibiotic susceptibility testing were carried out in the clinical microbiology laboratories of the referral hospitals. Microbiological methods for the LTCF screening study in Bolzano were previously described [5]. Similar methods were used in the epidemiological studies of Milan and Piacenza LTCFs, with minor modifications. For the screening of MDR bacteria from LTCF residents in Milan midstream or catheter urine samples were cultured on Oxoid Brilliance™ ESβL plates (Thermo Scientific, UK), applying a 10 μg imipenem (IMP) disc (Oxoid, Thermo Scientific, UK), and on Oxoid Brilliance™ VRE (Thermo Scientific, UK). Inguinal, oropharyngeal and rectal swabs were seeded on Oxoid Brilliance™ ESβL, applying a 10 μg IMP disc, on Oxoid Brilliance™ VRE and on CHROMagar™ MRSA (BD Diagnostics, MD). Nasal swabs were plated on CHROMagar™ MRSA. All plates were incubated at 35 ± 2°C under aerobic conditions for 24-48 h. Isolate identification and antibiotic susceptibility testing were performed by the BD Phoenix™ System (BD Diagnostics, MD), according to European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria [6], using PHOENIX NMIC/ ID402 for non-urinary Gram-negative bacteria, PHOENIX UNMIC/ID403 for Gram-negative isolates from urine cultures, and PHOENIX PMIC/ID88 for MRSA and VRE. The strains were phenotypically confirmed for β-lactamase production by the ESBL+AMPC Screen Kit and the KPC + MBL Confirm ID Kit (Rosco Diagnostica A/S, Denmark). 
Bacterial isolates collected from the LTCF in Milan were screened for blaKPC-, blaVIM-, blaOXA-48- and blaNDM-type genes by the Cepheid GeneXpert System and confirmed by PCR. The Check-MDR CT103 XL array (Check-Points Health B.V., Wageningen, The Netherlands) was used to investigate the bla gene content of a carbapenem-resistant P. aeruginosa strain obtained from an oropharyngeal swab, which tested negative by the previous molecular assays. For gene sequencing, PCR products were purified using the Wizard® SV Gel and PCR Clean-Up System (Promega, Madison, WI, USA) and subjected to double-strand Sanger sequencing. Sequences were analyzed using the BLAST software [21].

Statistical analysis
A significance level of p ≤ 0.05 was used. In-house physicians reviewed hospital records and, using a standard questionnaire, recorded demographic and clinical data as follows: patient age, gender, length of stay, Barthel immobility score, coma, comorbidities (dementia, urinary incontinence, diabetes, cancer, vascular diseases, chronic obstructive pulmonary disease, decubitus ulcer), presence of infection, antibiotic treatment in the preceding 3 months and the presence of indwelling medical devices. The significance of differences in risk factors and colonization proportions was calculated using the proportion comparison test. Logistic regression analyses were performed with colonization of at least one site by ESβL producers or MRSA as the dependent variable, first as univariate and then as multivariate models; predictors with p < 0.05 in the univariate analysis, including the specific LTCF of residence, were entered using stepwise logistic selection. Analyses were performed using the MedCalc® software version 15.11.4 (MedCalc Software, Ostend, Belgium).

The colonization results are summarized in Table 1. In total, ten carbapenemase-producing Enterobacteriaceae were detected: n = 7 KPC-producing K. pneumoniae and n = 1 VIM-1-producing E. cloacae complex were isolated from LTCF residents in Milan, and n = 2 VIM-1 producers (one E. coli and one Citrobacter amalonaticus) from residents in Bolzano. Two carbapenemase-positive P. aeruginosa were isolated from LTCF residents in Piacenza: in one case a blaGES-5 and in the other a blaVIM-like gene was identified. Moreover, two P. aeruginosa isolates collected in Milan and Piacenza presented a blaGES-1 ESβL. Nine blaOXA-23-positive A. baumannii were isolated from two and seven LTCF residents in Milan and Piacenza, respectively. MRSA strains were most frequently isolated from LTCF residents in Milan and Bolzano, whereas VRE isolates were highly prevalent in Milan (n = 21 Enterococcus faecalis), but rare in Piacenza (n = 1 E. faecalis) and Bolzano (n = 1 Enterococcus faecium). Colonization of LTCF residents with ESβL-producing enterobacteria and MRSA was associated with several risk factors in univariate and multivariate analysis (Table 3). In multivariate analysis, the LTCF of residence was an independent risk factor for ESβL (p ≤ 0.03 for all comparisons, except p = 0.53 for the comparison of Milan vs. Piacenza) and MRSA (p ≤ 0.02 for all comparisons) colonization. Risk factors for MRSA colonization were also associated with the residents' gender; significant differences between male (n = 226) and female (n = 114) residents were found for several risk factors.
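The statistical workflow just described (proportion comparisons, univariate logistic regression, then a multivariate model restricted to predictors with p < 0.05) was carried out in MedCalc; the Python sketch below only illustrates the same logic. The data file and column names are hypothetical, while the proportion example reuses the colonization counts quoted in the abstract (78/104 residents in Milan vs. 76/115 in Bolzano).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.proportion import proportions_ztest

# Proportion comparison: residents colonized with at least one target organism,
# Milan (78/104) vs. Bolzano (76/115); counts taken from the text.
stat, p = proportions_ztest(count=[78, 76], nobs=[104, 115])
print(f"Milan vs. Bolzano colonization proportions: p = {p:.3f}")

# Hypothetical resident-level table: one row per resident, binary outcome and predictors.
df = pd.read_csv("ltcf_residents.csv")            # hypothetical file name
outcome = "esbl_colonized"                         # 1 = colonized by an ESbetaL producer
predictors = ["male", "age_over_85", "urinary_catheter",
              "cephalosporin_3m", "physical_disability", "cancer", "ltcf_milan"]

# Univariate logistic regressions: retain predictors with p < 0.05.
selected = [v for v in predictors
            if sm.Logit(df[outcome], sm.add_constant(df[[v]])).fit(disp=0).pvalues[v] < 0.05]

# Multivariate model on the retained predictors (a simple stand-in for stepwise selection).
multivariate = sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)
print(multivariate.summary())
```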
Discussion The study evaluated the degree of colonization with drugresistant bacteria among residents of LTCFs located in three Northern Italian Provinces, finding high colonization of residents in Milan (75.0%), Piacenza (69.4%) and Bolzano (66.1%). Many residents had more than one target organism, underscoring the role of LTCFs as a reservoir for these isolates [2][3][4]. Colonization of LTCF residents with ESβL-producing enterobacteria was highly prevalent in all the surveyed LTCFs (60.5% in Milan, 66.1% in Piacenza and 53.0% in Bolzano), and group-1 CTX-M-type enzymes were highly predominant, especially in E. coli (80-97% of isolates). Notably, about 82% of K. pneumoniae and 32% of P. mirabilis isolates also harbored a bla CTX-M -type gene. In the same Bolzano LTCF, here screened for ESβL-producing enterobacteria, high colonization percentages, equal to 64.0 and 49.0%, were previously found in 2008 [22] and 2012 [23], respectively; the latter survey also screened a second LTCF in the Province of Bolzano, showing a colonization prevalence of 56.0%. In an Italian study carried out in 2006, a colonization prevalence of 54.0% was found in LTCF residents bearing a urinary catheter [24], while a more recent multicenter study, performed in 2015 and involving 12 Italian LTCFs, reported a mean ESβL colonization of 57.3% (range: 32.8-81.5%) [25]. In all these Italian studies, CTX-M enzymes were the predominantly produced ESβLs. The high ESβL colonization rates of > 50% in Italian LTCF residents are paralleled by high ESβL prevalence in invasive E. coli isolates [26]. Generally, ESβL carriage in most European countries is strikingly lower than that found in Italy [4], with exceptions reported from Ireland [27,28] and Portugal [29]. In our screening study, high-level AmpC-producing Enterobacteriaceae were rarely isolated in LTCF residents in Milan and Piacenza, but 24.3% of LTCF residents in Bolzano were colonized by M. morganii expressing a high-level DHA-AmpC phenotype; bla DHAtype genes in LTCF isolates have previously been found in a few E. coli and K. pneumoniae strains from Korea [30], but to our knowledge have not yet been reported in Italian LTCFs. Carbapenemase-producing enterobacteria were not found in LTCF residents in Piacenza, rarely in Bolzano (1.6%) and more frequently in Milan (7.6%). As found in previous studies of carbapenemase-producing Enterobacteriaceae from Bolzano [22,23,31], the VIM-1producing E. coli and C. amalonaticus isolates from residents in this study were also positive for bla SHV-12 . In the present study, all carbapenemase producers from Milan, except an E. cloacae complex isolate expressing a bla VIM-1 gene, had KPC-type enzymes; similar results have been reported by other Italian studies in LTCF residents [25,32,33]. Carbapenemase-producing enterobacteria, especially KPC-producing K. pneumoniae, are epidemically spread in Italy [34] and the emergence of this MDR phenotype in LTCFs is worrying, expanding the reservoir of this health care threat. Nevertheless, as previously summarized [4], carbapenemase-producing Enterobacteriaceae are still rare in Italian LTCF residents; the reasons are probably multifactorial, comprising clinical characteristics of the enrolled residents [35] and the low carbapenem selective pressure in LTCFs. On average, only 1.1% of residents enrolled in our screening study received carbapenems within the previous 3 months (data not shown). Nevertheless, a carbapenemase-producing enterobacteria prevalence of 7.6% (mainly KPC-producing K. 
pneumoniae), reported here for the LTCF in Milan, gives rise to concern and has to be addressed by future hygiene and antibiotic stewardship measures. This study shows the emergence of carbapenemaseproducing P. aeruginosa in LTCF residents in Piacenza, identifying single isolates with bla VIM -type and bla GES-5 determinants. P. aeruginosa expressing bla VIM -type determinants is widely spread in Italy [36], and an outbreak of GES-5-producing P. aeruginosa was reported from a LTCF in Japan [37]. Moreover, the ESβL genes bla GES-1 and bla BEL -like were found in two and one P. aeruginosa isolates, respectively; the latter rarely detected βlactamase was previously recovered in P. aeruginosa strains from Belgium [18]. A. baumannii producing OXA-23 carbapenemases have an epidemic diffusion in Italy [38], reflected in the present study by the isolation of this resistance type from LTCF residents in Milan (1.9%) and Piacenza (5.8%). MRSA colonization prevalence here reported ranged widely in the surveyed LTCFs (5.7, 14.8 and 24.0% in Milan, Piacenza and Bolzano, respectively), similar to other Italian studies [25,39,40]. Varying MRSA colonization prevalence, ranging from close to zero up to levels higher than 37%, has been reported in European studies [4]. Colonization by VRE in the present study was highly variable, ranging from 0.8 to 20.2%. VRE-carriage in European LTCF residents was found to be low, ranging from 0.0-3% [28,41,42]. For Enterobacteriaceae significant differences in colonization frequencies of LTCF residents were found: i) for CTX-M-type ESβL-producing E. coli between Piacenza (highest prevalence) and Bolzano, ii) for high-level AmpCproducing M. morganii (highest prevalence in Bolzano), iii) for carbapenemase producers, with highest prevalence in Milan, iv) for carbapenemase-producing A. baumannii, showing highest prevalence in Piacenza, and v) for MRSA and VRE, most prevalent in Milan. Therefore, no clear picture of general colonization differences can be deduced from overall colonization prevalence data. A variety of risk factors for MRSA and ESβL colonization have previously been reported [4]; many of these have also been analyzed in the present survey. Interestingly, male residents carried a more than double risk for MRSA carriage when compared with female residents, probably because of the higher frequencies of other risk factors in males (administration of any antibiotic within the previous 3 months, hospitalization within the previous 12 months and coma), predisposing men rather than women to MRSA acquisition. Moreover, in our study the trend for an inverse correlation (p = 0.09) between age > 85 years and MRSA prevalence was associated with a significantly lower percentage of male residents > 85 years, compared to females; similar results have been found by other authors [43]. In the present survey, administration of cephalosporins during the previous 3 months resulted to be an independent risk factor for ESβL colonization; the LTCFs in Piacenza registered the highest consumption of cephalosporins, correlating with highest ESβL prevalence in LTCF residents from Piacenza. Other independent risk factors for ESβL colonization were physical disability, the presence of any invasive medical device and cancer. 
Whereas no significant differences were found between residents in the three Provinces for cancer as risk factor, physical disability and the presence of any medical device showed highest prevalence in the LTCF in Bolzano; nonetheless, LTCF residents in Bolzano had the lowest ESβL prevalence in the present screening study. Therefore, further factors may have contributed to the observed differences, comprising staff/resident ratio and practiced hygiene and infection control measures [44]. The LTCF in Bolzano showed the highest staff/resident ratio, and understaffing has been shown to be a risk factor for colonization of LTCF residents by MDR organisms [2]. All of the surveyed LTCFs in the present study follow hygiene, infection prevention and control measures according to guidelines of The Society for Healthcare Epidemiology of America (SHEA) and The Association for Professionals in Infection Control and Epidemiology (APIC) [45]. Nonetheless, the Bolzano LTCF had introduced enforced hygiene measures, according to the World Health Organization guidelines [46], after the 2008 screening study, showing an ESβL colonization prevalence of 64.0% in LTCF residents [22]; colonization frequency decreased significantly to 49.0% (p = 0.02) in 2012 [23], arriving at a slightly higher percentage of 53.0% in 2016, but other factors such as changed case mixes and risk factors may also have contributed to this decrease in ESβL prevalence [23]. Significant differences in antibiotic resistance epidemiology of blood culture isolates, used as a proxy for the general local antibiotic resistance epidemiology, were registered, as derived from European Antimicrobial Resistance Surveillance Network (EARS-Net) data for 2016 [26]. Specifically, we found the following antibiotic resistance data referred to the geographic regions of Milan, Piacenza and Bolzano, respectively: E. coli third generation cephalosporin-resistant: 22 Moreover, the snapshot approach used in this study might lead to the sudden increase in prevalence of a specific resistance phenotype, as shown for high-level AmpC-producing M. morganii detected in 2016 from Bolzano LTCF residents [5], which could be a transient phenomenon. Similarly, the high prevalence of VRE in LTCF residents from Milan could be due to a transitory local epidemic event. Finally, the local circulation of highly transmissible clones, for example ESβL-producing E. coli, KPCproducing K. pneumoniae and OXA-23-producing A. baumannii could contribute to the explanation of the here reported screening results [38,47]. This study has some limitations. First, it has been done in only four LTCFs, located in three different Provinces in Northern Italy, and therefore data may not be extrapolated to other Italian LTCFs with differing characteristics. Second, the number of LTCF residents participating in the study was variable, ranging from 34% in Milan up to 100% in Bolzano. Third, we did not use an enrichment step during the laboratory analysis; this limitation is partially compensated by using 4-5 different specimen types for the screening of MDR bacteria. Fourth, different sample types, types of media and laboratory methodologies have been used in the three laboratories processing the samples from the different LTCFs. Fifth, molecular characterization and typing of isolates in the 2016 study was limited, not including pulsed-field gel electrophoresis (PFGE) and sequence typing (ST) of isolates and therefore not permitting the identification of epidemic clusters. 
Finally, screening of healthcare workers has been done only in one of the enrolled LTCFs [5], but not in the other surveyed facilities. Despite these limitations, the strength of our study is the comparison of colonization prevalence between LTCFs located in three different Provinces, comparing it also with differences in risk factors for colonization and in the local epidemiology of invasive isolates. Conclusions We performed a multicenter point-prevalence study in LTCFs located in three different Provinces in Northern Italy and found high colonization prevalence of LTCF residents for MDR organisms, especially ESβL-producing E. coli. Variability between the different facilities was noticeable also for other MDR organisms. Differences can be partially explained by i) differences in risk factors for colonization by MDR organisms, ii) changes in resident populations and staff/resident ratios, iii) applied hygiene measures and iv) differences in the local epidemiology of antibiotic resistance of clinical isolates. This widespread diffusion of MDR bacteria in LTCFs of three Italian Provinces confirms that these healthcare facilities are an important reservoir for MDR organisms. Future efforts should focus on screening activities, infection control strategies tailored on the complex aspects of LTCFs and implementation of antibiotic stewardship programs. Additional file Additional file 1: Table S1.
4,411.4
2018-03-06T00:00:00.000
[ "Medicine", "Biology" ]
Consecutive 5′- and 3′-amide linkages stabilise antisense oligonucleotides and elicit an efficient RNase H response

Antisense oligonucleotides (ASOs) are short (∼20-mer) chemically modified oligomers that bind to their complementary RNA targets to modulate gene expression at the mRNA level. Thus, ASOs can target proteins that are considered undruggable through conventional approaches. As such, they hold enormous promise for hard-to-treat diseases, as evidenced by a number of recently approved oligonucleotide (ON)-based drugs. Chemical modifications are essential to improve the serum stability and pharmacodynamic properties of ASOs, as unmodified ONs are rapidly digested by nucleases in vivo and suffer from poor cellular uptake and tissue distribution. Whilst there have been considerable advances in modifying ONs at the nucleobase, sugar or backbone, a set of distinct chemical modifications conferring ideal drug-like properties has not yet been achieved. Commonly used ribose modifications include 2′-F, 2′-OMe, 2′-O-(2-methoxyethyl) and locked nucleic acids, all of which have been shown to improve target affinity and serum stability. The most commonly used phosphodiester (PO) mimic is the phosphorothioate (PS) linkage, which is compatible with ribonuclease H (RNase H) activation, a mechanism resulting in degradation of an mRNA upon formation of an ASO:mRNA heteroduplex. PS linkages also improve metabolic stability and enhance pharmacodynamic properties through interactions with plasma proteins. However, unspecific protein binding can contribute to the toxic potential of PS-ASOs [16,17], and the PS linkage is P-chiral, resulting in a mixture of diastereomers (more than half a million isomers in Mipomersen [12,18]). Moreover, inefficient cellular uptake remains a major challenge for ASO therapeutics. Therefore, the investigation of other artificial backbone linkages is urgently needed. Charge-neutral backbone modifications represent an interesting class of PO mimics. Among those, (thio)phosphonoacetate esters [19], phosphotriesters [20] and alkyl phosphonates [21] can enhance cellular uptake by eliminating the PO negative charges. Moreover, incorporation of a single methylphosphonate can eliminate hepatotoxicity of PS-ASOs [22]. Phosphorodiamidate morpholino oligomers (PMOs) combine backbone and sugar modifications and enhance delivery through interactions with scavenger receptors [23], but are not compatible with standard ON synthesis. All of the aforementioned PO mimics also suffer from increased steric complexity due to their P-chiral linkages. The absence of a chiral centre and the well-established solid-phase peptide synthesis methods make the amide internucleoside linkage [24] (AM, Fig. 1A) a promising candidate for backbone surrogates. Within DNA, isolated amides can slightly increase duplex stability with target RNA [24], while consecutive amides were reported to have minimal effects [25]. Amide-modified ONs are stable in serum [24] and the backbone is well tolerated in replication, transcription [26] and RNAi [27], but its application in the context of RNase H activation has not been reported. Here we report the introduction of isolated and consecutive amide linkages into ONs for RNase H-based antisense applications. We discuss possible ASO designs that contain the AM backbone to induce RNA target degradation, and we evaluate the serum stability and cellular uptake of a partially uncharged AM-gapmer ON. Isolated AM internucleoside linkages were introduced via the dinucleotide phosphoramidite 6 (Fig. 1A), with an internal amide, as the building block for standard ON synthesis.
Preparation of phosphoramidite 6 required 5′-tritylation of monomer 1 followed by ester hydrolysis to obtain acid 3. Amide coupling with amine 4 resulted in dimer 5 and subsequent phosphitylation gave 6, which can be used in standard ON synthesis. Phosphoramidite 6 was then used to synthesise ASOs (ON2, ON4, ON5, Fig. 1B) containing isolated AM linkages at various positions. The design of the tested ASOs was based on three consecutive interactions inside the PO binding pocket of RNase H with the ASO.12,29 ON2 contains three isolated AM linkages which are interspaced by two consecutive POs. Efficient RNase H response mediated by ON2 would confirm that the AM linkage could be accommodated inside the binding pocket of RNase H.
The same rationale was applied to ON4 and ON5, whose gapmer design further narrows the window for RNase H activation. ON1 (dT12) and gapmer ON3 with flanking 2′-OMe wings and a central dT6 region served as positive controls. ON1–ON5 were then tested to induce RNase H-mediated degradation of a fluorescein (FL)-labelled target RNA (FL-RNA1). However, no cleavage of the target RNA was observed for ASOs containing the AM linkage (ON2, ON4 and ON5), suggesting that the AM linkage is not tolerated within the PO binding pocket of RNase H. AM-PO chimeras with sections of consecutive AM linkages were synthesised by adapting published protocols (Fig. 2).25,30–32 AM-coupling (i) of acid 7 (ref. 25) to a 5′-amine forms the AM bond, and deprotection (ii) and successive AM-coupling (i) of monomer 7 build up sections of nucleosides consecutively linked through the AM linkage (AM-cycle). Introduction of a 5′-OH was achieved by AM-coupling (i) of monomer 3 and subsequent deprotection (ii). ON synthesis using standard phosphoramidite monomers builds up sections with PO linkages (PO-cycle). Transition from the PO-cycle to the AM-cycle can be achieved by PO-coupling (iii) of commercially available phosphoramidite 8 followed by oxidation (iv) and detritylation (ii) to introduce a 5′-amine as a substrate for the AM-cycle as described before. Using PyBOP and NMM as the coupling agent and base gave minimal side products, and the combined PO- and AM-cycles gave chimeric gapmer ON6 with charge-neutral wings in an overall isolated yield of 16% (Fig. 3 and Fig. S1, S2, ESI). The crude AM-modified ONs gave clean chromatographic traces (Fig. 3A and D) with the main peak corresponding to the desired product, which was confirmed following purification (Fig. 3B and E) and mass analysis (Fig. 3C and F). We rationalise that the absence of a 2′-functionality can increase coupling efficiencies compared to consecutive amide couplings of more challenging RNA-type monomers.31,32 The consecutive AM backbones in ON6 constitute the flanking wings of the ASO, a commonly used strategy to retain RNase H activity while utilising the properties of otherwise RNase H-incompatible modifications, including the 2′-OMe modification33 (ON3, Fig. 1B). However, this strategy has not yet been reported for AM backbones. (Fig. 3 caption, panels C and F: mass analysis. ON6: 5′-TaTaTaTaTpCpCpTpGpApTpApGpTaTaTaTaT-3′; ON9: 5′-FLpTaTaTaTaTpCpCpTpGpApTpApGpTaTaTaTaT-3′, where subscripts a and p denote amide and phosphodiester linkages; *peak corresponding to the desired ON; full traces in Fig. S1 and S2, ESI.) Thus, ON6 was tested in the RNase H assay while ON7 and ON8 (Fig. 4 and Table 1) were used as positive controls to induce degradation of a 5′-FL-labelled complementary target RNA (FL-RNA2). Aliquots of the reaction buffers containing E. coli RNase H, FL-RNA2 and a catalytic amount of ASOs (ON6, ON7 or ON8) were quenched at different time points and analysed by gel electrophoresis (Fig. 4 and Fig. S3, ESI). The gels show that all tested ASOs induce complete target degradation within 30 min at 37 °C. Quantification confirmed that AM-gapmer ON6 activates RNase H as efficiently as the controls ON7 and ON8. This is a clear improvement on previously reported gapmers with charge-neutral wings using peptide nucleic acids (PNAs), which only induce target degradation in a non-catalytic way.34 Watts et al.
reported that improved gapmer designs and optimisation of linkers between PNA and DNA sections can lead to catalytic activity, but the charge-neutral section was limited to only one wing.35 In comparison, AM-gapmer ON6 shows efficient catalytic degradation of the target RNA. No correlation was observed between the cleavage rate of an RNA target by RNase H with ON6–8 and the measured melting temperatures (Tm values, Table 1 and Fig. S4, ESI). The 2′-OMe modifications in gapmer ON7 increased target affinity to RNA by +2.5 °C (+0.25 °C per modification (mod)) whereas the amide backbone in ON6 slightly decreased duplex stability with RNA by −1.5 °C (−0.19 °C per mod) compared to unmodified ON8. This is consistent with the literature for both modifications.11,25 A flexible 4′-endo sugar pucker, a rigid sugar-phosphate backbone, and a duplex conformation between the A- and B-forms are essential for an efficient RNase H response.36 In this context, we investigated the structural changes in duplexes induced by the AM backbone. We did not observe significant perturbation of the duplex structures formed by ON6 with DNA or RNA targets when compared to ON8 (an unmodified DNA ON) in circular dichroism experiments (Fig. S5, ESI), which is consistent with the high efficiency of ON6 in inducing an RNase H response. Enzymatic stability of ASOs is important to ensure optimal biological half-life and therapeutic efficacy. The terminal amides in ON6 result in enhanced nuclease resistance and a longer serum lifetime in foetal bovine serum (FBS) compared to ON7 and ON8, which were both rapidly degraded (Fig. S6, ESI). As previously reported,37 the 2′-OMe modifications in ON7 slightly extend serum lifetime but are not as effective as the amide bonds in ON6. Potential enhanced cellular uptake of the AM-gapmer design was evaluated by confocal laser scanning microscopy (CLSM). For this experiment, ON6 was fluorescently labelled at the 5′-end to give ON9 (Fig. 3D–F) in which eight PO linkages were replaced by charge-neutral AM bonds and the ON has a charge-to-linkage ratio of 0.56 (including the PO linkage between FL and the 5′-end). No solubility issues were encountered with this oligomer. ON10 and ON11 represent the fluorescently labelled derivatives of ON7 and ON8 respectively, in which all 17 internucleoside linkages consist of negatively charged PO bonds, with an additional PO bond for the attachment of FL (Fig. 5). HeLa cells were incubated with 5 μM ON9–ON11 in serum-free medium and analysed by CLSM after fixation. AM-gapmer ON9 showed increased intracellular accumulation compared to ON10 and ON11, for which a distinct fluorescent signal inside the cell was absent (Fig. 5 and Fig. S7 for a general view, ESI). The localised fluorescent signals observed for ON9 (Fig. 5A) suggest that the uptake mechanism leads to its partial entrapment within subcellular compartments. Co-incubation of ON9 with fluorescently tagged epidermal growth factor (EGF) shows that only a low number of FL signals were co-localised with stained endosomes (Fig. S8, ESI). Moreover, a clear increase of punctate fluorescence for ON9 was only detected after incubation for 16 h, while receptor-mediated endocytosis is known to happen within <30 min. Together, our preliminary results suggest that ON uptake may be occurring through fluid-phase endocytosis, which results in unspecific uptake from the extracellular fluid into vacuoles, while their fate upon maturation can be highly variable.38
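As a quick sanity check of the per-modification Tm shifts and the charge-to-linkage ratio quoted above, the following back-of-the-envelope sketch reproduces the numbers; the residue and linkage counts (10 2′-OMe residues in ON7, 8 amide linkages in ON6/ON9, 18 linkages including the FL linker) are assumptions inferred from the sequences, not taken verbatim from the paper.

```python
# Back-of-the-envelope check of the Tm shifts per modification and the
# charge-to-linkage ratio quoted in the text (counts assumed from the sequences).
n_ome_mods = 10        # assumed 2'-OMe residues in ON7 (5 per wing)
n_am_links = 8         # assumed amide linkages in ON6 / ON9 (4 per wing)
n_links_total = 18     # 17 internucleoside linkages + 1 linkage to the 5'-FL label

print(2.5 / n_ome_mods)                               # ~ +0.25 C per 2'-OMe modification
print(-1.5 / n_am_links)                              # ~ -0.19 C per amide modification
print((n_links_total - n_am_links) / n_links_total)   # ~ 0.56 charge-to-linkage ratio
```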
These observations are also in agreement with reports that fully charge-neutral methyl phosphonate-modified ASOs enter the cell via fluid-phase endocytosis after 10–12 hours of incubation.21 Similar results showed enhanced cellular uptake of ASOs in which half the charge was neutralised by phosphotriesters.19 In conclusion, different ON chemistries and their effect on pharmacodynamics, pharmacokinetics, cellular delivery and toxicity are poorly understood, and backbone modifications have been mainly focused on the PS linkage, leaving other chemistries underexplored. The recently reported applications of charge-neutral backbones to enhance cellular uptake19–21 and mediate toxicity22 emphasise the importance of exploring other artificial backbone structures. Here we report the partial replacement of the natural ON PO backbone by an amide internucleoside linkage and its effects on antisense activity, target engagement, serum stability and cellular uptake. We show that the AM linkage is well tolerated in the wings of an ASO gapmer design while retaining an efficient RNase H response. The stability of the formed duplex is only slightly decreased (−0.19 °C per mod) compared to unmodified DNA, and this does not compromise target engagement and RNA cleavage efficiency. The ease of chimeric gapmer design and synthesis with reduced steric complexity, retained catalytic efficiency, high serum stability and potentially enhanced cellular uptake all add desirable drug-like properties. Thus, the AM backbone mimic represents a valuable candidate for further development. Conflicts of interest: there are no conflicts to declare. (Fig. 5 caption: HeLa cells were incubated with 5 μM ON9–ON11 in serum-free medium for 16 h at 37 °C. Cells were fixed and nuclei stained with 4′,6-diamidino-2-phenylindole (DAPI) before CLSM analysis. Scale bar: 25 μm. ON9: 5′-FLpTaTaTaTaTpCpCpTpGpApTpApGpTaTaTaTaT-3′; ON10: 5′-FLpUpUpUpUpUpCpCpTpGpApTpApGpUpUpUpUpU-3′, where U denotes 2′-OMe-U; ON11: 5′-FLpTpTpTpTpTpCpCpTpGpApTpApGpTpTpTpTpT-3′.)
3,647
2020-04-15T00:00:00.000
[ "Chemistry", "Medicine" ]
Ermakov-Pinney equation for time-varying mass systems We extend the Fring-Tenney approach of constructing invariants of a constant-mass time-dependent system to the case of a time-dependent mass particle. From a coupled set of equations described in terms of guiding parameter functions, we track down a modified Ermakov-Pinney equation involving a time-dependent mass function. As a concrete example we focus on an exponential choice of the mass function. Introduction For some time now, there have been several efforts to study time-dependent quantum systems from different perspectives. One such consists in the use of optical traps (see for some representative articles [1][2][3][4][5]). In the non-relativistic context, the governing equation of concern is the time-dependent Schrödinger equation with the explicit presence of an optical potential. It is usually taken to be in a separable form which is made up of a time-dependent modulation term together with some trapping shape described by a spatial profile [6,7]. Interest in exploring the dynamics of time-dependent quantum systems started with the method of seeking Hermitian invariants [8][9][10] for them. Recently, Fring and Tenney [11,12] made a systematic study of time-independent approximations for a class of time-dependent optical potentials. Their general strategy was to write down an approximate Lewis-Riesenfeld scheme [9] and explore, among other things, time-dependent potentials with a Stark term. While constructing time-independent invariants, even for non-Hermitian systems [12][13][14], one of the central issues was to derive the highly nonlinear Ermakov-Pinney (EP) equation [15,16]. Interestingly, they could obtain a regular solution to it, determine the electric field and finally construct the invariants. Note that they concentrated only on a constant-mass system. For other approaches towards calculating dynamical invariants we refer to [17]. Historically, derivation of the EP equation (see [18] in which some symmetry aspects of this equation were explored) was first undertaken in an early work of Ermakov [15] while studying integrability of certain classes of ordinary second-order differential equations, namely ẍ + ω²(t)x = h²/x³. We see that the EP equation describes an oscillator in the presence of an inverse-cube force term. Much later, Pinney [16] provided the solution and gave a complete representation of it in the form x(t) = (A r² + 2B rs + C s²)^(1/2), where r, s are both functions of t and stand as the independent solutions of the standard harmonic oscillator ẍ + ω²x = 0, while A, B, C satisfy the constraint AC − B² = h²W, W being the Wronskian. The purpose of this article is to pursue an analogous procedure to that of [11] but consider a time-dependent mass instead of a constant one. The variation of the mass with time is a well-studied topic in problems of unstable particles [19,20] in the realm of particle physics. A modification to the form of the Dirac equation in seeking invariants has also been considered for time-dependent masses in Dirac-motivated relativistic systems [21]. Cases of both time-dependent and position-dependent mass have also been explored in the literature (see, for instance, [22,23]). In the following, we basically utilize invariants to solve the time-dependent Schrödinger equation when the guiding time-dependent potential is factorizable in terms of a Gaussian profile apart from a time-dependent piece. We see that a modified EP equation emerges as an auxiliary equation whose form differs from what is obtained in the constant mass case.
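To make Pinney's construction concrete, the following minimal numerical sketch (not part of the paper) checks it for the classic constant-frequency EP equation ẍ + ω²x = h²/x³. It assumes the standard normalisation of the constraint, AC − B² = h²/W² (the constraint as quoted in the text may differ by convention), and uses r = cos(ωt), s = sin(ωt)/ω as the independent oscillator solutions (Wronskian W = 1); the numerical values are illustrative only.

```python
# Numerical check of Pinney's solution for the classic Ermakov-Pinney equation
# x'' + w^2 x = h^2 / x^3 with constant w (assumed normalisation AC - B^2 = h^2 for W = 1).
import numpy as np
from scipy.integrate import solve_ivp

w, h = 1.3, 0.7
A, B = 2.0, 0.4
C = (h**2 + B**2) / A                      # enforce AC - B^2 = h^2

def pinney(t):
    r, s = np.cos(w * t), np.sin(w * t) / w
    return np.sqrt(A * r**2 + 2 * B * r * s + C * s**2)

def ep_rhs(t, y):                          # y = [x, x']
    x, xd = y
    return [xd, -w**2 * x + h**2 / x**3]

t_eval = np.linspace(0.0, 10.0, 500)
sol = solve_ivp(ep_rhs, (0.0, 10.0), [pinney(0.0), B / np.sqrt(A)],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)
# The direct integration should agree with Pinney's closed form to numerical tolerance.
print(np.max(np.abs(sol.y[0] - pinney(t_eval))))
```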
However, its highly nonlinear nature precludes us from finding any exact analytical solution. We have therefore gone for a specific case-study of the mass profile as given by an exponential representation. It reduces the basic equations to a tractable form, thus facilitating a simple numerical assessment. This work is organized as follows: In section 2, we highlight the consistency conditions that emerge when the basic equation for an invariant Hermitian operator Î is subjected to the Hamiltonian obeying the time-dependent Schrödinger equation. In section 3 we derive a modified EP equation by installing time-dependence in the mass. Such an equation constitutes our subsequent point of analysis, which is taken up numerically. In Section 4 we present a particular class of solution corresponding to the exponential choice of the mass. Finally, in section 5, we present a summary. Constraining equations for time-dependent coefficients Let us begin with the time-dependent non-relativistic Schrödinger equation, which in one dimension reads i ∂ψ(x,t)/∂t = [−(1/2m(t)) ∂²/∂x² + V(x,t)] ψ(x,t), where m = m(t). We work with natural units ℏ = c = 1. For a general quantum mechanical Hamiltonian Ĥ, the time evolution operator U(t, t₀) obeys [24] i ∂U(t, t₀)/∂t = Ĥ U(t, t₀), U(t₀, t₀) = I, t > t₀ (2.2). If the Hamiltonian does not depend on time then the solution of the above equation is simply U(t, t₀) = exp[−iĤ(t − t₀)], where the Hamiltonian is considered to be Hermitian. Against the background of the evolution equation obeying (2.2), we note that the time-dependent invariant operator Î(t) satisfies dÎ/dt = ∂Î/∂t − i[Î, Ĥ] = 0 (2.4), where the Hamiltonian is time-dependent, Ĥ = Ĥ(x, t). As a concrete model of our analysis we consider the representative Hamiltonian (2.5), where the mass and the potential have been taken to be time-dependent. The latter is assumed to be factorizable, V(x, t) = κ(t)V(x). Additionally, a Stark-like term is included in (2.5), which points to the presence of an electric field E(t). The angular frequency ω is taken to be constant. The choice of the representation for the invariant is flexible. In the following we adopt for Î a form built from x, p and {x, p} = xp + px, in which the coefficient functions α, γ, δ and ε are time-dependent functions. Of course, these are distinct from m(t) or κ(t) and have to be solved for through the consistency equations. We now address the simplest class of time-dependent Hamiltonian as induced by a varying-mass harmonic oscillator potential. In such a case κ corresponds to m(t) while V(x) = ½ω²x². Substituting in (2.4) we obtain, through the use of (2.5), the commutator [Î, Ĥ]. When substituted in (2.4), the set of conditions (2.8)–(2.11) is easily obtained. We also have an additional relation γ = 2mαE (2.12), along with a closed form for E(t) given by (2.13). We should mention that in the constant-mass case, the set of equations (2.8)–(2.12) reduces to the corresponding ones in [11]. Ermakov-Pinney equation and its modification We aim at writing down a modified EP equation in the presence of m(t). Redefining α = σ², the remaining parameters read as in (3.1)–(3.3). Introducing τ ≡ τ(σ) as an arbitrary function of σ, defined through (3.4), it is clear that (3.4) stands as the signature of a dissipative EP equation (3.5) for the time-varying mass case. We immediately see that when m is stationary and τ is a constant equal to λ², the standard equation is recovered. Identifying the EP equation in the extended form (3.5) constitutes the central result of this note. Although not exactly coincident with (3.5), a dissipative EP equation (3.7) was also derived by Fring and Tenney [12], but in a different context, where f is a time-dependent parametrizing function.
f seems to play the analog of the mass function, but notice that while the coefficient of σ in (3.5) is a constant, the same coefficient in the corresponding (3.7) is a function of time. Our next endeavour is to analyse (3.6) by taking its time-derivative and comparing with (2.10). Towards this end we obtain a pair of equations, where (3.1) is used. Equating them, we straightforwardly obtain the differential equation (3.10) for τ, where the prime denotes a derivative with respect to σ. Solving the modified EP equation Since τ = τ(σ), let us assume a monomial form for it, with τ₀ a constant assumed to be > 0. The EP equation, in principle, admits complex solutions of a general form in which ξ and η are linked by a set of coupled equations, with Ω² = ω² − τ₀. These are very complicated equations to tackle. So we concentrate on real solutions only and in this regard consider a mass function m(t) of the exponential type m(t) = m₀e^(−qt), q > 0 (4.5), where its damping character is in keeping with [21,22] but with respect to the t-variable. For σ we find the solution σ(t) = e^(qt/2) (4.6), subject to a constraint on τ₀; τ is insensitive to the sign of q. For completeness we note that δ turns out to be a constant while the coefficient function γ and the electric field E show a decaying behaviour e^(−qt/2). We display the behavior of the mass variation in Figure 1 corresponding to different values of ω. We took the input value τ₀ = 0.01. The results reflect a typical exponential trajectory of m. Summary In this article we considered the possibility of introducing a time-dependent mass while constructing an invariant for a time-dependent quantum Hamiltonian. Assuming a time-dependent harmonic oscillator with a Stark-like Hamiltonian, we were able to derive the invariant conditions in their entirety. We then obtained the time-dependent mass version of the EP equation, which includes an additional term as compared to the constant-mass case. Because of its strongly nonlinear character we inquired into simple parameter variations to assess the mass function. For the specific case of a damping mass with a certain input value of τ₀, its graphical behavior was plotted over a wide range of t. Acknowledgment We are indebted to Andreas Fring and Rebecca Tenney for pointing out errors in the previous version of the draft.
2,207.8
2021-03-19T00:00:00.000
[ "Physics" ]
Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. Introduction The provision to pedestrians of location and orientation information, which in some way can be used for simplifying the task of reaching a particular destination, is called pedestrian navigation. The versatility of environments in which the pedestrian navigation system has to work, differentiates it from land vehicle, sea vessel and aircraft navigation. Of all the possible environments in which the PNS has to work, the urban and indoor environments are the most challenging ones. These challenges arise from the amount and reliability of information available for estimating the navigation parameters. This diversity of environment for pedestrian navigation imposes the use of self-contained sensors for navigation that can provide users with navigation parameters irrespective of the availability of external aids. Pedestrian Navigation Techniques Pedestrian navigation can be either accomplished using some man-made information source or by measuring the planetary/universal forces. Radio Frequency (RF) signals are the most widely used man-made information sources for pedestrian navigation [1][2][3][4]. The main problem of RF information sources with respect to pedestrian navigation is their reliable availability in all environments. This leads one to the use of planetary/universal information sources for estimating the required parameters for pedestrian navigation. Systems incorporating sensors that can measure planetary/universal forces that can be used for navigation purposes are known as self-contained navigation systems. A well known navigation methodology, namely an Inertial Navigation System (INS), is normally integrated with some extra sensors for such systems. 
The sensors used for INS mechanization are gyroscopes and accelerometers. INS can be very accurate and reliable depending on the quality of sensors used, but, in the context of pedestrian navigation where cost, size and power consumption dictate sensor selection, these systems are of the lowest accuracy and reliability [5]. In order to improve the navigation solution of a self-contained system, other aiding sensors/information sources are utilized, which can be categorized into the use of additional physical measurements or the specificities of the human walk, i.e., its biomechanics. The use of sensors additional to gyroscopes and accelerometers constitutes the first category. For example, with magnetometers, the measure of the Earth's magnetic field can assist the estimation of the direction of motion, which can further be used for estimating errors associated with gyroscopes [6]. But as one moves into urban and indoor environments, the Earth's magnetic field gets perturbed by the man-made infrastructure, rendering it rather useless for absolute orientation estimates. The second category comprises the use of special constraints dictated by the biomechanical description of the user's dynamics [7,8]. These include locomotion models, ZUPT and ZARU, which can take place whenever the user is stationary, or the detection of certain gait events that can be used in conjunction with some gait modeling parameters for estimating the displacement per step. The latter is also known as PDR and is the primary research focus of the pedestrian navigation research community [9]. The main limitation of these techniques is the placement of sensors on the pedestrian's body. Researchers have concluded that more than 87% of the time, the user normally holds the smart-phone in hand while using it for navigation applications [10]. As it is highly unlikely to have zero acceleration or angular rate periods with the sensor block held in hand, this results in very few ZUPT or ZARU periods, thus rendering the use of special constraints for sensor error estimation rather useless for smart-phone based pedestrian navigation. This leaves one with the PDR approach, which, although it can provide promising results for propagating location information in various environments, is unable to solve the orientation problem, which causes a third-order error growth in position estimates [11]. As is evident from the above discussion, a number of approaches can be taken to target pedestrian navigation. Only one approach seems feasible for seamless navigation in all pedestrian navigation environments, namely the PDR approach, which utilizes self-contained navigation systems. Assuming successful detection of gait events, the main limitation of PDR is the orientation estimation. A novel idea has emerged for mitigating the gyroscope errors and estimating the orientation parameters using the Earth's magnetic field even when it is perturbed by man-made infrastructure [12]. Consequently this new algorithm provides reliable and accurate orientation estimates in diverse and challenging pedestrian navigation environments. Section 2 introduces the Earth's magnetic field, its usefulness for orientation estimation, and the effects of indoor environments on it. Section 3 describes different approaches feasible for pedestrians' attitude estimation, whereas Section 4 details an Extended Kalman Filter (EKF) based estimator required for mitigation of attitude and sensor errors.
The novel technique utilizing the perturbed magnetic field for estimating attitude and sensor errors is introduced in Section 5, and Section 6 details the measurement error models required for the EKF. Section 7 addresses the statistical analysis of the proposed mitigation technique. Finally, Section 8 is dedicated to the experimental assessment of the proposed algorithm in a real world environment. The Earth's Magnetic Field The Earth's magnetic field is a naturally occurring planetary phenomenon, which can be modeled as a dipole and follows the basic laws of magnetic fields summarized and corrected by Maxwell [13,14]. It is a three-dimensional vector that originates at the positive pole of the dipole, the magnetic South pole, and ends at the magnetic North pole. For centuries, the Earth's magnetic field has been successfully used for navigation purposes. With the advancements in sensor technology, this field can now be precisely measured with the help of a sensor commonly known as the magnetometer. With the proper transformation of the field components to the horizontal plane and knowing the declination angle specific to the measurement area and time, a simple trigonometric operation estimates the geographic heading. The magnetic field vector is elaborated in Figure 1. Here H is the horizontal field component. The angle between True North and H is called the declination angle D, whereas the angle between the magnetic field B and the horizontal plane is called the inclination angle I. Bx, By and Bz are the three orthogonal magnetic field components. Figure 1. Earth's magnetic field in Cartesian coordinate system. Heading Estimation Using the Earth's Magnetic Field The orthogonal components of the horizontal magnetic field are used for estimating the heading. Thus the field components must first be transformed to the local level in order to find the horizontal field component of the measured Earth's magnetic field. For estimating the heading with respect to True North instead of the magnetic North, the declination angle also needs to be predicted using one of the Earth's magnetic field models [15]. After resolving the magnetic field to the local level and estimating the declination angle, a simple trigonometric relationship is used for estimating the heading from the measured Earth's magnetic field (Equation (1)). Effects of Indoor Environment on Earth's Magnetic Field Modeling of the Earth's magnetic field is possible in an indoor environment in the presence of magnetic dipoles known as magnetic perturbations [12]. These perturbations are due either to electromagnetic devices or magnetization of man-made structures in the presence of an external magnetic field, which is mostly constituted of the Earth's magnetic field. Figure 2 depicts the heading estimates for a clean and a perturbed environment, while the user walks along a straight line path. A perturbation source is modeled at around the 20 s time mark. Here it can be observed that in the close vicinity of a perturbation source, the heading estimate deviates from the nominal heading with respect to the Earth's magnetic field, which results from the change in the local magnetic field components due to the perturbation source. If one uses the magnetic heading estimates obtained from the perturbed magnetic field, the effects on the pedestrian navigation solution would be adverse. Surveys of the Earth's magnetic field in such environments show that these perturbations can deviate the heading estimates by up to 130°, which motivates techniques that can effectively utilize the perturbed field. Attitude Estimation The attitude describes the orientation of the sensor block with respect to a reference frame and can, in principle, be estimated from non-collinear vector measurements [19].
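Before turning to attitude estimation, the heading computation referenced as Equation (1) above can be illustrated with a minimal sketch. This is not the paper's exact formula: the axis convention (x forward, y right, z down after levelling) and the sign of the declination correction are assumptions, and the numerical values are illustrative only.

```python
# A minimal sketch of magnetic heading from a levelled field vector plus declination.
import numpy as np

def magnetic_heading(b_level, declination):
    """b_level: field resolved in the levelled frame [Bx, By, Bz] (x forward, y right, z down);
    declination: D in radians, positive East of True North."""
    bx, by, _ = b_level
    heading_mag = np.arctan2(-by, bx)                 # heading of the forward axis w.r.t. magnetic North
    return np.mod(heading_mag + declination, 2 * np.pi)  # refer the heading to True North

# Example: sensor roughly facing magnetic North with a small right-hand field component,
# local declination assumed to be 14 deg East.
print(np.degrees(magnetic_heading([19.8, -2.1, 45.0], np.radians(14.0))))  # ~20 deg
```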
From a pedestrian navigation perspective, mainly two vector measurements are available for this, namely the Earth's magnetic field vector and the Earth's gravity vector. But the navigation environments as well as the user dynamics limit the use of these vectors for estimating the attitude. As mentioned earlier, indoor environments are contaminated with magnetic field perturbations. Also, due to the walking dynamics of a pedestrian, with the sensor block in hand, the specific forces measured by the accelerometers will not correspond to those of the gravity vector resolved in the body frame. Therefore the approach of using only non collinear vector measurements for attitude estimation is not feasible for pedestrian navigation. Angular Rate Measurements As the vector measurements are not available for estimating the attitude at every epoch for pedestrian navigation, propagation of the attitude in time is necessary. For this purpose, an inertial sensor providing angular rate measurements, namely the rate gyroscope, can be used [20]. Considering the advantages of representing the rotations using a quaternion [12], the quaternion derivative is used for estimating the attitude using angular rate measurements herein, which is given by: where are the three angular rate measurements in the body frame obtained using the rate gyroscopes. q 1 , q 2 , q 3 and q 4 are the four elements describing the quaternion q. In Equation (3), the negative angular rate vector emphasizes the assumption that the transport rate (rate of change of navigation frame) is negligible [20]. This assumption is valid for pedestrian navigation applications involving traveled distances of a few kilometers. As the angular rates required for computing the quaternion derivative are provided by the MEMS rate gyroscopes, the errors associated with them introduce errors in the estimated attitude. The gyroscope errors as well as the attitude errors need to be estimated and mitigated for estimating reliable attitude angles in pedestrian navigation environments using angular rate measurements. Extended Kalman Filter for Attitude and Angular Rate Error Estimation Attitude and its error estimation constitute a non linear problem [21], which can be solved by using a number of estimation approaches. In this article, an extension of a Kalman filter based estimator, namely an Extended Kalman Filter (EKF) is used [22]. The main purpose of this estimator is to model the effects of the gyroscope errors on attitude estimates and use the magnetic field information as corrective measurements to estimate the attitude errors in general and gyroscope errors in particular, which can then be compensated from the present epoch and remodeled for the proceeding ones until new measurements are available. Detailed derivation of the said estimator can be found in a number of books [6,[20][21][22][23][24][25]. Only the appropriate states, measurements and their respective system and measurement error models are detailed herein. The State Vector As only the attitude/orientation estimation for pedestrian navigation is targeted herein, the main states to be estimated are the three attitude angles. The attitude is primarily determined using the angular rates obtained using the rate gyroscopes. The deterministic errors associated with this sensor can be compensated for using the calibration parameters [12], leaving the time varying biases as the residual errors. Thus the state vector becomes: Here, φ is the roll angle, θ is the pitch angle and Ψ is the heading angle. 
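The quaternion propagation just described can be sketched numerically as below. The scalar-first quaternion layout, the sign convention of the rates and the simple first-order integration step are assumptions for illustration and do not reproduce the paper's Equation (3) verbatim.

```python
# A minimal sketch of attitude propagation from gyroscope rates via the quaternion derivative.
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two scalar-first quaternions [qw, qx, qy, qz]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def propagate(q, omega_body, dt):
    """omega_body: corrected gyro rates [wx, wy, wz] in rad/s; simple first-order step."""
    q_dot = 0.5 * quat_mult(q, np.concatenate(([0.0], omega_body)))
    q_new = q + q_dot * dt
    return q_new / np.linalg.norm(q_new)      # re-normalise to keep a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])            # identity attitude
for _ in range(250):                          # 1 s of a 90 deg/s yaw at 250 Hz
    q = propagate(q, np.array([0.0, 0.0, np.radians(90.0)]), 1.0 / 250.0)
print(q)                                      # ~ [cos(45 deg), 0, 0, sin(45 deg)]
```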
The time varying biases associated with the three gyroscopes are represented by , and . System Error Model The perturbed state vector representing errors in Equation (4) is given by: where is the attitude error vector in the navigation frame, which defines the small angle rotations to align the estimated local level frame to the actual one. are the errors in the inertial sensor bias estimates. As the attitude errors are periodically updated using the measurement vectors, the components of give the small angle representation of the attitude errors [20]. A small angle transformation matrix can then be used to compensate for the attitude errors from the predicted rotation matrix given by: where is the skew symmetric matrix for the vector and the circumflex accent on the rotation matrix means that it has been compensated for the attitude errors. From Equation (6), it can be shown that: The derivative of Equation (7) is: The differential equation for the rotation matrix is given by: where is the skew symmetric matrix of the angular rate vector obtained from the rate gyroscopes. Substituting Equation (9) in Equation (8) and simplifying gives: The matrix is defined by the uncompensated gyroscope measurements. Let be the skew symmetric matrix for the gyroscope measurement errors. By definition of the different matrices, can now be written as: Substituting Equation (11) in Equation (10), simplifying and neglecting the second order terms to get a relationship for yields: (12) which, in vector form, becomes: In Equation (13): (14) where is the true angular rate vector and is the angular rate estimates obtained after compensating for the sensor errors, which are estimated using calibration as well as stochastic modeling. The estimated angular rate vector can be written as: (15) where is the raw angular rate measurement vector and is the estimate of the time varying bias vector obtained from the estimator (EKF). Substituting for the gyroscope model, with being the wideband noise, and using Equation (15) in Equation (14), one gets: (16) which after simplification becomes: (17) where is the perturbation of the time varying gyroscope biases. Equation (13) now becomes: (18) The time varying gyroscope bias is modeled as an exponentially correlated noise term, resulting in the derivative of perturbations in the time varying gyroscope bias given by [20]: (19) where is the correlation time and is the noise vector for the stochastic modeling of the time varying gyroscopes' biases. Equations (18) and (19) lead to the following system dynamics model: where F is the dynamics matrix, G is the shaping matrix and w is the system noise matrix as defined in literature on Kalman filter. Measurement Error Models The measurements used for compensating the errors associated with the attitude vector in general and gyroscope biases in particular are herein solely based on the magnetic field vector measurements. In order to utilize the clean as well as perturbed magnetic field measurements, a novel technique for estimating the sensor frame's angular rates using a tri-axis magnetometer was developed and is detailed in the following section. Quasi-Static Field (QSF) Based Attitude and Angular Rate Measurements (Patent Pending) The novel idea for estimating attitude and gyroscope errors using magnetic field measurements for pedestrian navigation environments mainly involves detecting quasi-static total magnetic field periods during pedestrian motion and utilizing them as measurements for estimating attitude and gyroscope errors. 
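Before detailing the QSF measurements, the time-update (system) model summarised above, with attitude errors driven by gyroscope bias errors and the biases modelled as exponentially correlated noise (Equations (18)-(19)), can be sketched as follows. The state ordering, matrix signs and parameter values are assumptions for illustration rather than the paper's exact matrices.

```python
# A minimal sketch of the continuous-time EKF system model for a 6-state vector
# [attitude errors (3); gyroscope biases (3)], with first-order Gauss-Markov biases.
import numpy as np

def system_matrices(C_bn, tau_c):
    """C_bn: 3x3 body-to-navigation rotation matrix; tau_c: bias correlation time [s]."""
    F = np.zeros((6, 6))
    F[0:3, 3:6] = -C_bn                 # attitude-error rate driven by the gyro bias errors
    F[3:6, 3:6] = -np.eye(3) / tau_c    # exponentially correlated (Gauss-Markov) biases
    G = np.zeros((6, 6))
    G[0:3, 0:3] = -C_bn                 # gyro wideband noise enters the attitude errors
    G[3:6, 3:6] = np.eye(3)             # driving noise of the bias process
    return F, G

# First-order discretisation over one sample interval for the EKF time update.
F, G = system_matrices(np.eye(3), tau_c=300.0)
dt = 0.04
Phi = np.eye(6) + F * dt                # approximate state transition matrix
print(Phi)
```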
Indeed when the local magnetic field is quasi-static, the rate of change of the magnetic field is combined with the rotational rate of change of the inertial device generating an estimated gyroscope error, which can be further used to correct for time-varying inherent gyroscope errors [12]. Contrary to existing solutions, this technique is working in magnetically perturbed environments as long as the field is identified as constant over a selective period of time. Further, in order to successfully detect the presence of QSF periods, proper pre-calibration of the magnetic field sensors is necessary. This is achieved by utilizing a calibration algorithm developed by the authors [26]. The QSF detector, developed using statistical signal processing techniques, is now presented. The Earth's magnetic field, though a good source of information for estimating heading outdoor, suffers severe degradations in the indoors caused by magnetic field perturbations [16]. These perturbations are of changing magnitudes and directions, which induce random variations in the total magnetic field. These variations render the magnetic field information useless for absolute orientation estimation with respect to the magnetic North in indoor environments. Although the magnetic field indoor is not spatially constant due to changing perturbation sources, depending on the pedestrian's speed and surroundings, it is possible to have locations as well as short periods (user not moving) when the perturbed magnetic field is constant in magnitude as well as in direction. The rate of change of the total magnetic field in such situations will be ideally zero. It is possible to have very slight changes in the magnitude and direction of the total magnetic field (due to sensor noise) that can still be considered as quasi-static. Thus information to be considered for detecting a QSF is the rate of change of the total magnetic field , which is referred to as the field gradient and is computed using: where is the magnetic field at the current epoch, is the magnetic field at the previous epoch and Δt is the measurement update rate. For a window of size N, a QSF detector will detect a static field if: Let the hypothesis for a non-static field be H 0 and that for a quasi-static field be H 1 respectively. The Probability Density Functions (PDFs) associated with these two hypotheses are: (23) The rate of change of the total magnetic field is also contaminated by white Gaussian noise , which, when modeled with the measurements, gives: (24) where y k is the information to be tested for H 0 or H 1 . Under H 0 , is the unknown parameter required to describe the signal completely. Therefore, for the two hypotheses, is defined as: where Ω l N: l 1 with N N and n N. As the complete knowledge about is unknown for H 0 , the PDF in this case is given by: Let the Maximum Likelihood Estimator (MLE) for the unknown parameter in case of H 0 be , which is given by the mean of the signal as: Now the PDF for becomes: For hypothesis , the rate of change of the total magnetic field is known (it will be zero), therefore the PDF in this case becomes: The Generalized Likelihood Ratio Test (GLRT) for detecting a quasi-static field is given by: Taking the natural log on both sides and simplifying yields: where is the threshold for QSF detection. 
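A minimal sketch of the detector idea follows: compute the gradient of the total field over a short sliding window and declare a quasi-static period when a windowed statistic stays below a threshold. The window size and gradient-noise variance mirror the values quoted later in the text, but the statistic shown here is a simplified energy test rather than the exact GLRT of Equation (32), and the threshold is illustrative.

```python
# A simplified quasi-static field (QSF) detector on the total-field gradient.
import numpy as np

def qsf_periods(b_total, dt=0.04, window=3, var_noise=0.057, gamma=1.0):
    """b_total: 1-D array of total-field magnitudes [uT]; returns a boolean QSF flag per epoch."""
    grad = np.diff(b_total, prepend=b_total[0]) / dt      # field gradient, Delta B / Delta t
    flags = np.zeros(len(b_total), dtype=bool)
    for k in range(window - 1, len(b_total)):
        win = grad[k - window + 1:k + 1]
        flags[k] = np.mean(win**2) < gamma * var_noise    # quasi-static if the gradient energy is small
    return flags

# Example: a static (possibly perturbed) field with sensor noise, then the user starts moving.
field = np.concatenate([48.0 + 0.002 * np.random.randn(50),
                        48.0 + 3.0 * np.sin(np.linspace(0.0, 6.0, 50))])
print(qsf_periods(field).sum(), "epochs flagged as quasi-static")
```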
Use of QSF Detected Periods for Attitude and Gyroscope Error Estimation Once the QSF periods are detected during pedestrian motion, the next step is to utilize the magnetic field information during such periods for estimating the attitude angles and the gyroscope errors. This section derives the equations required for using QSF magnetic field measurements, even perturbed, in the EKF developed in Section 4. It principally consists of a measurement error model that is used for updating the state vector of the navigation filter. QSF Measurement Error Model Using the Local Magnetic Field Let the magnetic field measurement in the sensor frame at the start, i.e., the k th epoch, of the quasi-static field period be given by: Considering the attitude at the start of quasi-static period as the reference for the measurement model, the magnetic field measurement can be transformed to the navigation frame using: is considered as a measurement during quasi-static field periods. Indeed in the novel approach the magnetic field information extracted from a geomagnetic field model is not considered as a measurement of the truth but rather the field is considered as a reference over the QSF period. As the inaccuracies in the sensor will bias the estimated attitude, the transformations of proceeding magnetic field measurements from body to navigation frame using the updated would be different from hence introducing the measurement error. This gives the relationship for the first measurement error model: (35), the measurement error model becomes: Ideally, as the magnetic field during QSF period is locally static (magnetic field vector not changing its magnitude or direction), Equation (36) should be equal to zero. But due to errors in rate gyroscope measurements, which are used for estimating the rotation matrix, the following perturbed model is obtained: is the measurement noise of the magnetometers. Simplifying (37) to get a relationship between measurements and states, one obtains: Because the last term in Equation (38) is of the second order in errors, neglecting it results in: where is the skew symmetric matrix of vector . QSF Measurement Error Model Using the Rate of Change of the Local Magnetic Field During the quasi-static field periods, the rate of change of the reference magnetic field is zero. Using this information as a measurement, one gets: Taking the derivative of Equation (41) to get the relationship between the rate of change of a vector in two different frames [28], one gets: where is the angular rate vector required for rotating the magnetic field measurements between two epochs in the sensor frame. Because the QSF periods are identified as those where the field vector in the navigation frame is not changing its magnitude and orientation, the left hand side of Equation (42) During user motion, the magnetic field components in the body frame will encounter changes, which can be modeled by Equation (43). But due to errors in gyroscopes angular rates, the predicted changes in magnetic field will be different from the measured ones given by: where and is the time period between two consecutive epochs. Expanding Equation (44) and substituting from Equation (43) results in: (45) The first two terms give the rate of change of the reference magnetic field, which, during QSF periods, is zero. is the error in caused by the gyroscope biases. 
Thus Equation (45) reduces to: (46) which can be rewritten as: giving the following measurement model: (48) Full QSF Measurement Error Model Combining Equations (40) and (48), the complete measurement error model using QSF is: , which can be utilized for constraining the error growth in attitude angles and estimating the rate gyroscope errors. Statistical Analysis of the QSF Detector In order to quantify the performance of the proposed quasi-static magnetic field detector, statistical analysis was conducted. From Equation (32), it can be observed that there are a number of tuning parameters that need to be evaluated for effectively using the QSF detector. These are the threshold, noise variance and the number of samples (window size) required for the detection test statistics. Measurement Noise Variance This factor corresponds to the variance of the total field gradient when the field itself is not changing. This gives a measure of the gradient noise that is encountered during quasi-static field periods. More than seven hours of data in five different environments that are common for pedestrian navigation have been collected and analyzed for evaluating this parameter [16,27]. The magnetic field survey was conducted using a tri-axis Bartington high resolution and high sensitivity fluxgate magnetometer. The magnetically derived heading was compared with the true heading estimated by post-processing synchronous measurements collected with the tactical grade inertial system (INS): the SPAN-CPT HG1700 from NovAtel. As shown in Figure 3, all devices were rigidly mounted on a plastic cart. The complete hardware setup was calibrated by performing 3D rotational maneuvers in a perturbation free environment and applying the complete calibration algorithm detailed in [26] to the recorded data. Figure 4 depicts the derived field gradient noise distribution, which is used for estimating the noise variance at 1σ. This comes out to be 0.057 μT 2 . Selection of Threshold and Window Size Using Receiver Operating Characteristics The Receiver Operating Characteristics (ROC) curve allows one to select the test statistics acceptance threshold based on the required probability of detection P d and the acceptable probability of false alarm P f . Figure 5 shows the ROC for the QSF detector for different sample window sizes. The sensor sampling rate is 0.04 s, which gives a minimum window size of 0.12 s and a maximum of 0.32 s in this case. It can be observed that the ROC tends to flatten out after P d = 0.8. Thus selecting a P d any larger than this value will cause more false alarms. Hence a P d of approximately 0.8 is selected for this detector. The effect of the window size on the detector's performance is negligible at the selected P d . Therefore a window size of three samples is selected to reduce the processing burden. Table 1 summarizes the parameters selected for the QSF detector. Figure 6 summarizes the QSF detection periods and their respective durations for different pedestrian navigation environments surveyed. Most of the detection periods have a duration of 120 ms to 300 ms. Figure 7 depicts the shortest and longest gaps between two consecutive QSF periods and their percentages of occurrence respectively. The minimum gap, i.e., 240 ms, occurs more frequently as compared with the maximum gap of 480 ms. Therefore it can be concluded that the QSF periods are encountered frequently and hence may allow for estimation of angular rate errors. 
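The core of the second QSF measurement model can be sketched as follows: during a quasi-static period the navigation-frame field is constant, so the body-frame field should change only through rotation, dB_b/dt = −ω × B_b, and the mismatch between the measured change and the change predicted from the gyroscope rates exposes the gyroscope bias. Frame and sign conventions, and all numerical values, are assumptions for illustration.

```python
# A minimal sketch of the QSF rate-of-change measurement residual used to observe gyro biases.
import numpy as np

def qsf_residual(b_prev, b_curr, omega_meas, dt):
    """b_prev, b_curr: body-frame field at epochs k-1 and k [uT]; omega_meas: gyro rates [rad/s]."""
    delta_meas = b_curr - b_prev
    delta_pred = -np.cross(omega_meas, b_prev) * dt   # predicted change during a QSF period
    return delta_meas - delta_pred                    # EKF innovation, ~ (bias x B) * dt

# Example: a true yaw rate of 0.2 rad/s and a gyro bias of 0.05 rad/s on the z axis.
b0 = np.array([20.0, 5.0, 42.0])
true_rate = np.array([0.0, 0.0, 0.2])
dt = 0.04
b1 = b0 - np.cross(true_rate, b0) * dt                # body-frame field rotated by the true rate
biased_rate = true_rate + np.array([0.0, 0.0, 0.05])
print(qsf_residual(b0, b1, biased_rate, dt))          # nonzero residual driven by the bias
```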
It is worth mentioning here that unlike some pedestrian navigation applications where Zero Velocity Updates (ZUPT) occur frequently during a pedestrian's walk (e.g., shoe-mounted sensors), when the sensor block is in the hand or in a pocket/purse, these may not be encountered at all. In such scenarios, QSF periods can still be used effectively for providing regular measurements for sensor error estimation. Experimental Assessment of the Proposed Attitude Computer In order to assess the performance of the proposed QSF based orientation/attitude estimation algorithm, test data was collected in a real world environment, which is most common for pedestrian navigation applications: downtown. The test setup used for analyzing the impact of the proposed algorithm on attitude estimation comprised a Multiple Sensor Platform (MSP) and an optical wheel encoder developed by the author [12]. The MSP comprises a tri-axis gyroscope made of a dual-axis ST Microelectronics LPR530AL and a single-axis LY530ALH from the same manufacturer. It includes an HMC5843 tri-axis Anisotropic Magneto-Resistive (AMR) sensor for magnetic field measurements. Finally, a tri-axis Analog Devices ADXL335 accelerometer completes the inertial measurement unit. The wheel encoder is used here for measuring the pedestrian's walking speed so as to bring the outcome of the proposed algorithm from the attitude domain to the position domain and provide better insight into the performance of the system. The wheel encoder is capable of computing the pedestrian's walking speed with an accuracy of ±4 × 10⁻³ m/s. This walking speed is later resolved into North and East components using the estimated attitude to compute the position, the latter being obtained by integrating the velocity components. As this article focuses only on attitude estimation, the wheel encoder provides accurate speed measurements, which are necessary for de-correlating the velocity error budget from the attitude one, allowing the assessment of attitude accuracies only. In an actual portable device such as a smart-phone, the walking speed would be measured by accelerometers. Although smart-phones of today are equipped with accelerometers, performing gait analysis with a handheld device is a challenging task and constitutes a research topic in itself. Thus, with the use of the wheel encoder, the experimental assessments of the proposed attitude estimator in the position domain as described herein can be considered free of errors induced by speed sensors (accelerometers), providing a better insight into attitude accuracy. However, the MSP developed for this research hosts a tri-axis accelerometer, which can be used in the future for investigating different methods to estimate stride length and speed. This work targets the implementation of a complete pedestrian navigation system. The MSP was rigidly mounted on a plastic plate, which can be easily carried in a hand. The wheel encoder is mounted on a pole that can be held by a pedestrian and pushed along the ground for measuring the walking speed. Figure 8(a) shows the handheld arrangement of the sensor module used for the test data collection. Here the pedestrian and body frames are also identified to clarify in which frame (body frame) the attitude is estimated. Figure 8(b) shows the overall test data collection setup including the wheel encoder.
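The way the wheel-encoder speed and the estimated heading are combined into the trajectories analysed below can be sketched as simple two-dimensional dead reckoning. This is a minimal, flat-Earth sketch with illustrative values, not the paper's processing chain.

```python
# A minimal 2-D dead-reckoning sketch: resolve speed into North/East with the heading
# and integrate to a local-level position.
import numpy as np

def dead_reckon(speeds, headings, dt):
    """speeds: walking speeds [m/s]; headings: headings [rad, 0 = North]; dt: sample interval [s]."""
    north = np.cumsum(speeds * np.cos(headings) * dt)
    east = np.cumsum(speeds * np.sin(headings) * dt)
    return north, east

# Example: 60 s of walking at 1.4 m/s with a 90 deg turn halfway through.
dt = 0.04
n = int(60 / dt)
speeds = np.full(n, 1.4)
headings = np.concatenate([np.zeros(n // 2), np.full(n - n // 2, np.pi / 2)])
north, east = dead_reckon(speeds, headings, dt)
print(north[-1], east[-1])    # ~42 m North, then ~42 m East
```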
It is assumed that the body frame's x–z plane is aligned with that of the pedestrian frame in order to account for the ambiguity between the sensor frame's orientation and the user's walking direction. This is achieved with the help of the handheld plastic plate. Thus the sensor's frame is effectively aligned with the body frame. Assessment Criterion In order to assess the impact of the proposed algorithm on attitude estimation for pedestrian navigation, the solution repeatability criterion is chosen. Multiple paths of the same trajectory were followed in different environments, keeping the same starting and ending point, to assess the performance. The paths were followed so as to keep the separation between them within one metre if possible. This is achieved by following prominent patterns on the ground (tile boundaries, pavement markings/intersections, etc.). Test Environment In order to assess the impact of the proposed algorithms on attitude estimation for pedestrian applications, an urban canyon was selected. Urban canyons can be considered one of the regions where pedestrian navigation applications have significant commercial value. Also, before moving indoors, one often spends some time in an urban canyon. Hence detailed analysis of the proposed algorithm in this environment is very important. Figure 9 gives the bird's eye view of the test region selected in downtown Calgary for the assessment. The block selected is newly constructed with a walkway filled with ferrous infrastructure all around, including phone booths, newspaper dispensers, street light poles and manholes. The walking trajectory around this block was approximately 370 m and was traversed thrice for repeatability testing. Figure 10(b) shows one of the paths traversed in the selected region with high-rise buildings and metallic infrastructure all around. Indeed this environment includes numerous magnetic field perturbation sources and hence can be considered a good test area for assessing the attitude estimation algorithms presented in this article. Figure 11 shows the total field observed in the urban canyon environment along with the QSF detections. It can be observed that the overall signatures of the total field as well as the detection of QSF periods are temporally very similar for different paths. This is because the paths traversed were kept within 1 m of one another for assessing the repeatability of the results. Figure 11. Total field and QSF detections for similar paths in urban canyon. Figure 12 shows the trajectories obtained using raw heading versus QSF measurements for estimating gyroscope errors. It can be observed that due to severe magnetic field perturbations, the trajectory obtained using raw heading estimates has a maximum error of 43 m as compared to 5 m in the case of QSF measurements. The observability of gyroscope biases using QSF measurements is presented graphically in Figures 13 and 14. Using Equation (43), one can obtain the estimates of the rate of change of magnetic field components using gyroscopes during a QSF period. Due to the presence of biases in the gyroscope measurements, these estimates are different from the actual rate of change of the magnetic field, as shown in Figure 13. Upon utilizing Equation (44) as measurement errors for the EKF, the gyroscope biases are successfully estimated and compensated from the gyroscope measurements, bringing the estimated rate of change of magnetic field components in harmony with the actual ones, as shown in Figure 14.
Figure 15 shows the three trajectories obtained using QSF measurements in the urban canyon. These trajectories are obtained by initializing the starting position and orientation for each loop. The first observation is the consistency of the ending locations, which are within 2 m of one another, showing the effectiveness of QSF in estimating the rate gyroscope errors. The other observation is the random skewing of the three trajectories with respect to one another. This is because the QSF measurements can completely observe the rate gyroscope errors, but are not capable of observing the actual attitude errors. The attitude error growth is constrained using QSF measurements, as is evident from Equation (49). As the rate gyroscope errors vary randomly, they cause random errors in the attitude at the beginning of each path while the gyroscope errors are being estimated, which results in a random orientation error. Once the rate gyroscope errors are completely estimated, the attitude error growth is constrained. Thus the accuracy of the estimated trajectory is improved through the use of the QSF detections. Figure 16 shows the trajectory obtained by running the three loops in a continuous fashion. It is quite evident that Loop 2 and Loop 3 are very similar in this case. This is because the rate gyroscope errors have been properly estimated, resulting in trajectories with a steady skew, which means that the orientation errors are now effectively constrained. Figure 16. Continuous trajectory in urban canyon using QSF. Performance of the Attitude Estimator in the Urban Environment The maximum trajectory error obtained using the raw magnetic heading measurement model for the continuous trajectory, whose length was longer than 1 km (for the three loops), is approximately 87 m, whereas that for the QSF measurement model is approximately 16 m. Thus the trajectory errors are reduced by 80% by utilizing QSF measurements for constraining the attitude error growth and estimating the gyroscopes' errors. Conclusions and Future Work This article investigated the use of handheld devices (smart-phones) equipped with low cost consumer grade sensors for pedestrian navigation. As the attitude/orientation errors play a major role in the overall navigation error budget, the focus was on improving the attitude estimates in environments where GPS is denied. For this purpose, the use of the Earth's magnetic field as a measurement source for estimating the errors associated with low cost inertial sensors was investigated. A novel method for utilizing the Quasi-Static magnetic Field (QSF), even in the presence of perturbations, to mitigate gyroscope errors has been developed and the corresponding equations have been presented. The novel estimation technique proved to deliver a high level of performance, reducing the trajectory errors by 80% for a distance of more than 1 km. This method detected the QSF periods during the pedestrian's motion and related the changes in the magnetic field components during these periods to the angular rates of the sensor block, thus providing measurements for directly assessing the errors associated with the rate gyroscopes. The development of an Extended Kalman Filter (EKF) modeling the attitude and gyroscope errors, and relating these to the magnetic field measurements, has also been presented. The corresponding equations gave insight into the interdependence of the different parameters, which proved beneficial in identifying the limitations of the proposed model.
Selection of an urban canyon (a magnetically disturbed outdoor field) for the experimental assessments provided data sets for a realistic and detailed analysis of the performance of the proposed algorithm. Analyzing the results in the position domain with the sensor platform carried in a hand gave detailed insight into the impact of the attitude estimator on the position error budget. A high accuracy wheel encoder made it possible to isolate the attitude errors from the position ones. Even though the use of QSF measurements successfully estimated the gyroscope errors and reduced the position-domain errors by 80% compared with the classical integration of magnetometers and gyroscopes in an attitude estimation filter, it resulted in constant orientation errors. This showed that this scheme is not sufficient for observing the absolute attitude errors. The position errors reached 16 m for a traversed trajectory of more than 1 km. The use of accelerometers and pressure sensors for estimating the pedestrian's position as well as speed is necessary to completely assess the impact of this research in real-world navigation scenarios. Both of these sensors are already incorporated in the MSP developed for this research and will form the basis for the above-suggested research. Research into the resolution of the ambiguity between the sensor and body frames, which was constrained to be zero for the experiment herein, is also needed. This can be achieved by using the accelerometers, but requires detailed modeling of the pedestrian's walk related to arm swing or hip joint motion. The algorithm developed herein is self-contained and assumed a fully denied GNSS environment. However, GNSS is partly available in urban canyons and in numerous indoor environments. Hence research into the integration of the two approaches to maximize availability and accuracy is in order.
8,698.2
2011-11-30T00:00:00.000
[ "Engineering", "Environmental Science" ]
The Prevalence of Concomitant Abdominal Aortic Aneurysm and Cancer Cancers and abdominal aortic aneurysms (AAA) cause substantial morbidity and mortality and commonly develop in old age. It has been previously reported that AAA patients have a high prevalence of cancers, which has raised the question of whether this is a simple collision, an association or causation. Clinical trials or observational studies with sufficient power to prove an association between them have been limited because of the relatively low frequency and slow disease process of both diseases. We aimed to determine whether there is a significant association between AAA and cancers using nationwide data. Patients aged > 50 years diagnosed with AAA between 2002 and 2015, patients with heart failure (HF) and controls without an AAA or HF, matched by age, sex and cardiovascular risk factors, were enrolled from the national sample cohort of the National Health Insurance claims database of South Korea. The primary outcome was the prevalence rate of cancers in the participants with and without an AAA. The secondary outcomes were cancer-related survival and cancer risk. Overall, 823 AAA patients (mean (standard deviation) age, 71.8 (9.4) years; 552 (67.1%) men) and matching sets of 823 HF patients and 823 controls were identified. The prevalence of cancers was 45.2% (372/823), 41.7% (343/823) and 35.7% (294/823) in the AAA, HF and control groups, respectively; it was significantly higher in the AAA group than in the control group (p < 0.001). The risk of developing cancer was higher in the AAA patients than in the controls (adjusted odds ratio (OR), 1.52 (95% confidence interval [CI], 1.24–1.86), p < 0.001) and in the HF patients (adjusted OR, 1.37 (1.24–1.86), p = 0.006). The cancer-related death rate was 2.64 times higher (95% CI, 2.22–3.13; p < 0.001) for the AAA patients and 1.63 times higher (95% CI, 1.37–1.92; p < 0.001) for the HF patients than for the controls. The most common causes of death in the AAA patients were cancer and cardiovascular disease. There was a significantly increased risk of cancer in the AAA group compared with the HF and control groups. Therefore, appropriate screening algorithms might be necessary for earlier detection of both diseases to improve long-term survival. Introduction An abdominal aortic aneurysm (AAA) is characterized by a chronic inflammatory component together with a degenerative component. It is a multifactorial disease related to both genetic and environmental risk factors [1]. Smoking is considered one of the most important risk factors for AAA [2]. Age is another important risk factor. While the prevalence of AAA is negligible before the age of 65 years, it increases steadily with age thereafter, estimated to range between 1% and 2% in men and around 0.5% in women aged ≥ 65 years [3]. AAAs cause 1.3% of all deaths among men aged 65-85 years in developed countries [3]. In an aging population, multimorbidity is frequent. Various diseases show an age-related prevalence, and some morbidities often coincide due to shared risk factors [4]. Cancer is not uncommon in patients with an AAA; as a result, concomitant cancer and AAA has been a long-standing subject of study in deciding the treatment priority and method [5,6]. It was reported that the annual incidence of newly diagnosed AAA in patients with cancer was 100 times higher than that in a similar age group of men in the general population, at 0.4 to 0.67% [7].
The true incidence is difficult to determine accurately, but it has been reported to be between 0.49% and 38.1% [6,8]. Commonly reported types of associated cancers include lung, colorectal and prostate cancers [6,7,9]. Concurrence may be attributable to similar patient demographics such as advanced age and male sex and common risk factors such as smoking [10]. There may also be a common pathway, as suggested in other diseases [11][12][13]. Traditionally, AAA is regarded as a consequence of atherosclerosis owing to its association with the atherosclerotic change of the aortic wall [14]. However, recent studies have demonstrated that medial and adventitial injuries from proteolysis, oxidative stress and adaptive immune responses, rather than atherosclerotic change, are involved [15][16][17]. Oxidative stress and its direct consequences, including lipid peroxidation, play a role in the malignancy and were suggested as a linkage between cancers and AAAs [18]. There have been sporadic reports on the coexistence of the two diseases so far, but there have been no large-scale studies involving a sufficient number of subjects to date on the exact prevalence of the two diseases. In this study, we aimed to investigate the prevalence of concomitant AAA and cancer in a national sample cohort from the South Korean population. In addition, the mortality rate related to AAA and the cause of death were analyzed. Data Sources This study was undertaken using the national sample cohort (NSC) database accumulated from January 2002 to December 2017 which was acquired from the National Health Insurance Service (NHIS). The NHIS-NSC database is a population-based cohort established by the NHIS. This cohort includes approximately 2.2% of the entire eligible population randomly sampled from the 2002 National Health Insurance Recipient Qualifications Database and followed up until 2015. Each patient's demographic information, International Classification of Disease, Tenth Revision (ICD-10) diagnosis codes, procedure codes and survival in inpatient and outpatient services were collected and analyzed. Approval for data collection and publication was granted by the institutional review board (IRB No. 2020-0643) of our hospital, which waived the requirement for written informed consent because of the retrospective nature of the study and the lack of information on the participant's identification. All the methods were performed in accordance with the relevant guidelines and regulations. Study Design and Cohort Definition First, the patients diagnosed with AAA (ICD-10 codes I71.3-4 and I71.8-71.9) between January 2002 and December 2015 were screened. The following patients were then excluded: those with ICD-10 code I71.9, diagnosed with AAA before June 2002 or after June 2015 and/or aged younger than 50 years. Finally, patients with the following conditions were excluded: (1) AAA related to Behcet's disease (ICD-10 code M35.2) or syphilis (ICD-10 code A53.9), (2) history of typhoid fever (ICD-10 code A01) and/or (3) age of AAA diagnosis younger than 50 years. The two control groups without AAA were randomly sampled from individuals who had not been diagnosed with AAA during the same period, one with heart failure (HF (ICD-10 codes I50.0-9)) and the other without HF after matching for age, sex and other cardiovascular risk factors such as hypertension (HTN), diabetes (DM) and dyslipidemia (1:1:1 matching). 
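As an illustration of the 1:1:1 matching step described above, the following is a minimal greedy-matching sketch in Python. It is not the procedure actually run on the NHIS-NSC data; the DataFrame column names, the sampling strategy and the 5-year age tolerance (the paper states only that the age difference was kept below 6 years) are assumptions for this sketch:

```python
import pandas as pd

def match_one_to_one(cases, pool, max_age_diff=5):
    """Greedy 1:1 matching of controls to cases on sex, hypertension, diabetes
    and dyslipidemia, with age within max_age_diff years. Column names
    (age, sex, htn, dm, dyslipidemia) are illustrative, not NHIS-NSC fields."""
    available = pool.copy()
    matched = []
    for _, case in cases.iterrows():
        candidates = available[
            (available["sex"] == case["sex"])
            & (available["htn"] == case["htn"])
            & (available["dm"] == case["dm"])
            & (available["dyslipidemia"] == case["dyslipidemia"])
            & ((available["age"] - case["age"]).abs() <= max_age_diff)
        ]
        if candidates.empty:
            continue                                 # no eligible control left
        pick = candidates.sample(1, random_state=0)  # random eligible control
        matched.append(pick.index[0])
        available = available.drop(pick.index)       # sample without replacement
    return pool.loc[matched]

# Hypothetical usage, one matched control set per comparison group:
# hf_matched   = match_one_to_one(aaa_patients, hf_pool)
# ctrl_matched = match_one_to_one(aaa_patients, non_hf_pool)
```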
The HF group was used as a comparison group because the association of HF with cancer is relatively well-known. Sex was matched exactly, the age difference was less than 6 years and HTN, DM and dyslipidemia were matched where applicable. The index date of each group was defined as the date of the first diagnosis of AAA in the AAA group or of HF in the HF group. In the control group, the index date was 1 June of the year corresponding to that of the matched AAA patients. Study Outcomes The primary outcome was the development of any cancer. Patients with cancer were defined as those who had one or more hospitalizations, visited outpatient clinics at least twice and/or had the Special Support for Serious Illness Act code with a principal or sub-diagnosis code of cancer (ICD-10 codes C00-97). The Special Support for Serious Illness Act code for cancer is given only when the cancer is histologically proven. The time of cancer diagnosis was classified as follows, based on the index date: at least 6 months before the index date, within 6 months before or after the index date, and more than 6 months after the index date. Cancer that occurred at least 6 months after the index date was defined as a subsequent cancer. Initially, cancer risk was analyzed by including all cancers regardless of the period of occurrence. The hazard ratio was then calculated by analyzing patients with newly developed cancer among those who had not developed cancer within 6 months of the index date. The secondary outcomes were the cause of death and the mortality rate per 100 person-years. Statistical Analysis Demographic data were compared using a generalized estimating equation method with appropriate distributions for paired data. The association with cancer was analyzed by conditional logistic regression and the results are represented as odds ratios (ORs) and 95% confidence intervals (CIs). The incidence of cancer or death after enrollment was evaluated with the incidence rate per 100 person-years and Kaplan-Meier curve analysis. A Cox proportional hazards regression model with a robust variance estimate accounting for the paired data was used to determine the adjusted hazard ratios (HRs) and corresponding 95% CIs for the association between AAAs and cancers. The cumulative survival probability was presented graphically with a Kaplan-Meier curve. All p-values < 0.05 were considered significant. The statistical analysis was performed using SAS Enterprise Guide version 7.1 (SAS Institute Inc., Cary, NC, USA) and R version 3.0.3 (R Development Core Team, 2006). Verification of ICD-10 Codes First, to accurately identify patients with an AAA, the ICD-10 codes corresponding to AAA were verified using hospital data. The image sets and medical records of patients from a single university hospital database from 1 January 2018 to 31 December 2018, searched by AAA codes and the codes of diseases similar to AAA, were reviewed. The AAA codes were I71.3-4 and I71.8. Codes of diseases similar to AAA included thoracoabdominal aortic aneurysm (I71.6), dissection of aorta (I71.0), thoracoabdominal aortic aneurysm, ruptured (I71.5), aortic aneurysm of unspecified site, without rupture (I71.9), aneurysm and dissection of iliac artery (I72.3), aneurysm and dissection of the artery of lower extremity (I72.4) and aneurysm of the aorta in diseases classified elsewhere (I79.0). The computed tomography images and medical records were reviewed by one vascular surgeon.
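Before returning to the code-verification results, the survival portion of the Statistical Analysis paragraph above (a Cox proportional hazards model with a robust variance estimate respecting the matched triplets, plus Kaplan-Meier curves) could look roughly as follows in Python with the lifelines package. The original analysis was run in SAS and R, so this is only an illustrative equivalent, and all column names (time, event, the group dummies, match_id) are assumptions of this sketch:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Assumed layout: one row per subject with illustrative columns
#   time (years of follow-up), event (1 = outcome observed), group label,
#   group_aaa / group_hf dummy indicators, covariates, and match_id
#   identifying the matched AAA/HF/control triplet.
df = pd.read_csv("cohort.csv")   # hypothetical file, not from the paper

# Cox proportional hazards model; clustering the sandwich (robust) variance
# on the matched-set identifier mirrors the paired-data adjustment.
cph = CoxPHFitter()
cph.fit(
    df[["time", "event", "group_aaa", "group_hf",
        "age", "sex", "dm", "htn", "dyslipidemia", "match_id"]],
    duration_col="time",
    event_col="event",
    cluster_col="match_id",
)
cph.print_summary()   # adjusted HRs with 95% CIs

# Kaplan-Meier curves per group for the cumulative survival probability.
kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["time"], sub["event"], label=name)
    kmf.plot_survival_function()
```

Clustering on the matched-set identifier is what makes the robust variance estimate respect the paired design described in the text.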
The vascular surgeon independently investigated whether the registered and similar codes matched the diagnosis of AAA and calculated the specificity and sensitivity. Upon reviewing the hospital data of 2352 patients, it was decided that the I71.9 code would not be included in this study since the misclassification rate was high: 35 out of 52 patients (67.3%) were not AAA patients. The sensitivity and specificity of I71.3-4 and I71.8 were 99.7% (642/644) and 99.6% (1650/1656), respectively. Characteristics of the Population Among the 1,108,369 subjects of the NHIS-NSC database, we identified 899 patients with AAA, 76 of whom were excluded from the analysis according to the aforementioned criteria (Figure 1). The AAA group's mean age was 71.8 years (standard deviation (SD), 9.4 years), and the group comprised 32.9% (271) women and 67.1% (552) men. Among the remaining 1,106,687 participants without AAA, through 1:1:1 matching, we included 823 patients in the HF cohort and 823 patients in the control cohort. The baseline characteristics of the three groups are summarized in Table 1. The smoking and alcohol history data of 37.9% of the AAA/HF/control cohort were missing. Prevalence and Incidence of Cancers A total of 372 (45.2%) AAA, 343 (41.7%) HF, and 294 (35.7%) control patients were diagnosed with malignant cancer (p < 0.001). The period when the patient was diagnosed with cancer in relation to the index date and the type of cancer are described in Table 2. In the AAA group, the time of cancer diagnosis was as follows: more than 6 months prior to the index date in 177 (47.6%) patients, within 6 months before or after the index date in 103 (27.7%) patients and more than 6 months after in 92 (24.7%) patients. In terms of breast and genitourinary cancer, cancer types were masked and indistinguishable in the main diagnostic codes of the NHIS-NSC database due to its privacy policy. Among the identifiable types of cancer, the most common type in the AAA group was colorectal cancer (31.2%, 116/372), followed by lung cancer (15.3%, 57/372). The most common cancer in the HF group and the control group was lung cancer, accounting for 19.5% (67/343) and 11.6% (34/294) of the cases, respectively. Comparison of the Risk of Cancer between the Groups Cancer risk was analyzed by the cancer prevalence rate in each group. The risk of cancer was 1.50 times higher (95% CI, 1.23-1.83) in the AAA group and 1.29 times higher (95% CI, 1.06-1.58) in the HF group than in the control group (p < 0.001) (Figure 2). The risk of cancer was significantly higher in the AAA group regardless of the adjustment.
When adjusted by HTN, DM, dyslipidemia and smoking, the risk of cancer was 1.47 times higher (95% CI, 1.11-1.96) in the AAA group and 1.45 times higher (95% CI, 1.02-2.07) in the HF group than in the control group (p = 0.007 and p = 0.038, respectively). When comparing the AAA and HF groups, the difference between the two groups was not statistically significant (crude OR, 1.16 (95% CI, 0.95-1.41), p = 0.144; adjusted OR, 1.02 (95% CI, 0.73-1.44), p = 0.895). Cancer Risk by Age Groups The age-specific cancer risk was analyzed (Table 3). In the patients with AAA who were younger than 65 years old, the cancer risk was 2.01 times that in the control group (95% CI, 1.27-3.18, p = 0.003). Similarly, the HF patients younger than 65 years old showed a 2.20 times higher risk than the control group (95% CI, 1.40-3.47, p = 0.001). In the patients 65 years of age or older, the patients with AAA had a significantly higher risk than the control group, while the HF and control groups had a similar risk (p = 0.004 and p = 0.300, respectively). Risk of Subsequent Cancer The risk of cancer in patients without a history of cancer or without cancer until 6 months after the index date is described in Table 4. The cancer development rate per 100 person-years was 6.87 in the AAA group, 4.88 in the HF group and 3.89 in the control group. The unadjusted HRs were 1.72 (95% CI, 1.35-2.21) for the AAA group (p < 0.001) and 1.28 (95% CI, 1.02-1.60) for the HF group (p = 0.035) in comparison with the control group. The HRs adjusted by age, sex, DM, hypertension and dyslipidemia were 0.71 (95% CI, 0.56-0.88) for the AAA group (p = 0.002) and 0.56 (95% CI, 0.43-0.72) for the HF group (p < 0.001) in comparison with the control group. In the patients without cancer until 6 months after the index date, the cancer risk was not significantly different between the groups (p = 0.199) (Figure 3). a Adjusted by age, sex, DM, HTN and dyslipidemia; b adjusted by age, sex, DM, HTN, dyslipidemia and smoking; c adjusted by age, sex, DM, HTN, dyslipidemia, smoking and alcohol. Abbreviations used: AAA, abdominal aortic aneurysm; HF, heart failure; HR, hazard ratio; CI, confidence interval. Comparison of the Mortality Rate The mortality rate of each group is summarized in Table 5.
The mortality rate per 100 person-years was 11.11 in the AAA group, 6.80 in the HF group and 4.40 in the controls. The patients with AAA had a 2.82 times higher mortality risk than the controls (p < 0.001). When compared with the HF group, the AAA patients also had a higher mortality rate (p < 0.001). In the AAA group, the 5-year survival rate was 57.7% (95% CI, 53.7-61.6). The 5-year survival rates of the HF and control groups were 75.17% (95% CI, 72.2-78.1) and 80.87% (95% CI, 77.7-84.0), respectively. Causes of Death The causes of death are summarized in Table 6. The most common causes of death in the AAA group were cancer and ruptured AAA, followed by cardiovascular and cerebrovascular diseases. In the HF and control groups, the most common causes of death were cancer and cardiovascular and cerebrovascular diseases. The most common type of cancer related to death was lung cancer. Prostate cancer was not identifiable with the diagnosis code in the database, but it was listed in the cause-of-death database; it was the second most common cancer as a cause of death in the AAA and HF groups and the fourth in the controls. Cardiovascular death including AAA rupture-related death was detected in 64 (7.8%) patients in the AAA group, which was the same frequency as that of cancer-related death. Discussion The association between AAAs and cancers has been suggested in previous studies, but it was difficult to clarify because it was difficult to obtain a number of patients large enough to provide sufficient power. This study aimed at evaluating the possible association between AAAs and cancers. In this study, we analyzed 823 patients in each of the AAA, HF and control groups. To the best of our knowledge, this study is one of the first publications with a large validated sample cohort of patients with AAA demonstrating the risk of cancer through a comparison with patients with HF, a condition well-known for carrying a high cancer risk, as well as with a control group without AAA. The NHIS-NSC database is a representative database constructed by systematic stratified random sampling with proportional allocation within each stratum according to the individual's total annual medical expenses [19]. Notably, the ICD-10 code for cancer is validated for accuracy by comparison with the Korea National Cancer Incidence Database built by the Korea Central Cancer Registry, which only registers histologically proven malignancies [20]. Moreover, to maximize the accuracy of the selection of patients with cancer, we defined these patients as those who visited outpatient clinics at least twice a year with the same code. The reason for selecting HF as one of the control groups was that there have been many previous reports suggesting an association between cancer and HF [12,13,21,22]. In our study, the risk of cancer in the patients with AAA was the highest among the three groups (45.2% for the AAA group vs. 41.7% for the HF group vs. 35.7% for the controls, Table 2). The AAA group in particular showed a significantly higher cancer risk, 1.52 times that in the control group. Compared with the HF group, the risk was similar (crude OR, 1.11, p = 0.340, Figure 2). Therefore, the findings from this study suggest that the coexistence of the two diseases is not merely occasional or a consequence of patients' old age, but reflects a strong association. Cardiovascular diseases are a significant cause of death for many cancer survivors and rival cancer recurrence [23].
Conversely, cardiovascular diseases are associated with a higher incidence of cancer [24,25]. Previous registry-based cohorts of patients with myocardial infarction showed a modest 5-8% increased risk of cancer [26][27][28]. In addition, the incidence of cancer was increased among patients with preexisting HF, with the estimated incidence in the range of 18.9-33.7 per 1000 person-years [24,[29][30][31]. Similarly to other cardiovascular diseases, AAA often coexists with cancer. In accordance with previous reports, our data revealed a high prevalence of cancer in patients with AAA, accounting for 45.2% of these patients (372/823). The suggested mechanisms of coexistence of AAA and malignancies include common risk factors, proinflammatory conditions and oxidative stress. AAA shares a number of modifiable risk factors with cancer, such as smoking and increased age [4,8,32]. It has also been suggested that it might be due to the presence of chronic inflammatory cells and cytokines and the angiogenesis status in patients with AAA [32]. Another possible mechanism is the disturbed interaction between matrix proteins and epithelial cells, which facilitates angiogenesis or carcinoma and the development of an aneurysm-prone phenotype [33,34]. Surveillance during the active follow-up of AAA or cancer may also result in a growing prevalence of coexistence [24,35]. Similar findings were observed in the experimental study, and systemic pathological processes, such as inflammation and oxidative stress, are among the main hypotheses, possibly superimposed on the background of genetic predisposition [36]. Circulating neurohormonal factors were also shown to affect tumor biology [21]. When stratified by age group in our study, the increased cancer risk was noticeable in patients under the age of 65 years in both the AAA and HF groups (Table 3). In the AAA group, an increase in cancer risk was observed regardless of age but was more pronounced in patients under 65 years of age. Interestingly, in the HF group, an increase in cancer risk was no longer observed in patients over 65 years of age. Regarding the temporal sequence of AAA and cancer, the cancer diagnosis appeared earlier than the AAA diagnosis (75.3% (280/372)). When comparing each group, the AAA group had a higher cancer risk before the index date whereas the HF group had an increased subsequent cancer risk (Table 2). However, since both diseases progress slowly and the order of discovery does not indicate the order of onset, it is difficult to conclude which disease precedes the other. In our study, the mortality rate was 2.64 times higher in the AAA group than in the control group (p < 0.001, Table 5). However, when adjusted by age, sex and underlying condition, the mortality risk was lower than in the control group (adjusted HR, 0.51 (95% CI, 0.41-0.65)). These findings suggested that controlling associated risk factors along with the management of AAA may contribute to an increased survival rate. In our study, the most common cause of death in the patients with AAA was cancer, followed by ruptured AAA. As indicated in our study, cancer was associated with a significantly high prevalence and was a major cause of death in the AAA group. A substantial mortality risk was also associated with ruptured AAA, although it was shown in previous reports to decline gradually over time [37,38] Therefore, it seems necessary to run efficient and cost-effective screening programs for these diseases in those with the highest risk. 
This study had several limitations. Firstly, other than the high rate of coexistence of AAA and cancer, the association between the two diseases including common risk factors and shared causation could not be revealed due to the retrospective observational nature of this study. Secondly, the national database may have included some misclassification of the diagnosis. To get the most accurate target patients possible, however, we pre-evaluated hospital data and included only accurate diagnostic codes to our analysis. In addition, we excluded the AAAs related to distinct mechanisms such as Behcet's disease from the study. Thirdly, this claim data offered limited clinical data, including smoking history, because such information was included only for the patients who had undergone a national health check-up. As a result, it was not possible to hypothesize some potential mechanisms of the association between AAAs and cancers. We attempted to analyze only patients with such information, but it seemed more likely that the data loss was too large to represent the population. Instead, we analyzed the risk ratio by adjusting these risk factors for people with such information, which also showed a significant difference. Fourthly, we tried to eliminate infective AAA, but diseases related to typhoid fever and syphilis were masked from the database via categorization into sensitive diagnoses with regard to the patients' privacy. As the number of patients with Behcet's disease was as small as 11, we assumed it to be a negligible number. Finally, there was no information whether the control group had no undiagnosed cancer or AAA, and there was a possibility that the higher rate of coexistence might have resulted from increased detection of other diseases during workup for one disease. The finding that the cancer detection rate was high within 6 months of the AAA diagnosis supports this possibility. However, we believe that this did not solely come from the imaging workup because the cancer detection rate 6 months after the index date in the control group was similar to that in the AAA group. In addition, subsequent cancer incidence per 100 person-years and cancer-related mortality was higher in the AAA group than in the control group. Despite these limitations, the current study had its strengths. First, it is the largest study to date that has investigated the association in patients with an AAA and cancer, considering that it has been relatively infrequently reported that both AAA and cancer coexist in one patient. It seems important to elucidate the prevalence of coexistence to suggest simultaneous surveillance for the improvement of long-term survival or elucidate a possible common pathogenesis or risk factors. Moreover, we matched the patients with AAA to those with HF, whose association with cancer has already been suggested, as well as with a control group, supporting our findings. We believe our study can stimulate further research to delineate a connection between AAAs and cancers, shed light on possible mechanisms and risk factors and eventually develop cost-effective and practical surveillance protocols. Conclusions There was a significantly increased risk and higher prevalence of cancer in the AAA group than in the control group in this 13-year cohort. Further research is needed to elucidate the possible shared pathologic process. In the meantime, effective screening programs for both diseases need to be developed. 
Institutional Review Board Statement: The study was conducted in accordance with the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of the Asan Medical Center (protocol code 2020-0643, date of approval: 28 April 2020). Informed Consent Statement: The requirement for patient consent was waived due to the retrospective nature of the study and the lack of information to identify the participants.
6,374.2
2021-08-27T00:00:00.000
[ "Medicine", "Biology" ]
A Comparison of Efficiency in Teacher Correction Strategies in Iranian EFL Learners' Speaking Improvement The present study was an attempt to investigate the effect of two types of corrective feedback (i.e., recast and metalinguistic) in order to find out which one is more effective for EFL learners' speaking improvement, and also to see whether gender could play a role in the relative impact of the two types of corrective feedback on learners' speaking ability. To this end, 65 EFL learners of intermediate level in one of the language institutes in Shiraz, Iran were selected and divided into three groups, including two experimental groups and one control group. The instruments used to collect the data included the IELTS test as the pre- and post-test and the Oxford Placement Test (OPT), administered to ensure homogeneity in the participants' English proficiency. The collected data were codified, entered into SPSS (Version 22) and analysed using descriptive statistics, the t-test and the Tukey test. The results indicated that although applying these two types of corrective feedback improved the EFL learners' speaking ability, no significant difference was observed between the impacts of recast and metalinguistic feedback on the learners' production. The test results also indicated that there was no significant difference regarding gender within the three groups. This homogeneity further shows that in this study the gender variable did not have any effect on the role of corrective feedback, and it can be concluded that the observed difference between the metalinguistic group, the recast group and the control group is just the result of the provided corrective feedback type, which acted as the intervening variable; the moderator variable of gender did not prove to have any effect on the outcome of this study. The findings can contribute to the areas of syllabus design and teaching methodology. Introduction Undoubtedly, learners of a second language commit some errors, and by correcting these errors in appropriate ways, the process of learning another language is completed. When learners produce incorrect utterances, it is important that teachers know how to correct them. Therefore, the way in which learners' errors are corrected is quite important in the process of teaching and learning English as a Second Language (ESL) or English as a Foreign Language (EFL). In fact, without correcting errors, the process of learning and teaching another language would remain incomplete and errors would be fossilized in the learners' minds.
An important issue that has attracted much attention recently is how learners' errors could be corrected and which techniques should be applied for correcting EFL and ESL learners' errors. Numerous studies have been conducted on this issue and, depending on learners' age, proficiency, nationality, preferences and other factors, the results of such pieces of research differ; however, there appears to be a common agreement among researchers that corrective feedback is a vital factor in every language learning context, and all such studies have tried to show the importance of corrective feedback. In some traditional approaches, pronunciation as an aspect of speaking ability was not regarded as an important factor (e.g., Purcell & Suter, 1980), but in recent methodologies, pronunciation has been considered an essential component of a language (Pennington, 1994). In addition, Ferris (1999) "suggested that attention be given to investigating which methods, techniques, or approaches to error correction lead to short-term or long-term improvement and whether students make better development in observing certain types of errors than others". Lyster (2007) classified teacher corrective feedback into two broad categories: reformulations and prompts. In reformulations, errors made by learners are rephrased in correct forms. In prompts, however, teachers directly or indirectly push and ask learners to correct themselves and produce the correct form. Ellis (2009) introduces these two types of feedback as input-providing and output-prompting strategies. Since the aim of learning a language is largely to enable learners to produce correct output, corrective feedback has a special position in the process of teaching and learning the English language, especially speaking and pronunciation. Several experts believe that speaking ability is the main yardstick for evaluating a learner's success in the process of learning other languages; for instance, Marks (2006) points out that pronunciation is a main and basic factor for using language in communicative competence. The claim that a learner is successful commonly refers to the fact that he/she can speak fluently and is accurate to an acceptable degree. As speaking is considered an output, it is completely clear that the issue of pronunciation is quite important in the acquisition of a foreign or second language and that the vocabulary of the target language must be pronounced correctly. "Once, in the era of behaviorism and audiolingualism, it was believed that errors should be eradicated at any level because language learning was considered as a kind of habit formation and by committing errors, one could form some bad habits in his/her behaviour" (Soleimani et al., 2014). Error production can be considered a successful step towards progress for learners and teachers if errors are immediately corrected by the teacher. Furthermore, error production can be a danger for speaking and pronunciation: if the error is not corrected promptly, it will be fossilized in learners' minds and turn into an incorrect habitual pronunciation for the learner. Here, the importance of the teacher's corrective feedback is better comprehended.
The type of teacher corrective feedback through which errors of pronunciation are corrected is important as well. "Corrective feedback must be taken into account, since frequent corrections may lower learners' self-confidence and decrease motivation or even result in disappointment, frustration and ultimately, reluctance and rejection" (Murphy, 1991). During a course, teachers do not have enough time to try a variety of teacher corrective feedback types; if they do so, they cannot achieve the aim of teaching and gain the appropriate feedback as a result of doing error correction. In the present study, the researcher investigates two types of corrective feedback for EFL learners' pronunciation. This research is an attempt to investigate the impact of two types of corrective feedback on learners' pronunciation, to find out whether there is any difference between the two types of corrective feedback (metalinguistic and recast), and to discover whether participants' gender affects the two types of corrective feedback. The importance of this study thus lies in comparing two types of corrective feedback (i.e., metalinguistic and recast) and investigating the effect of both types on speaking improvement. Accordingly, the more effective corrective feedback type will be recognized. Teachers could apply this in their classes in order to get better results and help learners produce words with correct pronunciation and sound speaking ability. The other important aspect of this study is its focus on correcting pronunciation. Correct pronunciation is an important factor that can help and motivate learners to communicate with native speakers and also comprehend them better. This study, by recognizing an effective corrective feedback type, could bring about a salient improvement for EFL learners.
Theoretical Background on Corrective Feedback To ensure the effectiveness of corrective feedback, some factors are quite important, such as timing, amount, mode and audience. Also, corrective feedback should be clear enough for learners to comprehend. This means the teacher should be careful during error correction and in the application of the right kind of corrective feedback. The learners should participate in the process of pronunciation correction and should not be distracted. Miremadi (2004) says that a person's pronunciation when they speak has a significant effect on their identity. When we speak about pronunciation, it does not mean that a learner should have a native-like accent; rather, it means the learner must learn to pronounce correctly and speak in a way that is comprehensible to the hearer. Lochtman (2002), in a related study, tried to investigate different corrective feedback types in terms of frequency and distribution considering learners' uptake. Her research focused on oral corrective feedback in an ESL context. According to her research, the most popular types of teacher corrective feedback were recasts and elicitations. In addition, the results showed that recasts and explicit correction were the least effective types of corrective feedback, while metalinguistic feedback and elicitations were the most effective types considering learners' uptake. Lyster (1997) also presented a study in which various types of corrective feedback used by teachers in response to learner errors were classified. The study was conducted in French immersion classrooms and found that recasts were used more frequently (55%) than other types of feedback such as explicit correction (7%), metalinguistic feedback (8%), clarification requests (11%) and elicitation and repetition (5%), which were used less frequently. Although recasts had the highest frequency of occurrence, they had the least amount of uptake (31%), while 80% of learners' uptake belonged to the other feedback types. It was also found that the rate of uptake elicited by recasts is lower: only in 40% of cases did the students correct their output following recasts, while other types of feedback, such as clarification requests, elicitation, repetition and metalinguistic clues, resulted in successful uptake elicitation over 70% of the times they were used. In support of this result, Ellis et al. (2001) indicated that the rate of uptake elicitation for recasts was as high as 71.6%; the rate for other types of corrective feedback was even higher, ranging from 80% to 100%.
Empirical Background on Corrective Feedback According to the above percentage distribution, recast (55%) is more popular among teachers and most of them are eager to use this corrective feedback type, yet recasts have the lowest rate of repair and uptake. More importantly, with recasts and explicit correction, peer- and self-repair are not applicable, although these two are effective techniques for error correction. In contrast, elicitation, metalinguistic clues, clarification requests and the repetition of the error can be used for peer- and self-repair, and teachers can elicit the correct form. These types also have higher rates of uptake. These four teacher correction feedback types are also called negotiation of form, which is useful for SLA in two ways: first, they place learners in a situation where they can use and apply their target language knowledge, and in this way the correct form becomes focalized in their memory (e.g., Hulstijn, 1990; Nobuyoshi & Ellis, 1993; Towell, Hawkins, & Bazergui, 1996). Second, by drawing learners' attention to form during communicative interaction, learners are enabled to modify the incorrect output and test new hypotheses about the target language (e.g., Pica, 1988; Swain, 1993, 1995). Theoretical arguments have been advanced for teacher correction types. Rahimi & Sobhani (2015) investigated the relationship between types and distribution of corrective feedback and their effects on learners' uptake in Iranian adult EFL classrooms. The study showed that the most frequent teacher correction type received by the learners at all proficiency levels is the recast, while elicitation and requests for clarification were more useful for learners' uptake. Lyster (2004), in a study, claimed that the effectiveness of metalinguistic feedback is greater than that of recasts. Lyster justifies this claim by arguing that metalinguistic feedback gives explicit information about the target structure and therefore, in the learners' view, seems more facilitative. However, this perspective is not confirmed in all studies which have worked on this subject. Kim & Mathes (2001), for example, reported that they never observed any statistically significant differences in the scores of the explicit and implicit groups. Surakka (2007) investigated the frequency and effectiveness of various corrective feedback types in Finnish EFL classrooms. She found that although recast is provided as the most frequent type of corrective feedback by teachers, recasts and explicit correction were not the most effective strategies regarding learners' uptake in this study; other types of corrective feedback, such as elicitation and metalinguistic feedback, were more successful strategies for learners' uptake.
In a different project carried out by Chen (2011), a comparison was made between oral recasts and elicitations; the first type was considered implicit and the second explicit feedback, in order to evaluate their effects on oral prosody. The performance of the experimental groups was compared with that of a control group in which no corrective feedback was applied in classroom practice. Results showed that self-repairs or prompts were most effective for L2 learners in improving their prosodic accuracy. In this study, it seemed that recasts and elicitations have some effect on learning the targeted prosodic forms, but the results suggested that learners' reactions to different corrective feedback types differ, and the findings indicated a direct relation between corrective feedback type and learners' language proficiency. Lyster & Saito (2010), in their meta-analysis, looked into the pedagogical effectiveness of oral corrective feedback on target language development. They found that corrective feedback has a salient effect on the process of language learning. They concluded that the effect sizes for prompts are larger than those for recasts. They also found that corrective feedback types are more influential on younger learners than on older ones; the reason might be the greater pedagogical flexibility of younger learners with learning materials. Ruili (2011) carried out a study on corrective feedback from various perspectives, trying to find out whether there was a positive effect on improving oral English accuracy. Moreover, two types of corrective feedback were investigated to distinguish which type was more effective on English accuracy and, if corrective feedback can improve oral English accuracy, whether the extent of improvement would be the same for the high, medium and low groups of students. Kennedy (2010) investigated the effect of teachers' corrective feedback at two different proficiency levels. Fifteen child ESL learners participated at two different proficiency levels. The results of the study showed that each proficiency group produced different types of errors: the low proficiency group's errors were more about content, while the high proficiency group's errors were more about form. Regarding teacher feedback, the teacher believed that the high proficiency group could do self-correction and the low proficiency group needed more help; therefore, for the low group, the teacher used the corrective feedback type that gives the correct form. The study found that the high proficiency group had a higher rate of uptake and repair compared with the low proficiency group. Besides, the high proficiency group might have more linguistic knowledge that enables them to improve their utterances. Ammar (2008), in a study, investigated the different effects of recasts and prompts on third person possessive determiners among Francophone learners. The participants received corrective feedback while they were doing some communicative activities. The results indicated that prompts were more effective than recasts in the learners' acquisition of third person possessive determiners. Tomczyk (2013) tried to make a comparison between teachers' perceptions and students' perceptions of oral errors and their corrective feedback. She considers corrective feedback an inseparable part of language acquisition. The findings indicate that there are differences and similarities in the views of the respondents of the two groups in comparison with the real situation in the class, which
showed that learners' error correction is not a simple issue and needs to be investigated further. Qiao (2013) investigated the effect of implicit feedback (i.e., recasts) and explicit feedback (i.e., metalinguistic) on American students' acquisition of Chinese directional verb compounds, which are used to express motion events in Chinese. There were three groups of intermediate-level learners of Chinese, and the task presented to the groups was picture description. One group received implicit feedback in order to correct their errors, another received explicit feedback, and the control group received no feedback. A pre-test, an immediate post-test and a delayed post-test were administered. Improvement was observed in both experimental groups' performance on the immediate post-test, while this was not observed in the control group. As well, the improvement of the recast group was stable in the delayed post-test, whereas the improvement decreased for the metalinguistic group on the delayed post-test. The results showed a probable short-term positive effect of the two types. Introduction As mentioned earlier, the purpose of the present study was to make a comparison between two teacher correction strategies (i.e., recast and metalinguistic) in Iranian EFL speaking improvement and also to investigate whether gender and age play any role in the application of such types of corrective feedback. The method of this study is thus basically quantitative and addresses the following research questions. 1) Which type of teacher correction (i.e., recast or metalinguistic) is more effective in improving learners' speaking? 2) Does gender make a significant difference in the application of the two types of corrective feedback? Participants The sample included 65 male and female learners selected from the Nokhbegan institute in Shiraz, Iran. The age of the learners ranged from 15 to 25 (mean age: 18.33). They were selected from an existing population of 95 English learners. They were divided into one control group and two experimental groups placed in three different classes: class A with twenty students, class B with twenty-two and class C with twenty-three students. The quick OPT test (version two) was used in order to run the placement; in fact, the aim of administering the OPT was to homogenize the participants at the intermediate level. The simple random sampling technique (SRS) was used for selecting the participants of this study. According to this sampling, all participants had the same chance of being included in the sampling frame. Instruments Multiple data collection instruments were used to obtain comprehensive and useful data and to increase the validity of the findings. The data were collected from the IELTS test as pre- and post-tests. The Oxford Placement Test (OPT) The Oxford Placement Test reports students' status on a continuous numerical scale. It is a very reliable tool for placing students in courses at the optimal level. This test is validated by Oxford University. If a student were reported to have a score of 61, it means that they are judged to have reached a very low level B2. A score of 71 would indicate a student comfortably at level B2. Pre- and Post-Tests Pre- and post-tests were administered to enable the researcher to compare learners' scores after the treatment sessions and achieve the goal of this research, that is, comparing the effects of the two types of corrective feedback and also checking whether there was any significant difference between these two types.
The IELTS speaking module was used as the pre-test in order to evaluate participants' prior knowledge. A certified IELTS examiner was invited to run the IELTS speaking interview. The test was interactive and close to a real-life context. The duration of the test was 11-14 minutes, with three phases: in part one, the participants answered questions about themselves and their family; in part two, they talked about a topic; in part three, participants had a longer discussion on the topic. The participants' scores on the IELTS test were measured with regard to how accurate the participants' pronunciation was. Features such as basic word pronunciation, linked speech sounds, correct sentence stress and correct use of intonation (rising and falling) were focused on by the examiner to measure the participants' English proficiency in speaking. The IELTS speaking module was repeated as the post-test. This time, the aim was to see possible changes in participants' scores after eight treatment sessions and compare them with the pre-test scores. Descriptive Statistics Descriptive statistics for the age distribution in the three groups were presented as follows: the frequency and frequency percentage of the studied sample according to age distribution in the three classes are shown in the table above. Descriptive statistics for the gender distribution in the three groups were presented as follows: as can be seen in the above table, the frequency and frequency percentage of the studied sample according to gender distribution in the three classes (A, B, C) were calculated. The mean and SD of participants' scores were determined separately in all three classes. The above table shows the mean and SD of participants' scores according to their classes in the pre- and post-tests. Regarding this table, before presenting the corrective feedback types, there were no differences among the participants' scores in each group in the pre-test. Nonetheless, the participants' scores in the post-test show a difference between the scores of the two experimental groups (A and B) and the control group (C), but it cannot yet be said that this difference is significant and that applying the corrective feedback type has been effective. The following table indicates the mean and SD of participants' scores according to classes and participants' gender. In the above table, the mean and SD of participants' scores have been shown according to classes and participants' gender in the pre-test and post-test. As seen in the above table, there is no difference in the pre-test scores of male participants in each class, but the difference is remarkable between the scores of the experimental groups (A and B) in comparison with the control group (C). This difference also holds true for the female participants. The following figure shows the participants' scores distribution based on class and gender.
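Before moving to the inferential tests, the descriptive statistics reported above (frequencies by class and gender, and means and SDs of the pre- and post-test scores) can be reproduced mechanically. A minimal pandas sketch follows, assuming a long-format score table with illustrative column names (class, gender, pretest, posttest) rather than the study's actual SPSS coding:

```python
import pandas as pd

# Illustrative long-format table: one row per learner. Column names are
# assumptions for this sketch, not the coding used in the original data file.
df = pd.read_csv("scores.csv")   # hypothetical file: class, gender, pretest, posttest

# Frequencies and percentages by class and gender (as in Tables 1 and 2).
counts = df.groupby(["class", "gender"]).size()
percentages = 100 * counts / counts.groupby(level="class").sum()

# Mean and SD of pre- and post-test scores by class (Table 3) and by class
# and gender (Table 4).
by_class = df.groupby("class")[["pretest", "posttest"]].agg(["mean", "std"])
by_class_gender = df.groupby(["class", "gender"])[["pretest", "posttest"]].agg(["mean", "std"])

print(counts, percentages.round(1), sep="\n")
print(by_class.round(2))
print(by_class_gender.round(2))
```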
Determining Variance Homogeneity The following table investigates the homogeneity of the data. In order to investigate data homogeneity, the Levene test was applied. In the investigation of the homogeneity of the whole test, F = 0.006 had a significance level of 0.994, which was above the acceptable level for rejecting the null hypothesis. Therefore, there is no significant difference among the variances of the groups and the assumption of variance homogeneity holds. The above table shows the effect of the intervention of corrective feedback types on the dependent variable, or post-test, in the three groups. Regarding the significance level of F, which is less than 0.001, we can claim that applying corrective feedback types to tackle pronunciation error correction is effective. As can be noticed in the above table, there are no significant differences among the studied groups (one control group and two experimental groups) in the pre-test, but regarding the significant value of F in the post-test at the 0.01 level, it can be said that presenting corrective feedback types as a method of rectifying pronunciation errors has been effective. Nevertheless, this table does not determine which corrective feedback type has been more effective. As a significant difference was observed among all three groups in the post-test, in order to recognize which type is more effective, the researcher made use of the Tukey test. As is shown in the ANOVA analysis table, there was no significant difference among the participants' scores of the three groups in the pre-test. To run the Tukey test, the participants' scores in all three groups have been located in one class. The following diagram shows this. Figure 1. Ranking of participants' scores under study in the pre-test. The X mark on the columns shows that the participants' scores in all three classes belong to one class. The following table shows the investigation of the differences among the participants' scores of the three groups in the post-test. As noticed in the ANOVA analysis table, a significant difference was observed among the participants' scores of the three groups in the post-test. According to the Tukey test, the participants' scores of the two experimental groups belong to one class, while the participants' scores in the control group belong to another class. This means that applying the corrective feedback types was effective, but the difference between them is not large enough to judge which corrective feedback type was more effective. The following diagram refers to this issue. The X and Y marks show that the participants' scores of the experimental groups are identical but different from the scores in the control group. The following table indicates the effect of applying the corrective feedback types on participants' pronunciation errors based on the gender variable of the studied groups. As shown in the table, class C is considered the control group and, according to the gender of the participants, the t value at the 0.05 level is not significant. This means that, according to the gender variable, there is no significant difference between participants' scores in the pre- and post-tests. On the other hand, according to corrective feedback types, the t values in classes A and B are significant separately. This implies that in these two classes there is a significant difference between the means of participants' scores in the pre-test and post-test. Therefore, presenting the two corrective feedback types has been effective on the pronunciation error correction of both males and females.
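The Levene, ANOVA and Tukey HSD steps reported above were run in SPSS; a minimal scipy/statsmodels sketch of the same sequence is shown below. The group labels, file name and column names are assumptions of this sketch rather than the study's coding:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumed layout: one row per learner with columns group
# ('recast', 'metalinguistic', 'control') and post (post-test score).
df = pd.read_csv("scores.csv")   # hypothetical file name
a = df.loc[df.group == "recast", "post"]
b = df.loc[df.group == "metalinguistic", "post"]
c = df.loc[df.group == "control", "post"]

# Levene's test for homogeneity of variances (the assumption checked above).
lev_stat, lev_p = stats.levene(a, b, c)

# One-way ANOVA on the post-test scores of the three groups.
f_stat, p_value = stats.f_oneway(a, b, c)

# Tukey HSD to locate which pairwise differences drive a significant F,
# i.e., whether the two experimental groups differ from each other or
# only from the control group.
tukey = pairwise_tukeyhsd(endog=df["post"], groups=df["group"], alpha=0.05)

print(f"Levene p = {lev_p:.3f}, ANOVA p = {p_value:.3f}")
print(tukey.summary())
```

In the pattern reported by the study, the Tukey output would group the recast and metalinguistic classes together and separate both from the control class.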
In order to investigate the difference between the effects of the two types of corrective feedback on males and females in classes A and B, an independent t-test was used. The table above examines the effect of the participants' gender on the corrective feedback types in the studied groups. Since the F value in the Levene test is not significant, the classification of participants by gender can be considered homogeneous. Because the t value is not significant at the 0.05 level, it can be said that there is no significant difference between the effects of the two types of corrective feedback across the participants' genders. Discussion and Conclusion Both questions of this study appear to have been answered by the results of the statistical analysis. According to the ANOVA results, the difference between the two types of corrective feedback (i.e., recast and metalinguistic) on the participants' pronunciation errors was not statistically significant, while significant differences were found between the students in the classes where the two teacher feedback types (recast and metalinguistic) were applied and those in the control group (class C). In line with this study, the findings of some earlier studies revealed that recast, elicitation, and metalinguistic feedback are the most commonly used approaches. Those studies maintained that recast is the approach most teachers apply when teaching vocabulary and pronunciation, whereas metalinguistic feedback is predominantly used for teaching grammar. Regarding the second question of this study, the test results show that there is no significant difference among the three groups in terms of the effect of the corrective feedback types across gender. In a study by Rezaei and Derakhshan (2011), the effect of the two types of corrective feedback (i.e., recast and metalinguistic) on task-based grammar instruction was investigated. Their findings revealed that both feedback types are effective in task-based grammar instruction, with metalinguistic feedback being the more effective of the two; this result may be attributable to the explicit nature of metalinguistic feedback. Gholizade (2013) also investigated the effect of recast and metalinguistic feedback on the accuracy, fluency, and complexity of the speaking performance of males and females. The findings showed that metalinguistic feedback is more effective for speaking accuracy, fluency, and complexity. In that study, as in the present one, there was no significant difference between males and females. That result may reflect the number of participants: 120 participants attended her study, which made a significant mean difference between groups attainable. In the present study, only 65 participants took part, so it is not surprising that the mean difference between the metalinguistic group and the recast group is insignificant. Abedi et al.
(2015) compared the effect of recast and direct corrective feedback on learners' pronunciation and obtained different results. The two feedback types were presented to two groups. When the collected data were analyzed, in contrast to the present study, the findings showed that recast had a significant effect on the learners' pronunciation, with the greater improvement seen in the group that received recast. In that study, recast outperformed direct correction, which was attributed to weaknesses inherent in direct corrective feedback. Terrell (1985) studied teachers' direct feedback and identified three reasons to avoid correcting students' errors directly: it does not lead to more correct language use in the future; it may produce negative affective feelings that interfere with learning; and it will probably cause students to focus their attention on language rather than meaning. Using a methodology similar to that of the present study, Zhuo (2010) focused on the relative effects of explicit and implicit recasts on the acquisition of English. The participants were divided into three groups: one group received explicit recasts, another received implicit recasts, and the third, the control group, received no corrective feedback. The findings showed that all three groups improved significantly over time, but the improvement was greatest in the explicit recast group. Results of some other studies, in line with the current one, revealed that corrective feedback strategies can improve EFL learners' performance depending on the type of error produced, and that explicit feedback types, of which metalinguistic feedback is one, have a greater impact on language learning and thus better support the learning process. The justification lies in the fact that EFL learners pay more attention to explicit corrective feedback.
Figure 2. Ranking of participants' scores under study in the post-test
Table 1. Age distribution in classes A, B, and C
Table 2. Gender distribution in classes A, B, and C
Table 3. The mean and SD of participants' scores in the three classes (A, B, C)
Table 4. The mean and SD of participants' scores according to class and gender
Table 5. Levene test to investigate the homogeneity of the data
Table 6. The result of the covariance analysis
Table 7. ANOVA test investigating the difference among participants' scores of each class in the pre- and post-tests
Table 8. Tukey HSD on participants' scores before the intervention of corrective feedback types
Table 9. Tukey HSD on participants' scores after the intervention of corrective feedback types
Table 10. Paired-samples t-test investigating the difference between pre- and post-test scores in the studied groups based on gender
Table 11. Independent t-test investigating the effect of gender on corrective feedback types in the studied groups
6,790
2016-11-24T00:00:00.000
[ "Education", "Linguistics" ]
High Throughput Methods in the Synthesis, Characterization, and Optimization of Porous Materials Porous materials are widely employed in a large range of applications, in particular, for storage, separation, and catalysis of fine chemicals. Synthesis, characterization, and pre- and post-synthetic computer simulations are mostly carried out in a piecemeal and ad hoc manner. Whilst high throughput approaches have been used for more than 30 years in the porous materials field, routine integration of experimental and computational processes is only now becoming more established. Herein, important developments are highlighted and emerging challenges for the community identified, including the need to work toward more integrated workflows. Introduction One of the greatest contemporary challenges to targeted materials science is accelerating the time to realize functional materials with desired properties. The obstacles to achieving this goal are multifarious and include factors such as:
• Composition space is vast and hence full exploration of combinatorial phase space is intractable, whilst blind exploration is inefficient.
• The financial and temporal costs of performing physical experiments limit what can be achieved on a practical timescale.
• Analysis and characterization of physical experiments is onerous and potentially nonlinear with respect to the number of components.
• Identifying descriptors that capture the leading factors that influence the desired property can be elusive.
• Computational experiments that could complement laboratory experiments may not predict outcomes correctly.
• Establishing feedback loops between the outcome of physical experiments and computational predictions can be non-trivial.
In particular, we highlight the growing trend for the virtuous circle where: 1) computer simulation screens are carried out to identify potential candidate materials for synthesis, 2) experiments are performed in a rapid manner, 3) the outcome of the experiments is relayed back to a simulation algorithm to decide how to modify the experiment or stoichiometry and to improve the property or properties of choice, and the cycle continues until some target condition is met. We feel this review is timely as robotic synthesis is becoming more routine, high performance desktop computing and supercomputing are more widely deployed, and the rise and extensive adoption of efficient machine learning approaches to screen materials, identify descriptors or principal components, and set synthetic targets is ever improving. Our aim is to highlight exemplars of major developments in approaches toward the realization of the virtuous circle that exploits the synergy between physical experiment, computational experiment, and the analysis and exploitation of data. Experimental High Throughput Methods for Zeolite Synthesis and Characterization Zeolites are most commonly identified as microporous framework silicates, composition SiO 2 , consisting of corner-sharing tetrahedra, where the silicon atom sits at the centre of a tetrahedron and oxygen atoms are at the vertices of the tetrahedra, as depicted in Figure 1. These materials typically contain channels and/or cages in one, two, or three dimensions with a pore size ranging from ≈4-15 Å, allowing for the transport of small molecules within the structure. By far the greatest uses of zeolites are in the fields of petrochemistry and fine chemicals, where they are used to separate oil fractions and catalyze the formation of chemical feedstocks.
Although zeolites are most commonly realized as framework silicates, there are also germanosilicates, aluminosilicates, aluminophosphates (AlPO) or silica-aluminophosphates (SAPO), borosilicates, and other variants, and the composition and stoichiometry of these materials can vary dramatically. For a thorough introduction to the field, the reader is referred to Wright et al. [17] In catalysis, zeolites typically have an aluminosilicate composition, with protons acting to supplement the charge of Al 3+ so that it is equivalent to Si 4+ (Si 1−x Al x H x ). Zeolite synthesis is relatively difficult and can be time intensive as well as expensive [18] due to:
• A tailored organic template, also known as a structure directing agent (SDA), often being required to achieve a particular topology.
• The synthesis often being carried out at quite aggressive pH, initially often 10-11 rising to >14.
• The timescale of synthesis ranging from hours to several days.
Moreover, bespoke laboratory ware ("bombs") is used to take advantage of autogenous pressure during the synthesis. Another complicating factor is that zeolites are kinetic products rather than thermodynamic products (e.g., all pure framework silicates are metastable with respect to quartz [19,20] ), which makes isolating phase pure samples particularly challenging. Indeed, the reproducibility of syntheses and optimization of synthetic conditions have been longstanding targets for the zeolite community. [21] These complex synthetic challenges are generally not manifest in MOF chemistry, though there is some overlap in zeolitic imidazolate framework (ZIF) chemistry. [22] Some zeolite phases have very narrow regions of stability and can only be made at a particular ratio of Si:Al (also often reported as Si/Al). Exploration of zeolite phase space has, for more than 70 years, involved methodically traversing ternary phase maps, scanning the ratio of Si:Al:M (where M is an alkali metal) to identify stability fields for different zeolites. Naturally, this type of approach to synthesis lends itself to HTS methods and we now focus on major developments in this field. Synthesis and Preparation The array of samples, referred to as the sample library, in HTE should preferably be produced on as large a scale and as rapidly as possible in order to allow for the most expeditious exploration of phase space. To achieve this and minimize the time required to produce each individual sample, HTS must be employed; this section will discuss the implementation of HTS, examples of zeolite discovery through HTS, and finally the robotization of HTS. Initial Method Development Zeolitic sample libraries are formed through a systematic survey of either the gel compositional space or the SDA-phase relationship, with the former first being achieved with zeolitic materials by Akporiaye et al. [23]
Figure 1. Building blocks for zeolites, separated into their constituent parts: tetrahedral atoms, which can be Si, Al, or P, and their linker, a bridging oxygen atom. Two examples of topological nets for zeolites, GME and SOD, are shown in the Topologies column as a) and b), respectively, with their polyhedral representations depicted in c) and d). MOFs with the same topology, and their corresponding building blocks, are shown in Figure 9.
Although HT methods for parallel protein synthesis had been used before, [24] Akporiaye and co-workers explored the Na 2 O-SiO 2 -Al 2 O 3 phase space for a Na 2 O-SiO 2 -Al 2 O 3 -H 2 O zeolite gel with a specially developed multi-autoclave apparatus (Figure 2A).
[25,26] The phase diagram produced via this HT hydrothermal synthesis was in partial agreement with the previously reported phase diagram produced by Breck, [27] with the authors attributing slight differences to variations in the water content in the gels formed by Breck. Notwithstanding the differences between the phase boundaries and the lack of formation of chabazite (CHA type) [28] crystals observed by Akporiaye, the general mapping of a particular composition to a particular zeolite topology appeared to agree, implying that HTS with a multi-autoclave was valid. The authors' multi-autoclave enabled up to 1000 different gel compositions to be tested, saving several orders of magnitude of time, with the time necessary for the hydrothermal synthesis alone being reduced from one and a half years to only 6 days. By exploiting the parallel synthesis technique, Akporiaye et al. explored the phase diagram of a more complex gel system containing not only the aforementioned compounds, but additionally some proportion of MeNH3 + , Li + , and/or Cs + . Thus Akporiaye et al. demonstrated that multi-autoclaves could be used to sample vast regions of compositional space, with the authors noting that the development of such HT methods would enable novel catalyst identification with greater efficiency and speed. Akporiaye et al. [30] and others [31,32] further demonstrated the transferability of this mass hydrothermal synthesis technique to AlPOs and other heteroatom zeolites. The Maier group notably developed an alternative multi-autoclave that enabled coarse X-ray diffraction (XRD) spectra of the samples to be subsequently taken, thus combining synthesis and characterization in a zeolite HT workflow for the first time in the academic literature (Figure 2B,C; further details are discussed in Section 2.2.1). [29] The microgram amounts of Ti-containing silicate TS-1 (MFI type) synthesized and then examined by Klein et al. [29] additionally demonstrated the possible prototyping capabilities of HTS to exhaustively sample regions of interest without prohibitive time or monetary investment. Other HTSs besides hydrothermal synthesis were later devised, with Zhang et al. developing an HT vapor-phase transport synthesis method for SAPOs where the dry gel composition-SAPO topology relationship was subsequently studied. [33] Following the pioneering work by these groups and others, [23,29,31,[34][35][36] Newsam et al. [37] noted the particular difficulty with zeolites in specifying which region of the compositional space to investigate. Even nearly two decades on, the translation from in silico zeolite design to a rigorous synthesis procedure is still difficult [38,39] as a reproducible one-to-one mapping of gel composition to a particular zeolite topology and composition remains elusive. Slight experimental variation can lead to significant and non-trivial effects on the resultant zeolite topology and composition, with the relationship between the contents of the gel and the final crystallized product being difficult to establish. To further complicate matters, not all points on the compositional phase diagram relate to crystalline products, with amorphous, non-porous, and/or mixed-phase products populating many regions. Thus, faster explorations of unknown phase space are necessary and accordingly automation has been employed to help streamline the HTS process.
Figure 2. A) The HTS autoclaves produced by Akporiaye et al. in 1998 to explore the compositional space of an aluminosilicate gel.
[23] B) The cross-section diagram of the multi-autoclave employed by Klein et al. [29] C) The related components prior to HTS. A) Adapted with permission. [23] Copyright 1998, Wiley-VCH. B,C) Adapted with permission. [29] Copyright 1998, Wiley-VCH.
Zeolite Discovery through High Throughput Synthesis The methodology for these HT discovery studies was succinctly summarized by Stock [53] as: i) define the phase space to be explored and design experiments accordingly, ii) generate a selection of reaction mixtures produced systematically via robotic arms, iii) attempt synthesis with the caveat that not all elements of the library will produce single-phase crystalline products, iv) isolate and separate, and finally v) characterize and/or analyze samples. The work by Corma and colleagues particularly illustrates this workflow with the ITQ zeolites, where the synthesis was largely automated through the use of a robotic arm that composed the gel solutions which formed the sample library. When targeting novel ITQ-24 (IWR type) compositions, [47] the authors first noted that the larger Si-O-Ge bond angles found in silicogermanate ITQ-24 were of a similar size to that of Si-O-B bonds. Therefore they hypothesized that beyond a given B 2 O 3 gel composition, B should be able to occupy all the Ge sites in silicogermanate ITQ-24 and thus enable a Ge-free borosilicate ITQ-24 zeolite to crystallize. A HT search of various SiO 2 -GeO 2 -B 2 O 3 gel compositions was performed in order to determine the proportion of B 2 O 3 necessary for the authors to successfully synthesize borosilicate ITQ-24. [47] As trivalent B will impart a negative charge onto the framework, a pure silica ITQ-24 analog would require a lower charge density in the SDA to reduce the number of charged defects being included during crystallization. Therefore, a variety of possible organic SDA (OSDA) candidates were trialled and one was found that produced ITQ-24 with Si/Ge ratios from 2 to infinity (i.e., pure silica), where a pure silica variant of ITQ-24 had previously been unobtainable synthetically. Screening a variety of possible OSDA candidates with HTS has been performed by many authors [54][55][56] as the SDA is an additional degree of freedom when exploring new possible topologies or compositions. A novel example of this is the discovery of the exceptionally low density ITQ-37 zeolite (ITV type). [50] This silicogermanate has an unusual topology due to the large elliptical 30-member rings and periodic framework interruptions. The framework interruption sites, preferentially occupied by Ge, are terminated with hydroxyls, resulting in T-OH bonds being inherent to the internal structure as opposed to predominantly arising from post-synthetic treatments and defects. Additionally, the novel zeotype, crystallizable as both the silicogermanate and aluminosilicogermanate form, is chiral due to the gyroidal channel formed from the 30-membered rings. Whilst the Corma group had previously selected a particular enantiomer of zeolite beta [57] through a chiral OSDA, [58] the study by Sun et al. [50] is the first example of zeolite discovery by considering a particular OSDA optical isomer, implying that the chirality of an OSDA is a further degree of freedom available for interrogation.
ITQ-37 was found to be stable after calcination at 813 K for up to 2 weeks, beyond which crystallinity is compromised, and had triple the initial activity toward bulky aldehydes for catalytic acetalization when compared with the common zeolite H-beta, demonstrating the possible utility of near-mesoporous and chiral zeolites for industry. Moreover, a novel large-pore silicogermanate zeolite ITQ-33 (ITT type) with 10- and 18-membered ring channels was discovered by the Corma group through varying the gel composition as well as the concentration of mineralizing F − present. [49] Whilst the process for discovering this zeolite is similar to prior examples, [47,50] the distinctive feature of this synthesis is the successful targeting of a large-pore zeolite by exploiting the ability of F − to stabilize double 4-membered ring motifs common in large-pore frameworks. [47,59] The corresponding large-pore ITQ-33 found by Corma and colleagues had a large Brunauer-Emmett-Teller (BET) surface area and superior hydrocarbon cracking capabilities compared with other zeolites. This study further exemplifies how HTS, directed by chemical intuition and prior knowledge, either a posteriori or a priori, can be employed to target specific types of catalysts or physical features. Implementation of Robotics and Automation The next major innovation in HTS was proposed by Caremans et al. [60] who developed an "all-in-one" automated synthesis and analysis system, with the only manual input being the supply of reagents to the robotic dispenser (Figure 3A) and the moving of filtrate to the ovens. The all-encompassing automated process described by Caremans et al. demonstrated no contamination and was successfully employed in synthesizing zeolite-2, [61] a microporous analog of the amorphous-walled mesoporous MCM-48 material, [62][63][64][65][66] and clathrasil phases, pure silica framework materials where only small molecules are able to pass between cages. [67] Whilst the samples produced by Caremans et al. [60] were less crystalline than the reference material, [61] evidenced by the broader peaks in the XRD spectrum, they nonetheless demonstrated that porous silica materials could be produced and analyzed with minimal human input. Janssen et al. [68] further developed the "all-in-one" automated HT research system proposed by Caremans and colleagues, [60] with Janssen et al. developing an automatic workflow for the: i) dispensing of pre-formed zeolite (Figure 3C), ii) pre-treatment of zeolites (Figure 3D), and iii) catalytic tests on modified zeolites. Whilst an impressive feat of engineering, the workflow deployed by Janssen et al. did not perform any zeolite synthesis and used pre-made commercial zeolites, falling short of a completely autonomous "start-to-end" research station. In the subsequent year, Serna et al. [69] produced a more advanced complete workflow for synthesizing, testing, and examining Ti-grafted MCM-41 and ITQ-2 for the epoxidation of large olefins and methyl oleate. The HT platform developed by these authors: i) dispensed the reagents for the post-synthetic Ti-grafting and silylation, ii) performed HT XRD, and iii) measured catalytic activity with gas chromatography. However, as with the work by Janssen and colleagues, [68] the HT process developed by Serna et al. [69] lacked any HT initial porous material synthesis and required human input to move samples from one HT station to another.
These studies demonstrate the difficulty in creating a single HT experimental station and thus, starting in the mid-2000s, the community moved toward a combined HTS and data-mining approach, with Moliner et al. [42] constructing an artificial neural network machine learning (ML) model based on experimental data to aid in the analysis. Others subsequently employed various ML models to analyze synthesis data, [70][71][72] though initially the use of these approaches was limited to examining the pre-collected experimental data and the models were not used for predictions outside of the range of the training set. Later ML models have been successfully employed for novel predictions that have been experimentally verified; further details can be seen in Section 3.4. Whilst other alternative high throughput production methods were developed, [73][74][75] the majority of new innovations are limited to HT structural characterization and activity or HT post-synthetic treatments. [44,[76][77][78][79][80][81][82][83] Notably, there is a lack of development of "full" HT library generation workflows, with current state-of-the-art systems only automating a few portions of the synthesis process or lacking robotization in important areas, such as zeolite synthesis or post-synthetic treatment. Also, the automated workstations are often isolated from other stations, with manual, as opposed to more efficient robotic, sample transport being required. Characterization The characteristics of a zeolite with respect to its structure, in terms of crystallinity and topological features, and chemistry, in terms of the environments present on and within the material, determine its efficacy and viability. This section will describe both the physical and chemical descriptors that have been employed to investigate zeolitic samples in a HT manner. Structural Properties Since zeolites are crystalline materials, XRD was the first method applied to characterize the crystallinity of the samples produced from HTS by the pioneering authors Akporiaye et al. [23] and Klein et al. [29]
Figure 3. A) The robotic dispenser of the automated synthesis and analysis system developed by Caremans et al., [60] B) the robotic arm employed by Moliner et al. to weigh solid reagents and dispense zeolite precursor into the Teflon vials for hydrothermal synthesis, [42] C) the solid zeolite dispensing station, and D) the automated ion-exchange solution dispensing used by Janssen et al. [68] A) Adapted with permission. [60] Copyright 2006, Elsevier. B) Adapted with permission. [42] Copyright 2005, Elsevier. C,D) Adapted with permission. [68]
Due to the design of the parallel multi-autoclave, Akporiaye et al. [23] were required to transfer the products to another container before performing analysis. However, the multi-reactor developed by Klein et al. [29] (Figure 2B,C) contained segregated chambers in the autoclave with a Si wafer placed on the base. Porous rods were then inserted into the microchambers after the hydrothermal synthesis treatment to separate the liquid and solid phases, with further drying of the zeolite solid left on the wafer. The zeolite crystals were then sequentially scanned by a focused X-ray beam. Due to the set-up devised by Klein et al., the final products from the HTS were already present on an XRD stage and therefore could be taken for XRD analysis without further transfer or modification. This enabled the spatial component during the initial gel formation to be preserved and thus allowed for simple mapping of the diffraction spectrum to the gel composition.
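Automating the assignment of measured diffraction patterns to reference phases is what makes this kind of composition-to-phase mapping scale. The sketch below is a deliberately simple illustration of one way to do it, a plain dynamic time warping (DTW) distance between a measured pattern and a small library of references; it is not the adaptable time warping method of Baumes et al. mentioned later in this section, and the patterns are synthetic placeholders rather than real diffraction data.

```python
# Minimal sketch: assign a measured XRD pattern to the closest reference phase
# using a plain dynamic time warping (DTW) distance. Illustrative only; the
# patterns below are synthetic placeholders, not real diffraction data.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW distance between two 1D intensity profiles."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def normalise(pattern: np.ndarray) -> np.ndarray:
    """Scale intensities to a maximum of 1 so patterns are comparable."""
    return pattern / pattern.max()

# Placeholder reference patterns (intensity vs. 2-theta bins) for two phases.
two_theta = np.linspace(5, 50, 300)
references = {
    "phase_A": np.exp(-((two_theta - 22.0) ** 2) / 0.1)
               + 0.5 * np.exp(-((two_theta - 31.0) ** 2) / 0.1),
    "phase_B": np.exp(-((two_theta - 25.0) ** 2) / 0.1)
               + 0.8 * np.exp(-((two_theta - 40.0) ** 2) / 0.1),
}

# A "measured" pattern: phase A with a slight peak shift and some noise.
measured = np.exp(-((two_theta - 22.3) ** 2) / 0.12) \
           + 0.5 * np.exp(-((two_theta - 31.2) ** 2) / 0.12)
measured += np.random.default_rng(0).normal(0, 0.02, measured.size)

scores = {name: dtw_distance(normalise(measured), normalise(ref))
          for name, ref in references.items()}
best = min(scores, key=scores.get)
print(f"Best-matching reference phase: {best} (distances: {scores})")
```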
The use of a multi-sample XRD stage, often automated with movement in at least the xy plane or similar, became standard going into the new millennium. [31,42,45,46,48,60,69,72] Baumes et al. further innovated the XRD analysis procedure in 2009 through the development of the adaptable time warping (ATW) technique, [84] an ML spectral assignment method which was significantly more reliable in assigning unknown and multi-phase spectra than traditional and other ML methods. The advent of this new method by the Corma group was crucial for screening large numbers of XRD patterns from HTS when scanning compositional space, accelerating the zeolite discovery process. Compositional imaging of zeolitic materials in a non-destructive manner was first introduced by Koster et al., [85] where the authors imaged an AgNa-Y zeolite (FAU type) using 3D transmission electron microscopy (TEM). By rotating the sample 143° and recording the TEM image, a reconstructed 3D representation of the crystal could be made whilst recording the composition of the surface. However, this approach is not appropriate for HT and would become a bottleneck process since the sample would need to be carefully rotated through a given angle interval. Despite developments in automated 3D-TEM imaging [86] and the predictions that this automation could be exploited for HT, [87] 3D-TEM for porous materials is still to be used in an effective HT workflow. If a non-tomographic imaging method is used, then compositional imaging can be employed in a HT manner through micro X-ray fluorescence (MXRF). MXRF involves focusing an X-ray beam to a small cross-section on the order of µm 2 and illuminating a small section of a multi-sample matrix. The emitted fluorescence is then recorded for a particular sample element of the array and, after a given time, the sample array is translated in the xy direction to have a different sample exposed to the fluorescing beam, resulting in the presence of Z ≥ 11 elements within the library being recorded. [88] Minogue and colleagues utilized MXRF to perform an HT screening of zeolites for their propensity and selectivity to Cs uptake from a solution of Cs and other metals in order to determine the optimal sieve for removing Cs from nuclear waste and other solutions containing a large number of cations. [89] Through the use of MXRF, 11 zeolites were screened for Cs + uptake, with the CHA type SAPO-34 and other small-pore zeolites having the greatest selectivity to Cs. The authors particularly noted that the HT screening took only a few hours as opposed to weeks, with such a speed-up being crucial for exploring a wider range of samples and reducing labor for each experiment. The utility of porous materials relates to their high surface area, where screening various topologies and treated zeolites for their accessible surface area (ASA) is crucial for determining their efficacy. Groen et al. developed a complete HT post-synthetic workflow followed by a HT BET surface analysis for various ZSM-5 zeolites (MFI type) in order to investigate post-synthetic treatments that maximize mesoporosity without framework degradation. [76] The authors employed a parallel post-synthetic approach with a library of pre-formed zeolites, with the desilication treatment variables being the: i) concentration of HF acid or hydroxide base, ii) counter-cation for the hydroxide base, iii) temperature, iv) duration, v) stirring speed, and vi) initial framework Si/Al ratio.
The library of samples was manually generated and then placed in a workstation containing isolated cells that allowed for separate temperature and stirring speed control. The novel aspect of this work was the characterization of the samples via N 2 adsorption isotherms conducted in parallel, performed on porous materials for the first time in the academic literature. Si was found to be more readily extracted from the framework than Al, with the mesoporosity of the samples increasing as the Si/Al increased. However, the zeolites with the greatest Si content, Si/Al > 50, had a lower mesoporous surface area than the samples with an Si/Al between 25 and 50. Groen et al. attributed this counterintuitive observation to the assistance of Al in the desilication mechanism reported by previous authors. [90] Another notable observation was how the extent of desilication varied with the identity of the alkali metal present in the base; a comparison of Li, Na, and K showed that NaOH yielded the greatest mesoporosity. These authors further improved their HT post-synthetic treatment method in subsequent papers through the use of robotic syringes to dispense the reagents, [78,91] greatly improving the possible speed-up afforded by post-synthetic HT methods. Chemical Properties A powerful technique to determine the internal chemistry of zeolites is Fourier-transform infrared (FTIR) spectroscopy, which can be used to probe the character of bonds present in the system. Snively et al. [92][93][94][95] first developed a 7-chamber parallel FTIR spectrometer that, whilst predominantly employed for metal-alumina supported catalysts, [93][94][95] was shown to be transferable to zeolites. [92,93] However, the main appeal of the device was the increased high-quality spectrum acquisition speed, as the apparatus collected seven spectra simultaneously (averaged to produce a final single spectrum) from the same sample, rather than collecting multiple spectra stepwise. [92] The sample chambers could contain different samples, [93] but often the chamber's contents were duplicated so as to reduce the measurement time. The Schüth group further developed this parallelized FTIR method shortly afterward, [96,97] but whilst Lauterbach and colleagues initially focused on increased acquisition speed, Schüth and co-workers focused on producing a device that enabled the analysis of a larger number of samples. This was achieved through a widening of the beam to generate a parallel beam that simultaneously shone on multiple samples. After transmitting through a sample, a portion of the resultant beam was re-focused on to a designated block of the detector array assigned to that particular sample. This resulted in 8 IR spectra being measured in parallel, with the authors further improving the apparatus to record 49 spectra. [96] Kubanek et al. [98] employed this new HT characterization method to investigate six Pt-ZSM-5 catalysts with three different metal loadings and two different post-calcination thermal treatments at different temperatures. CO was passed over the metal zeolites and the behavior of the Pt-CO adsorption peaks was used to determine the Pt speciation, with Kubanek et al. [98] stating that high loading and temperature treatment cause Pt to aggregate within the zeolite crystal into larger nanoparticles. Since zeolites are solid acid catalysts, it is desirable to characterize them in terms of acidity.
Temperature-programmed desorption of NH 3 (NH 3 -TPD) enables a variety of differing acid sites within a sample to be enumerated through the number of NH 3 desorption peaks in a temperature spectrum, where the strength of an acid site is related to the temperature required for bound NH 3 to be released. Wang et al. [99] used a multistream mass spectrometer (MS) developed earlier [100] to record the NH 3 released after saturation by following the intensity of the m/e = 16 mass fragment as a function of temperature. The multi-stream MS involved a parallel 80-channel reactor system where gas feeds were sequentially selected to pass through a quadrupolar MS in rapid succession, enabling the NH 3 -TPD of various zeolites of differing compositions and metal loadings to be measured in a HT manner. [100] Woltz et al. [101] produced a similar 6-channel reactor system with a MS attached where, through the use of sequential mass spectroscopy, a HT NH 3 -TPD was taken for Pt-loaded H-beta zeolites that were post-synthetically sulfated with H 2 S followed by oxidation. As with Wang et al., [99] Woltz and colleagues sampled the mass peaks of NH 3 from the post-catalyst gas streams consecutively, where the intensity of these peaks was employed to determine the acidity. The Pt/H-beta catalysts were found to be bifunctional catalysts with Brønsted zeolite and sulfate acid sites that enhanced the hydroisomerization of pentane. Another HT method to determine the acidity of a zeolite system was developed by Fischer and colleagues, where propene was introduced into various H-ZSM-5 aluminosilicate zeolites of differing Si/Al ratios and the near-edge X-ray absorption fine-structure (NEXAFS) spectra were recorded at a synchrotron. [102] Whilst characterizing a zeolite with synchrotron radiation was not unprecedented, Fischer et al. expanded traditional NEXAFS spectroscopy into a combinatorial form through the collection of X-ray absorption spectra sequentially, with the sample array being translated in the xz plane, granting an additional spatial dimension to the NEXAFS spectrum and thus enabling different samples to be measured without superimposing the spectra. The authors utilized the intensity of the carbon K-edge peak as an indicator for the amount of propylene retained, following evacuation, on the zeolite after saturation. Unsurprisingly, increasing the Al content resulted in a greater amount of hydrocarbon adsorbed. Therefore, Fischer et al. additionally recorded the normalized C core 1s → π* C=C bond excitation intensity for the chemisorbed propylene. This absorption intensity is proportional to the amount of reactive propene available on the zeolite and thus the authors used this as a measurement of the acidity for a single site; it was found that the acidity for a single Brønsted site increased with decreasing Al content until Si/Al = 150, at which point it roughly plateaued. Recording specific excitation intensities can allow for the probing of particular interactions, such as the formation of active species as the authors discussed, that other methods cannot indicate in an unambiguous way. Therefore the development of such a method, whilst dependent on substantive knowledge of the system in question, into a HT workflow could allow for the rapid screening of catalysts with respect to reactive intermediate formation and thus indirectly screen based on mechanistic characteristics.
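As a rough illustration of how an NH 3 -TPD trace is converted into an acidity estimate, the sketch below integrates the desorption signal over time and converts the evolved NH 3 into an acid-site density per gram of sample. The trace, heating rate, calibration factor, and sample mass are hypothetical placeholders rather than values from the studies cited above.

```python
# Minimal sketch: estimate acid-site density from an NH3-TPD trace by
# integrating the desorption signal. All numbers are illustrative placeholders.
import numpy as np

# Hypothetical TPD trace: MS signal for the NH3 fragment vs. temperature (K),
# containing two desorption peaks (weak and strong acid sites).
temperature = np.linspace(373, 873, 500)                               # K
signal = 1.0e-6 * np.exp(-((temperature - 500) ** 2) / 2000) \
       + 0.6e-6 * np.exp(-((temperature - 700) ** 2) / 3000)           # detector signal (A)

heating_rate = 10.0 / 60.0     # K s^-1 (10 K min^-1)
calibration = 3.0e-2           # mol NH3 per (A*s), from a hypothetical calibration run
sample_mass = 0.050            # g of zeolite in the reactor bed

# Convert the temperature axis to time and integrate the signal (trapezoid rule).
time = (temperature - temperature[0]) / heating_rate                   # s
evolved_nh3 = calibration * np.trapz(signal, time)                     # mol NH3 desorbed

# One NH3 molecule per acid site is assumed, giving a site density per gram.
acid_site_density = evolved_nh3 / sample_mass                          # mol g^-1
print(f"Estimated acid-site density: {acid_site_density * 1e6:.0f} umol g^-1")
```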
The Kaskel group developed an alternative acidity measurement procedure, initially for MOFs (see Section 4.2) and then for zeolites in 2012, using the instruments InfraSORB [103,104] and later InfraSORP. [105] These two pieces of apparatus were based on measuring the thermal response of a gaseous adsorbate on a sample with an IR sensor and relating the magnitude of the heat released on adsorption to the acidity. Multiple sample cells with separate sensors were then linked together in an array via a shared gas inlet and outlet channel to enable parallel acidity measurements. Bae et al. [106] examined the adsorption of CO 2 in alkali-metal-exchanged LTA and FAU type zeolites through the use of a multi-channel gas adsorption analyzer where each sample chamber contained an independent pressure sensor. Bae et al. reported that Ca-A had an affinity for CO 2 over N 2 . Further gas separation and adsorption work in the academic literature regarding zeolites has been performed primarily with HT computational screens of experimentally realized and hypothetical frameworks (see Section 3.3 for further details) as opposed to physical experiments. Additionally, academic research is now concerned more with larger-pore and less dense materials for gas sorption, most notably MOFs (see Section 5.4.5). Some authors have utilized gas adsorption for other types of characterization indirectly though, as with Fischer et al., where they studied the change in water pressure of separate isolated reservoirs connected to a set of zeolites as a function of temperature; [107,108] the resultant isochores were used to determine the hydrothermal stability of the zeolites, where the parallel nature of the measurements enabled the authors to record the stability in a HT manner. Experimental HT academic studies such as these are more common for gas adsorption studies performed on MOFs; Section 4.2 contains further details. Zeolites have been employed industrially as gas separators for decades; [109,110] however, the majority of recent academic research has been primarily concerned with other materials with greater void volumes. Catalytic Activity As mentioned in Section 1, zeolites are frequently employed as catalysts in the fine chemical and petrochemical industry, where their efficacy and stability are of vital importance. In order to maximize the impact of HTS, the largest possible array of samples should be tested contemporaneously so as to minimize the temporal costs of catalyst evaluation. Therefore, HT catalytic testing is desirable for evaluating the sample library, with this speed-up being particularly achieved when these HT activity measurements are coupled with the methods described in Sections 2.1 and 2.2. This section will introduce and discuss the HT variants of the traditional gas chromatography/mass spectroscopy methods used to measure activity as well as non-destructive HT evaluation procedures based on photon absorption. Early Methods of Catalytic Testing Microflow reactors were used long before zeolite HTS became prevalent, [16,111,112] where the first zeolite HT academic study was published by Creer et al. [112] in 1986. Six different catalysts were tested concurrently, with up to four different gaseous reagents being available for the catalytic testing during a single run.
In the six separate systems, one could vary the catalysts present and maintain the same gas flow, or have differing gas composition streams with the catalyst remaining fixed, enabling a vast number of degrees of freedom for exploring the catalytic search space. The products from the reaction were then analyzed with gas chromatography (GC). Whilst there were issues with regard to reproducibility, attributed to non-uniform temperature between the individual reactors, the work by Creer et al. [112] was clearly pioneering as microflow reactors became the standard method by which catalytic activity from zeolites was recorded. Using the system developed by Creer et al., [112] Bessell and Seddon examined the formation of larger hydrocarbons from olefins over H-ZSM-5. [113] The authors utilized an improved version of the parallel microreactor-GC system where a quadrupolar MS was attached to increase the analytical capacity of the system, as the hydrocarbons had similar GC retention times. [113] Bessell et al. found that as the temperature increased, the formation of aromatic products became more favorable and dominated at the expense of paraffin formation; increasing the Al content increased the cracking activity, rationalized as due to an increase in acidity. The use of GC in HT workflows to evaluate activity is ubiquitous and used for a variety of applications (Table 1). Mass spectroscopy has been employed to measure zeolite activity since 1987. [113] However, this was only for a small number of samples, with authors initially preferring to analyze products from zeolite catalysis with GC. Whilst mass spectroscopy had been employed since 1990 to measure the activity of metal-loaded non-porous supports in a HT manner, [161,162] it was not until 2003 that Wang et al. [100] developed an HT catalytic testing system incorporating a mass spectrometer (MS) which was used for zeolites. Wang and colleagues developed an 80-channel reactor system where the reaction effluent was sequentially but rapidly sampled. Whilst the MS itself was not parallel, the multi-stream MS could be used for recording the activity of a large number of catalysts in the same run, with the throughput of the system reaching 400 catalysts per week. The authors examined the efficacy of zeolite, alumina, and silica for the catalytic condensation of acetone, where measuring the relative intensity of the product mass peak compared to Ar allowed Wang et al. to record the conversion of acetone and the selectivity toward particular products for each catalyst (Figure 4); the H-beta and H-Y zeolites were noted to consume large amounts of acetone but had little selectivity toward any particular product. The multi-stream MS was later employed for further catalysis measurements of various metal-docked H-ZSM-5 zeolites for the conversion of CO and methane to ethene and aromatic compounds, [163] as well as for a HT NH 3 -TPD method (see Section 2.2.2). [99] Other authors later developed similar reactors that allowed for rapid sequential analysis, [120,[164][165][166] and beyond the porous materials field, Casacuberta et al. [167] recently employed a miniaturized 14 C mass spectrometry-based analyzer system [168] to sequentially determine the concentration of CO 2 present in seven seawater samples, where this latter example is of particular note due to the smaller size of the MS enabling a multi-instrument HT workflow. However, a parallel, concurrent MS for academic use is yet to materialize and be used for zeolite catalysts.
This inevitably limits both the number of samples and the time intervals at which products can be analyzed. When using an HT MS, this inverse relationship causes an elongation of the time between two product analysis points as the number of effluent streams increases, adversely affecting the temporal component of the activity measurement. Even if the temporal component is not desired, the shorter timescale for spectral acquisition would result in a greater uncertainty in the intensity measurements and would require more careful experimental planning to ensure the reagent exposure time before measurement is consistent.
In the HT screen by Ozturk and Senkan, the candidate catalysts were formed through sequential ion-exchange procedures, first with a Cu(NO 3 ) 2 solution and then with a (NH 4 ) 2 OsCl 6 solution for 2 days each, where CuOs-13X displayed greater activity toward NO abatement at all temperatures and maintained significant activity over a wider temperature region than Cu-ZSM-5. Moreover, CuOs-13X retained a majority of its activity after steam aging for 36 h at 450 °C. The discovery of this more effective catalyst clearly demonstrated the value of HT methods, as the mixed-metal nature of the catalyst, with 10% and 1% exchanged Cu and Os, would have resulted in its discovery only through serendipity or a search of the composition phase space, with the latter being possible through HT testing due to the large number of candidates that can be evaluated. Many authors following Ozturk and Senkan applied a similar HT catalytic testing procedure to find optimal catalysts for a variety of predominately petrochemical reactions. [173][174][175][176] Whilst the implementations of GC and mass spectroscopy above record the final products of a reaction, an alternative way of interrogating a catalyst, and particularly the mechanism, is through the use of temporal analysis of products (TAP), developed by Gleaves and colleagues in 1988. [177] This method involves a very short gas pulse being injected into the reactor and then rapidly evacuated on the order of milliseconds. The post-reactor gas stream is then sampled by an MS, for example, to evaluate the intermediate products formed just after contact with the surface. The utility of TAP arises from its capability to analyze intermediate products formed during the initial contact stage, where the formation of such products can give insight into the mechanism and pathway by which catalysis occurs. Van Veen et al. were the first to produce a parallel reactor with 12 channels that allowed for concurrent TAP experiments, where the authors investigated the adsorption enthalpy of ethane on 11 different zeolite samples simultaneously. [178] The reported values were acknowledged by the authors to be an underestimation of the previous literature values, though they justify the discrepancy by stating that their rapid screen measurements could be partially affected by the differing diffusion rates of adsorbates within the crystals, where these differences would be significant given the small time window of the measurements. This does suggest that careful calibration of pulse length is necessary to ensure that the results are not governed by mass transport effects, which will be of greater importance when comparing porous materials with differing channel and pore sizes.
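To make the Ar-normalized bookkeeping used with such multi-stream MS measurements concrete, the sketch below computes a reactant conversion and product selectivities from relative peak intensities for two channels. The channel data, species, and intensities are placeholders rather than the values reported by Wang et al., and real quantification would additionally require per-species response factors.

```python
# Minimal sketch: per-channel conversion and selectivity from MS peak intensities
# normalised to an Ar internal standard. All numbers are illustrative placeholders,
# and per-species response factors are deliberately omitted for simplicity.
channels = {
    "H-beta":  {"acetone": 0.22, "Ar": 1.00, "mesityl_oxide": 0.05, "isophorone": 0.03},
    "H-ZSM-5": {"acetone": 0.55, "Ar": 1.00, "mesityl_oxide": 0.12, "isophorone": 0.01},
}
acetone_feed = 0.80   # Ar-normalised acetone intensity measured in a catalyst-free blank run

for catalyst, peaks in channels.items():
    # Normalise the residual reactant signal to the Ar internal standard.
    acetone_out = peaks["acetone"] / peaks["Ar"]
    conversion = 1.0 - acetone_out / acetone_feed

    # Selectivity of each product relative to all detected products in this channel.
    products = {k: v for k, v in peaks.items() if k not in ("acetone", "Ar")}
    total = sum(products.values())
    selectivity = {k: v / total for k, v in products.items()}

    print(f"{catalyst}: conversion = {conversion:.2f}, selectivity = "
          + ", ".join(f"{k} {s:.2f}" for k, s in selectivity.items()))
```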
Photoluminescence and Photon-Based Spectroscopy As opposed to GC and mass spectroscopy methods, where the final desorbed product is physically recorded, photon-based analysis offers an alternative, non-destructive way of interrogating the products or intermediates. Because the samples and products are not consumed during the measurement, other analysis procedures can be done following the methods outlined below. Since these techniques allow the products to be subject to multiple measurements, they are ideal for HT systems, as a single effluent stream can yield various spectra detailing different behavior, maximizing the data produced from each sample in the library and thus improving throughput.
Figure 4. The distribution of recorded products for the catalytic condensation of acetone over the sample library investigated by Wang et al. [100] using the HT multi-stream MS developed by the authors. The products recorded were: isobutene (IBE), acetic acid (AA), isophorone (IPHO), mesityl oxide (MO), and mesitylene (TMB) (a detailed reaction scheme and product structures can be found within ref. [100]). Reproduced with permission. [100] Copyright 2003, American Chemical Society.
A novel method of activity measurement on zeolites, named laser-induced fluorescence imaging (LIFI), was proposed by Su and Yeung, [179,180] which measures activity via fluorescence. Whilst initially applied to V 2 O 5 catalysts, the procedure involved the samples being deposited separately on a steel plate and then placed in the reactor. The reagents are then passed over the samples with the system being simultaneously illuminated with a laser. The fluorescence of the product, naphthoquinone for the studies by Su et al., [179,180] is captured by a camera, enabling both spatial and temporal measurements to be performed. Whilst this is a relatively simple experiment to perform and analyze in a HT manner, this method requires the product to fluoresce and thus has limited application outside a few specific reactions. Nevertheless, HT photoluminescence to record activity has been used by authors such as Atienzar et al. [181] to screen various alkali metal zeolites for activity toward the formation of the phenylenevinylene oligomer, demonstrating the use of such an approach for certain systems. The use of UV-vis to determine the efficacy of a zeolite was first performed by Gao et al., [182] where the authors developed a multi-flow reactor with a UV absorption cell that the effluent gas passed through after catalysis. Through the use of a unidirectional translation stage to move the UV cells, the products from up to 80 catalysts could be sequentially analyzed in quick succession, resulting in rapid, HT testing of the catalysts. The authors employed the set-up to investigate the NO abatement of a series of mixed-metal exchanged H-ZSM-5 samples by recording the absorbance at 214.5 nm, the highest wavelength absorbed by NO, in the post-catalyst gas flow. Whilst no combination of Co/Ce or Ce/In ZSM-5 catalyst outperformed the industrial standard at the time, Pd/Al 2 O 3 , Gao and colleagues nevertheless demonstrated an alternative manner for measuring activity. However, this approach is inherently hindered by the translating stage, forcing the samples to be slightly delayed or the measurement points to be shifted by some amount, reducing the comparability of the resultant spectra.
Moreover, the use of UV requires the reagents or products to have a non-overlapping peak that is easy to measure and relate to the concentration, which becomes increasingly difficult as the number of species in the effluent flow increases due to multiple products being produced. Furthermore, such a technique would be impossible to use if no absorption occurs in the UV-vis window at all. HT UV-vis spectroscopy is not limited to exclusively recording products from catalysis and was successfully employed in 2010 by Horcajada et al. [183] to indirectly measure the toxicity of several Fe-containing MOFs in parallel with a UV-vis active dye after exposure to said MOFs (see Section 4.2 for further details and discussion related to the use of UV-vis spectroscopy to characterize and evaluate MOFs). FTIR had initially been used in a HT manner only to characterize zeolite catalysts (see Section 2.2.2) but in 2003, the Lauterbach group applied their HT FTIR spectrometer [92] to record the nitrous oxide storage capabilities of metals on γ-alumina supports using a 16-channel system. [184,185] Fickel et al. later employed this set-up to interrogate Cu-zeolites for NO x abatement in 2011, where the Cu small-pore zeolites were seen to be more robust and maintained greater activity after ageing than the comparative Cu-ZSM-5 zeolite. [186] The Claus group developed an alternative HT gas probe that sequentially screened first metal-alumina catalysts and then poisoned Fe-zeolites for NO x reduction activity in 2003 and 2010, respectively. [79,187] This procedure involved a gas probe being translated over a honeycomb monolith, where the channels contained different catalysts, and then lowered into the desired channel, with a gasket being used to prevent the gas escaping to another channel (Figure 5). The products were then subsequently analyzed by an on-line FTIR spectrometer. Kern et al. [79] examined the effect of post-synthetic treatments of the various Fe-zeolites, where it was found that the introduction of Cr or Cu resulted in an increased production of N 2 but concurrently increased the amount of the undesirable greenhouse gas N 2 O produced, first only to a mild degree at low temperatures but then doubling at the higher temperature of 450 °C. This is a particular concern as the exhaust gas that the catalyst operates on is projected to be at or above this temperature for the majority of the vehicle's journey. Moreover, the addition of these transition metals resulted in a greater amount of ammonia slippage, the amount of NH 3 reducing agent which does not react and would be expelled from the vehicle exhaust, meaning these metals result in a less efficient catalyst and increase the toxicity of the automotive emissions. Additionally, the authors compared the deactivating effects of alkali and alkaline earth metals on the Fe-MFI and the commonly used V 2 O 5 -WO 3 /TiO 2 catalysts, where it was found that the MFI type zeolites were more resistant to this poisoning. Further evaluation with UV-vis spectroscopy determined that rather than causing a change in Fe speciation or affecting the Fe's reducibility, the alkali cations simply blocked some of the channels and pores. This study therefore demonstrates the efficacy of HTE, as the effects of a wide range of different post-synthetic ageing treatments could be evaluated in a relatively short amount of time, about 20 h for 128 samples, therefore allowing the durability and resistance of catalysts to be reported along with their effectiveness.
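The UV absorbance readings described above relate to the residual NO concentration through the Beer-Lambert law, A = εlc. The sketch below shows that conversion for a bank of effluent streams; the molar absorptivity, path length, inlet concentration, and absorbance values are hypothetical placeholders, not those of Gao et al.

```python
# Minimal sketch: convert UV absorbance at a single wavelength into an NO
# concentration via the Beer-Lambert law, A = epsilon * l * c. All values are
# hypothetical placeholders for illustration only.
epsilon = 5.0e3          # L mol^-1 cm^-1, hypothetical molar absorptivity of NO at 214.5 nm
path_length = 1.0        # cm, absorption cell path length

absorbance_by_channel = {"Co/ZSM-5": 0.42, "Ce/ZSM-5": 0.35, "Co-Ce/ZSM-5": 0.21}
inlet_concentration = 1.0e-4          # mol L^-1 NO entering the reactor (hypothetical)

for catalyst, absorbance in absorbance_by_channel.items():
    outlet_concentration = absorbance / (epsilon * path_length)       # mol L^-1
    no_abatement = 1.0 - outlet_concentration / inlet_concentration   # fraction removed
    print(f"{catalyst}: outlet NO = {outlet_concentration:.2e} mol/L, "
          f"abatement = {no_abatement:.0%}")
```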
Prior to the work by Kern et al., [79] thermography was employed for screening various catalysts. Cypes et al., [188] for example, investigated the activity of various catalysts, including mordenite, toward CO oxidation. The activity was recorded through an IR camera that imaged a 16 × 16 array as CO in air was passed over the samples and registered the change in temperature associated with the catalyst oxidizing CO. Whilst thermography can rapidly screen catalysts, with the Maier group commonly employing this, [189] it is unable to give any information with regard to selectivity or the prevalence of side reactions as a full FTIR spectrum can. Critical Remarks Despite HTS being able to traverse vast regions of phase space, phase space is effectively infinite and thus a small region must be specified for investigation. However, identification of compositional ranges that are expected to contain novel topologies or contain samples with desirable features is difficult. [38] Therefore, contemporary HT zeolite work focuses predominantly on targeting or screening via computational methods prior to synthesis (see Section 3), with novel zeolite discovery through HTS, for example, becoming increasingly infrequent apart from a few instances. [39,56] Beyond HTS, HTE focuses on optimization or the benchmarking of catalysts, with this avenue similarly involving simulations to pre-screen and flag samples for further investigation. Such developments are to be expected, and indeed welcomed, due to the continual increase in computational capabilities coupled with the unexplored experimental phase regions increasing in complexity and dimensionality over time. With regard to automation and mechanization, synthesis, characterization, and catalytic evaluation either have only some sections automated [49,69,145] or, if all desired procedures are robotized, there is never a system for transferring the components from one station to another without manual intervention. [60,68] This results in a major bottleneck inhibiting massive scale HT and prohibits self-contained virtuous circles from being formed. Whilst many recent developments have increased the speed of analysis or enabled in situ analysis, [190][191][192][193][194] these methods are still applied to samples sequentially rather than concurrently, where parallelizing these new methods would enable a substantial speed-up of current HT workflows. Despite the ubiquity of robots employed for experimental interrogations of zeolites, linking HT synthesis to characterization and activity measurements still remains a significant challenge, demonstrated by the use of pre-formed zeolites in characterization and activity workflows, preventing a "full-cycle" synthesis-treatment-characterization-test workflow for a defined region of parameter space with minimal human input. Experimental HT work with zeolites is a relatively mature field, with HT catalytic testing and HTS being ≈34 and ≈22 years old, respectively, whilst comparable work with other porous materials such as MOFs has only developed in the last 15 years (see Section 4). As a consequence of the developed nature of zeolite HTE and its pervasive use in industry (where HT autoclaves, XRD systems, and reactors are available to purchase), simulation and work with other newer porous materials have dominated the literature for the past half-decade, save for notable examples highlighted here.
Nevertheless, despite the plethora of work relating to zeolite HTE, the issues regarding automation mentioned here continue to affect zeolite HT in particular and HT work in general.
Computational Screening of Zeolites
Zeolites are valued for their exceptional shape selectivity, making them relevant to many industrial applications. High throughput computational screening (HTCS) of these materials allows for the rapid identification of potentially industrially relevant structures, bypassing the expensive and time-consuming nature of experiment. Structure-property relationships vital to the understanding of thermodynamic and transport properties can be determined through computational work, which can help to identify optimal compositions. These screening approaches are facilitated by both experimentally determined and computer-generated structures, available in different forms as described in Section 2.
Databases
To assist the high throughput screening of siliceous zeolites, there are a number of structural databases available, the best known of which is the IZA Structure Commission database (http://www.iza-structure.org/databases); [195] synthetically reported structures undergo a refinement process in which unit cell lengths are fixed and atomic coordinates are adjusted through least squares refinement. As of April 2020 there were 248 distinct topologies reported in the database, each of which has been given a three-letter code derived from the name of the corresponding synthesized material. The three-letter codes refer only to the connectivity of reported structures, and not to any particular material or composition. All topologies have an idealized siliceous form associated with them, despite many of these structures being currently inaccessible synthetically in that form; Crystallographic Information Files (CIFs) are readily available from the website for these idealized structures. Typically this database is the first port-of-call in a high throughput screen, as materials with the described connectivity have known synthetic pathways. The Deem database is an example of a hypothetical set of siliceous structures; [196,197] other such databases exist, with different methods for generating the structures, such as that of Foster and Treacy. [198] The structures in the Deem database are generated via a process of simulated annealing followed by structure optimizations using both the Sanders-Leslie-Catlow (SLC) [199] and the van Beest-Kramer-van Santen (BKS) [200] force fields. In total there are 2.6 million distinct structures present in this database, all of which are available in CIF form containing their topological information, as well as associated energies for the structures calculated using the aforementioned force fields. Although millions of structures have been generated using this method, a subset of only ≈330 000 lie within +30 kJ mol −1 per Si of alpha-quartz (the most thermodynamically stable phase at ambient temperature; see also Henson et al. [19]) when using the SLC force field, a window which defines the upper bound of energy for known zeolites. It should be noted that here the generation of structures is done in such a way as to maximize the number of energetically favorable structures by use of the ZEFSAII program, [201][202][203] and thus this database contains by far the largest number of such favorable structures.
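In practice, this energy criterion amounts to a simple pre-filter over the database before any expensive calculations are launched. The following is a minimal sketch of such a filter, assuming the per-Si force-field energies have already been extracted; the record format, structure identifiers, and function name are illustrative assumptions rather than the actual Deem database tooling.

```python
# Minimal sketch: keep only frameworks within the +30 kJ mol^-1 per Si window
# above alpha-quartz. Records and field layout are illustrative placeholders.

ENERGY_WINDOW = 30.0   # kJ mol^-1 per Si above the alpha-quartz reference

# (structure id, SLC force-field energy relative to alpha-quartz, kJ mol^-1 per Si)
candidates = [
    ("PCOD8000001", 12.4),
    ("PCOD8000002", 41.7),
    ("PCOD8331112", 8.9),
]

def within_window(relative_energy, window=ENERGY_WINDOW):
    """True if the framework lies within the energy window above quartz."""
    return relative_energy <= window

feasible = [name for name, energy in candidates if within_window(energy)]
print(feasible)  # ['PCOD8000001', 'PCOD8331112']
```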
Owing to its extensive quantity of thermodynamically favorable structures, and the availability of CIFs through the Predicted Crystallography Open Database (PCOD) [204] and the Atlas of Prospective Zeolite Structures, [205] the Deem database remains one of the most widely explored hypothetical databases of zeolitic structures for use in high throughput computation. AlPOs (zeolites in which the tetrahedral (T) sites are occupied by aluminum or phosphorus atoms in an alternating fashion) have also been explored in a high throughput manner. Innovative work by Li et al. generated over 80 000 hypothetical AlPO structures by variation of the stacking sequence of six rings, resulting in two newly synthesized materials; [206] this work led to further high throughput studies by the authors, examining the necessity for heteroatoms in their database of structures. [207] Although useful, this database is far less extensive than the Deem database owing to its method of construction, as it does not explore the varied composite building units that zeolites can be made from.
Treatment of Aluminosilicate Structures
Currently there is no database that allows for the rapid screening of aluminosilicate zeolites (although some 43 000 structures were assessed, but not made publicly accessible, in a study by Muraoka et al. [208]). Databases that do contain aluminosilicate structures, such as the IZA, typically use partial occupancies to describe the aluminum and charge-compensating cation positions, which creates difficulty when attempting to use these structures in a screening approach. A commonly used method when studying aluminosilicate structures is to apply Löwenstein's rule of aluminum avoidance, [209] which states that Al-O-Al linkages are unfavorable, to describe the aluminum distribution; another such assumption is to place aluminum atoms in the structure according to Dempsey's rule, ensuring maximum separation of aluminum atoms within the structure. [210] These methods are employed in order to capture the chemistry of functional zeolite catalysts, which remains one of the greatest challenges in zeolite science to date. Significant modeling difficulties arise from the effects of SDAs, hexacoordinate aluminum, and extra-framework cation and aluminum disorder; the combinatorial nature of these hindrances makes the structural elucidation of zeolite catalysts costly. Toward this goal, Fletcher et al. [211] performed the first exhaustive periodic DFT study on a fully enumerated data set in which each unique structure associated with a particular aluminum content was evaluated. Surprisingly, it transpired that when the charge-compensating cation is a proton, the lowest energy configuration is associated with a non-Löwenstein structure. Conversely, when the counter-cation is a common alkali metal, Löwenstein structures are the most stable form. Recent computational studies by Heard et al. [212] have shown that the presence of water suppresses the driving force for the formation of non-Löwenstein Al-O-Al linkages. Whilst these purely DFT studies are insightful, it is currently not practical to carry out a full DFT ranking of important catalyst structures like ZSM-5 due to the combinatorial explosion and huge compute cost (for ZSM-5 with the MFI structure type there are 24 distinct T sites in the monoclinic form). However, recent work by Evans et al. [213] has reported a new machine learnt aluminosilicate model that is similar in cost to a force field but presents accuracy comparable to DFT.
Remarkably, this machine learnt model, trained on ≈1600 DFT-optimized crystal bulk configurations, was able to blindly predict the correct ranking of protonated mordenite geometries and surface slabs of chabazite, neither of which featured in the training set. The advent of such models heralds, for the first time, the possibility of identifying a manifold of plausible, low-energy structures that are likely to be representative of real catalysts.
Screening Techniques
For the screening of microporous materials such as zeolites, there are a number of possible techniques that can be used, with varying degrees of accuracy and computational cost. These range from purely structural and geometric considerations to higher-level techniques which involve calculating the precise energetics of the system in order to rank stability.
Textural and Geometric Characterization
Nanoporous materials such as zeolites and MOFs excel at the adsorption and separation of molecular species, and so a useful first step is to calculate the relevant textural/geometric properties that would impact these processes, such as the pore limiting diameter (PLD) and the largest cavity diameter (LCD). This can be done relatively cheaply in computational terms, and several programs are available that make evaluating these measures very rapid: Zeo++ is a widely cited program, and is able to calculate these properties as well as others using Voronoi decomposition, which produces a graph representation of the free space, allowing the resulting network to be analyzed for features of interest. [214] Poreblazer splits the structure into cubelets and identifies which of these overlap with framework atoms, allowing the free space to be assigned as those cubelets with no overlap; [215] the Hoshen-Kopelman cluster labeling algorithm is then used to analyze the connectivity. [216] A novel approach to examining pore geometry was developed by Lee et al., whereby unique fingerprints of nanoporous materials are generated based on their channel and pore systems. [217] Example fingerprints are shown for two zeolites in Figure 6. Comparison of these fingerprints between different structures allows the identification of structures with similar pore geometries, enabling comparison between different zeolites and also allowing MOFs or other nanoporous materials with similar pore geometries to be identified. Techniques such as this could prove very revealing in correlating the features of known data sets to a wider range of nanoporous materials, allowing for the identification of high performance porous networks of a different class. Similarly, a geometry-based descriptor has been developed by Martin et al. to capture the shape and size of a material's accessible void space in a 3D vector, termed a Voronoi hologram. [218] Combining this representation with a modified similarity coefficient and dissimilarity-based selection algorithms allows for diversity selection on a given set of porous materials; use of such techniques can allow for a systematic reduction in the number of structures investigated in a high throughput study without resorting to random selection.
Computational Methods
The use of molecular simulations to predict thermodynamic and transport properties is valuable for the characterization of nanoporous materials. Highly accurate approaches such as those based on quantum mechanical calculations can be used to compute electronic structure properties of the system.
However, when employed in HT workflows, these methods are generally restricted to relatively small sample sizes due to their high computational cost. Simulations based on classical mechanics [219] are typically used in high throughput workflows due to their ability to determine properties of interest while striking a balance between accuracy and execution time.
Figure 6. Fingerprints for two zeolites, with the zeolite structure in red and the randomly sampled points used to identify the pore structure in blue. The 3D pore network of PCOD8331112 can be seen from its single long interval in the 0D barcode, suggesting that the pore system is connected; the disjointed nature of the pore system in DON can be seen from its eight long intervals in the 0D barcode, suggesting that eight separate channel systems exist. These conclusions are evident by inspection of the structures on the left. Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [217] Copyright 2017, The Authors, published by Springer Nature.
A variety of techniques exist for the calculation of adsorption properties; two commonly used schemes are grand canonical Monte Carlo (GCMC) and its extension, configurational-bias GCMC (CB-GCMC), [220] which allow for the prediction of thermodynamic equilibrium properties such as heats of adsorption, adsorption isotherms and isobars, and the selectivities of mixtures. [221] GCMC simulations are typically used for the modeling of rigid adsorbates, while CB-GCMC can be employed for the treatment of flexible molecules in order to generate distributions of adsorbate conformers. Molecular dynamics (MD) simulations are commonly used to calculate transport properties such as diffusion coefficients, although other approximations can be made to determine these characteristics, as mentioned later in Section 3.3.2. [222,223] We note that free energies of adsorption can be calculated from molecular dynamics simulations; however, the typically slow convergence of averaged ensemble properties such as adsorption energy means GCMC and CB-GCMC are usually more efficient.
Framework Flexibility
Modeling framework flexibility can have a considerable impact on the calculation of properties; however, lattices are often modeled as rigid to reduce computational cost. This has been shown to be a reasonable assumption under certain conditions; a study by García-Sánchez et al. [224] highlighted that the effects of framework flexibility on adsorption properties were small, whereas their influence on transport properties was more pronounced. These findings were investigated further by Krishna et al., [225] who determined that framework flexibility had no influence on self-diffusivities in an examined subset of cage-type zeolites interconnected by 8-membered rings. More recently, the separation of ethane/ethene was investigated by Bereciartua et al. [226] using ab initio MD, which showed that framework flexibility is key to this separation process. Similar consideration is needed when treating MOFs; a recent study by Witman et al. [227] revealed significant changes in the separation ability of MOFs when framework flexibility was taken into account, leading to the conclusion that its inclusion is necessary to accurately rank the performance of structures. Based on these findings, the necessity of modeling framework flexibility depends on the structures and the sensitivity of the processes under investigation, and so neglecting flexibility carries a potentially significant error.
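As a concrete illustration of the transport-property side of such workflows, the sketch below estimates a self-diffusion coefficient from the slope of the mean-squared displacement via the Einstein relation, MSD(t) ≈ 6Dt at long times. It is a minimal example that assumes an unwrapped trajectory array is already in hand; it is not tied to any particular MD package or to the specific studies cited above.

```python
import numpy as np

def self_diffusion_coefficient(positions, dt):
    """Estimate the self-diffusion coefficient from an unwrapped trajectory.

    positions : array of shape (n_frames, n_molecules, 3), unwrapped coordinates
    dt        : time between frames
    Returns D (length^2 / time) via the Einstein relation MSD(t) ~ 6 D t,
    fitted over the second half of the trajectory to target the diffusive regime.
    """
    disp = positions - positions[0]                 # displacement from the first frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)      # average over molecules
    t = np.arange(len(msd)) * dt
    half = len(msd) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0

# Synthetic check: an ideal 3D random walk with per-step std 0.1 should give D ~ 0.005
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.1, size=(2000, 50, 3)), axis=0)
print(self_diffusion_coefficient(traj, dt=1.0))
```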
Applications
This section highlights high throughput approaches applied to the identification of ideal zeolite structures for a variety of purposes, as well as the unique descriptors discovered in these studies that allow for the rapid screening and optimization of structures.
Adsorption
The storage of gases such as hydrogen or methane, for use as fuels, is an active area of research. However, a major drawback to these technologies is the relatively low volumetric energy density while in gaseous form. Carbon capture is another topic currently garnering significant attention, due to the ever-increasing impact of the greenhouse effect; [228] capture and sequestration of greenhouse gases such as carbon dioxide is thus of urgent interest. Nanoporous materials offer a potential solution to these problems by adsorbing the gases within their pores, permitting a higher density of the stored gas at ambient conditions. Zeolites, being one of the most widely studied classes of nanoporous materials, have been proposed as promising candidates for this application. Carbon Sequestration: A study by Kim et al. on the uptake of CO 2 in siliceous and aluminosilicate zeolites showed interesting trends between free volume, largest included sphere, and uptake; [229] a range of 8 < LCD < 13 Å for the largest included sphere was shown to be optimal to promote higher uptakes. Beyond this point the free volume inside the pores increased, but a lack of available adsorption sites caused an overall decrease in uptake. The IZA [195] and Deem [197] databases were both used in this study, and the challenge of screening aluminosilicates was overcome by generating structures through the random replacement of silicon with aluminum while obeying Löwenstein's rule (a minimal sketch of this substitution step is given below). The force field developed by García-Sánchez et al. [230] was employed for the simulations, which has been shown to reproduce experimental isotherms across many topologies, and improves on previous models by allowing for the free movement of extra-framework cations. The results of the study allowed interesting correlations to be uncovered between adsorption of CO 2 in siliceous zeolites and that in aluminosilicate structures; the two main descriptors found to increase CO 2 uptake were topologies with a large free volume, and those with the greatest fraction of framework-framework atom distances between 3 and 4.5 Å. Incorporation of descriptors like these into a high throughput workflow allows for a drastic reduction in compute cost; promising aluminosilicate structures can be identified rapidly without the need to calculate properties at varied Si:Al ratios, which would otherwise drastically increase the number of structures to evaluate. More recently, a study by Fang et al. sought to develop a transferable force field fit to energies from density functional theory and coupled cluster calculations. [231][232][233] By fitting to high-level quantum mechanical data, a broader description of the energy landscape can be obtained for areas that may not be sufficiently sampled by adsorption experiments, making the force field more transferable to a wider range of applications. The data set contained a combination of siliceous and aluminosilicate structures with both sodium and/or potassium as the extra-framework cation, allowing for high throughput identification of promising structures and identification of the optimal Si:Al ratio for adsorption across a range of topologies.
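This Löwenstein-constrained aluminum placement can be sketched as a random substitution over a T-site connectivity graph, as shown below. The graph representation, function name, and toy ring framework are illustrative assumptions only; they are not the code actually used in the studies cited here.

```python
import random

def lowenstein_substitution(neighbors, n_al, max_attempts=10000, seed=0):
    """Randomly replace Si with Al on a T-site graph while forbidding Al-O-Al pairs.

    neighbors : dict mapping each T-site index to the T-sites it shares a bridging
                oxygen with (an assumed, pre-built connectivity graph)
    n_al      : number of aluminum atoms to place
    Returns a set of Al site indices, or None if no valid placement was found.
    """
    rng = random.Random(seed)
    sites = list(neighbors)
    for _ in range(max_attempts):
        rng.shuffle(sites)
        al_sites = set()
        for s in sites:
            if len(al_sites) == n_al:
                break
            # Loewenstein's rule: reject any site adjacent to an existing Al
            if not any(nb in al_sites for nb in neighbors[s]):
                al_sites.add(s)
        if len(al_sites) == n_al:
            return al_sites
    return None

# Toy 8-site ring standing in for a framework connectivity graph
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(lowenstein_substitution(ring, n_al=3))
```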
Similarly to the study by Kim et al., [229] aluminosilicate structures were generated by random substitution of Al in place of Si while obeying Löwenstein's rule, although the only topologies investigated were those from the IZA database with verified experimental aluminosilicate analogs. Zeolites with a range of ring sizes were investigated, with a focus on 10-membered ring structures and those with large pore volumes in order to avoid possible pore blocking by guest molecules. Data obtained from these simulations showed that aluminosilicate structures adsorbed greater quantities of CO 2 than siliceous structures due to the presence of strong interactions between the CO 2 and the extra-framework ions. However, siliceous structures displayed a higher working capacity, as these interactions prevented the adsorbed molecules from desorbing and leaving the framework. Methane Storage: A wide range of nanoporous materials were investigated in a high throughput study by Simon et al. in order to identify their performance limits for storage of methane. [234] The goal was to identify structures with the highest deliverable capacity of methane at different pressure swings: 65 to 5.8 bar, and 35 to 1 bar. In this study over 135 000 hypothetical structures from the Deem database [197] were tested, as well as 187 structures from the IZA database. [195] Pores that could adsorb methane but were not accessible to the gas phase were blocked, giving a more realistic description of the maximum loading. [235] The two most promising candidate structures identified from this study were both hypothetical structures, highlighting the value of screening such databases. The two structures exhibited very high working capacities of 200 and 172 volumes of gas per volume of material, at the storage pressure of 35 bar and discharge pressure of 1 bar. Key properties were identified to guide the search for new promising materials: a high density of adsorption sites, a moderate heat of adsorption, and adsorption pockets positioned optimally for guest-guest adsorption. Further to this, Krishnapriyan et al. recently published a study using this data set to train a machine learning model for the prediction of adsorption properties using different descriptors. [236] Commonly used descriptors were used as a "baseline" for comparison: LCD, PLD, accessible surface area (ASA), accessible volume (AV), and crystal density; topological descriptors were then computed by use of persistent homology, allowing vector representations of channels and voids in the materials to be determined. The use of random forest regression to predict the methane adsorption isotherms allowed importance values to be calculated for each descriptor, distinguishing this work from others by demonstrating how the topological features complemented the baseline descriptors for an overall more accurate model of adsorption. The results of this study showed that at low pressures the topological descriptors calculated through persistent homology, specifically the 2D topology, performed significantly better than the baseline features, with the baseline features dominating at higher pressure, as shown in Figure 7. Machine learning approaches such as this can provide in-depth knowledge of the structure-property relationships that govern the capabilities of nanoporous materials to perform well industrially; use of descriptors identified in such studies can aid in the rapid screening of zeolites for many applications.
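The descriptor-importance analysis described above can be sketched as follows. All column names and values here are synthetic placeholders (they are not the data set of Krishnapriyan et al.); the point is only to show how baseline and topological descriptors can be combined in a random forest and ranked by feature importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Placeholder descriptor table: baseline geometric features plus a mock
# persistent-homology-derived feature; values are synthetic, for illustration only.
X = np.column_stack([
    rng.uniform(3, 15, n),      # LCD / Angstrom
    rng.uniform(2, 10, n),      # PLD / Angstrom
    rng.uniform(100, 2000, n),  # accessible surface area
    rng.uniform(0.5, 2.5, n),   # crystal density
    rng.uniform(0, 1, n),       # "2D topology" descriptor (mock persistence statistic)
])
# Synthetic target standing in for a simulated methane uptake
y = 0.5 * X[:, 0] + 2.0 * X[:, 4] + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
names = ["LCD", "PLD", "ASA", "density", "2D topology"]
for name, imp in sorted(zip(names, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:12s} {imp:.3f}")
```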
Separation
Although zeolites have been successfully screened for storage applications as highlighted above, MOFs generally perform better for these tasks owing to their relatively low crystal density in comparison to zeolites, affording guest molecules more free space. [237] This naturally restricts zeolites to the adsorption and separation of smaller molecules, and so they are instead extensively used in the shape-selective separation of isomers and mixtures of low molecular weight adsorbates; the ability to tune their pore and channel systems allows them to be optimized for a variety of such applications. High Throughput Separations of Binary Mixtures: High throughput screening of zeolites and MOFs was undertaken for the separation of a range of binary mixtures at different temperatures by First et al. [223] The methodology employed was an interesting intersection between structural and molecular dynamics approaches, allowing for the rapid screening of structures for a range of purposes: first, energy barriers were calculated for the movement of the molecule under consideration through each pore, allowing a pathway to be constructed through the periodic lattice. The energy of a pathway was then taken to be the maximum energy of any pore along this pathway, the assumption being that transport is limited solely by this bottleneck. Boltzmann factors were then calculated for the movement of a molecule through the pathways of a framework, with the difference between two different molecules' Boltzmann factors representing a framework's separation ability. Using this methodology, 196 siliceous zeolite structures from the IZA database and 1690 MOF structures were screened for their separation ability for eight different processes of interest, allowing for the identification of many high performing topologies that had not before been considered. Trends in separation ability with respect to temperature were also investigated, revealing important failings in the ability to discriminate between similarly shaped molecules as temperature was increased.
Figure 7. Image depicting the relative importance of topological descriptors (1D and 2D topology) over the baseline features described above, illustrating the need for the development of tools to extract more sophisticated descriptors. Reproduced with permission. [236] Copyright 2020, American Chemical Society.
The benefits of large-scale computational screening are illustrated succinctly in the work of Kim et al. on the separation of ethane/ethene mixtures. [238] Here a typical HT approach was employed to screen both the IZA and Deem databases, using textural characterization as an initial screen followed by random selection from the filtered set in order to reduce computational cost while still exploring a representative portion of the data set. A performance metric was determined to be the product of selectivity and working capacity, in order to find a structure able to refine the mixture to a given purity in fewer cycles. The best performing structure based on this metric was further examined, and the structural features responsible for this performance were determined; by identifying other topologies with a similar framework geometry to the leading candidate structure (see Section 3.2.1), 90% of the optimum separation candidates could be determined without the need for simulation.
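A minimal sketch of this last screening pattern is given below: frameworks are ranked by the product of selectivity and working capacity, and the remaining structures are then compared with the leading candidate through a cheap pore-geometry fingerprint similarity. The framework names, numbers, fingerprints, and the cosine-similarity choice are illustrative assumptions, not data or code from Kim et al.

```python
import numpy as np

# Placeholder screening table: framework -> (selectivity, working capacity, fingerprint)
frameworks = {
    "ZEO-A": (8.0, 1.2, np.array([0.9, 0.1, 0.4])),
    "ZEO-B": (3.5, 2.0, np.array([0.2, 0.8, 0.5])),
    "ZEO-C": (6.0, 1.5, np.array([0.8, 0.2, 0.5])),
}

# Performance metric: product of selectivity and working capacity
ranked = sorted(frameworks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
best_name, (s, w, best_fp) = ranked[0]
print("best candidate:", best_name, "metric =", s * w)

def similarity(fp_a, fp_b):
    """Cosine similarity between two pore-geometry fingerprints (illustrative)."""
    return float(fp_a @ fp_b / (np.linalg.norm(fp_a) * np.linalg.norm(fp_b)))

# Cheap follow-up screen: rank the remaining frameworks by similarity to the best one
for name, (_, _, fp) in ranked[1:]:
    print(name, "similarity to best:", round(similarity(best_fp, fp), 3))
```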
Studies such as this illustrate the power of high throughput computation in elucidating structure-property relationships, which in turn allows for the identification of metrics for the rapid screening of large databases. Further to this work, CO 2 membrane separations were studied [222] using a similar workflow, although with different selection criteria: for a given separation a minimum selectivity was identified, and from the pool of structures which surpassed this boundary, those with the highest permeability were deemed to be the best performing candidates. Permeability was defined here as the product of the solubility and the diffusion coefficients of the gas molecules. Diffusion coefficients were calculated rapidly, without the need for expensive MD calculations, by using transition state theory; peaks and troughs in the free-energy profiles of channel systems allowed diffusion coefficients for individual channels to be calculated, which in turn allowed diffusion coefficients in a given direction, as well as self-diffusion coefficients, to be obtained. The use of such techniques allows for the exhaustive search of a manifold of materials, unveiling key descriptors for the process under consideration; here it was found that a relatively low heat of adsorption and an intermediate Henry constant were optimal for CO 2 /CH 4 separation. Application-Specific Separations: Complementary to the carbon capture work referenced earlier, [229] a high throughput study into the separation of CO 2 from flue gas was undertaken by Lin et al. [239] Flue gas is the gas produced by power plants, and is approximated as a mixture of N 2 and CO 2 ; separating and sequestering the CO 2 from this is crucial to limiting the ever-increasing amounts of greenhouse gases in the atmosphere. Siliceous and aluminosilicate zeolites, as well as ZIFs, were examined for their separation capabilities; siliceous structures were taken from the IZA and Deem databases. Aluminosilicate forms of these structures were generated by obeying Löwenstein's rule, yielding one structure with an Si:Al ratio of 1, and aluminum distributions between this limit and the siliceous analog were generated by random replacement of silicon with aluminum to yield ten structures; calculated properties for these ten structures were averaged to give a representative value for a particular Si:Al ratio. ZIF structures were generated by taking a siliceous zeolite structure and scaling the unit cell by 1.95, then replacing Si with Zn, and O with the imidazole group. A remarkably complete picture of the flue gas separation process was presented, with a novel metric, the parasitic energy, introduced to discriminate between candidate structures. The parasitic energy was calculated as the energy required to heat the material, supply the heat of desorption, and pressurize the CO 2 to 150 bar. The process investigated was a temperature-pressure swing whereby the temperature could be increased, the pressure decreased, or both, in order to allow desorption of the CO 2 . By calculating the parasitic energy, key trends were extrapolated from the data, showing a nonlinear relationship between Henry constants and parasitic energy. This work is crucial in illustrating the importance of modeling an entire process as opposed to a single property, as other metrics may not take the energy required for desorption into account.
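In its simplest form, such a metric is just the sum of the three contributions named above, normalized per unit of CO 2 captured. The toy estimate below illustrates only that idea; the numbers, units, and the flat compression-work term are assumptions and do not reproduce the actual process model of Lin et al.

```python
def parasitic_energy(cp_sorbent, delta_T, working_capacity, q_desorption, w_compression):
    """Toy parasitic-energy estimate per kg of CO2 captured (illustrative, kJ per kg CO2).

    cp_sorbent       : sorbent heat capacity, kJ kg^-1 K^-1
    delta_T          : temperature swing, K
    working_capacity : kg CO2 released per kg sorbent over the swing
    q_desorption     : heat of desorption, kJ per kg CO2
    w_compression    : assumed work to pressurize the CO2 to 150 bar, kJ per kg CO2
    """
    sensible = cp_sorbent * delta_T / working_capacity   # heating the solid, per kg CO2
    return sensible + q_desorption + w_compression

# Two hypothetical adsorbents: a higher working capacity lowers the sensible-heat penalty
print(parasitic_energy(1.0, 60, 0.10, 900, 350))  # framework A
print(parasitic_energy(1.0, 60, 0.05, 700, 350))  # framework B
```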
Another key observation from this work is the abundance of hypothetical structures in the basin of optimal structures, many of which were identified to have low densities, suggesting that these may be key targets for synthetic work. The separations of methane from coal mine ventilation air and from low-quality natural gas mixture feeds were investigated by Kim et al. in order to identify new materials for these processes. [240] A significant challenge in the capture of methane through separation processes is finding a material with a preference for adsorption of the non-polar methane molecule over other constituent gas molecules in the mixture feed, for example, CO 2 . Aluminosilicates have been seen not to lend themselves well to these challenges, as the presence of cations in the structures creates strong binding sites for CO 2 . Thus, 87 000 hypothetical [197] and 190 IZA siliceous zeolite structures [195] were explored for their use in these processes using the force field of García-Pérez et al., [241] with the D2FF force field of Sholl et al. [233] used for further validation of the leading candidate isotherms. For the purpose of capturing methane from low-quality natural gas feeds, the zeolite SBN was identified as the most promising candidate because its topology allows optimal CH 4 -CH 4 separations of 4 to 4.6 Å between the molecules at high pressures, facilitated by the shape and size of the pores. The adsorption isotherm shows that the separation ability of SBN for this process is pressure sensitive, as at lower pressures the higher binding energy of the more polarizable CO 2 molecule contributes more significantly to uptake. Similarly, for the dilute concentration source examined, coal mine ventilation air, a strong correlation between Henry's constants and uptake values was seen due to the process occurring at 1 bar. It was found that the leading materials possessed a narrow channel system rather than cage-like channels, as identified through calculations of the largest included sphere along the free path. These narrow channels allowed an increased number of oxygen atoms in close vicinity to the centres of the binding sites. Two known zeolites, ZON and FER, were proposed as candidates for this process due to their large Henry constants and high CH 4 /CO 2 selectivity, although the greater performance of some hypothetical structures again illustrates the need for the screening of these predicted materials. A study by Bai et al. [221] into the purification of ethanol from fermentation broths found FER to also be promising for this separation, outperforming MFI, which has seen extensive exploration in the literature. Use of a high throughput multi-tiered workflow allowed for efficient screening of leading structures to undergo further calculations; for the purification of ethanol from an ethanol/water solution, calculations at the lowest solution-phase concentration were used as the initial screen, with the performance metric being the product of selectivity and loading. Analysis of leading structures showed that the increased performance of FER for this separation was due to its ability to disfavor the formation of hydrogen bonds from ethanol to water, preventing their co-adsorption; similarly, the ATN structure is able to facilitate high selectivity at increased concentrations due to its narrow windows separating well-spaced ethanol adsorption sites.
Although only IZA structures were screened for this process, application of high throughput analysis techniques to a wider data set, such as use of the similarity measures described earlier, could help identify promising predicted structures; moreover, techniques such as persistent homology could aid in the elucidation of structural features responsible for these properties. In the same study, the group devised a model for finding structures useful in the refining of oil: high affinities for long linear alkanes correlate with a high concentration of these molecules near active sites, while low affinities for branched chain alkanes promote their desorption to prevent further cracking. A further performance measure was low selectivity between linear alkanes of different lengths, to allow conversion of a broad range of different chain lengths. A similar tiered approach was taken, with simulations at the infinite-dilution limit and 573 K on all IZA structures, as well as the 330 000 thermodynamically accessible structures from the Deem database; the top candidates then underwent longer simulations under these same conditions, as well as at a fixed pressure of 3 MPa. Data from the set of IZA database structures readily identified current leading structures as the most promising, with a selection of hypothetical structures outperforming these by several orders of magnitude. Many structures with favorable adsorption characteristics were found to be in the range of 4.5 Å < LCD < 7.5 Å, with high selectivity for linear versus branched alkanes being best correlated to "pore bumpiness." Although some structural features were seen to have an impact on selectivity, the lack of clear correlation between simple geometric features and selectivity shows the need for the development and implementation of more high throughput analysis tools which are able to help identify structure-property relationships.
Machine Learning
We now briefly discuss the emerging role of machine learning (ML), distinguishing between applications focused on using simulation-derived data and those where experimental data is used.
Computational Data
The identification of promising hypothetical nanoporous materials (e.g., see Section 3.3.1, Simon et al. [234]) prompted a focus on efficient methods of evaluating the viability of synthesizing these structures and their characterization. The parallel evolution of ML and HT data acquisition techniques has facilitated the marriage of these methods to enable the screening of such databases for the massively combinatorial problems arising in nanoporous materials. Use of quantum mechanical data for the training of highly accurate predictive algorithms is a powerful tool offered by ML, allowing these models to reproduce DFT data to a high degree of accuracy on structures not in the training set. [213] Previously, DFT-derived data sets have been used for the derivation of force fields; [231][232][233] however, the application of ML to these challenges offers huge benefits, due to the relative ease of training the models, as well as the ability of machine learning models to identify nonlinear trends.
Recent highlights of the application of machine learning techniques trained on computational data include the design of organic SDAs for the synthesis of zeolite beta by Daeyaert et al., [242] the determination of the most thermodynamically favorable aluminum distribution in a range of zeolite topologies by Evans et al., [213] and the reliable calculation of anisotropic properties toward the discovery of auxetic zeolite frameworks by Gaillac et al. [243] Modeling of these features demonstrates the ability of machine learning to accelerate data-driven predictions that can help inform experiment, cementing their importance in the virtuous circle.
Experimental Data
The abundance of experimental data available for zeolites, most notably for synthesis, has given rise to a surge of interest in applying ML techniques to these data sets; recent developments in natural language processing have allowed for extraction of this experimental data from the literature, leading to insightful predictions that can help inform synthesis. The power of such techniques is highlighted in the work of Jensen et al. [244] who developed an automated data extraction pipeline which gathers data on synthesis and topology from both tables and text. This data set was then used to make predictions on the density of the zeolite that would be formed under different synthesis conditions. An account of the group's work was given by Moliner et al., [245] who demonstrate the capabilities and limitations of applying machine learning to such tasks; in their work the group illustrate the virtuous circle of continuous feedback between computation and experiment (see Figure 8), and importantly emphasize the need for standardization of data in computer-readable forms. In this work, the group applied machine learning techniques to address the "missing link" to a complete workflow for zeolite synthesis: tools that can extract and process large amounts of data from the literature, predict the formation of stable hypothetical frameworks, and make the corresponding prediction of OSDAs that can facilitate their synthesis. A similar approach to that of Jensen et al. [244] was taken by Muraoka et al. [246] who collated experimental data in order to train an ML algorithm with structural and synthetic descriptors. Evaluating the importance of these descriptors allowed for the construction of a similarity network, which provided new insights into structural similarity between zeolite topologies. The synthesis of a family of zeolites which are structurally related (EEI-EUO-NES) was attempted using the SDA employed in the formation of IHW, a topology revealed by the similarity network to be structurally similar, but previously not considered part of this family. Successful synthesis of EUO using this SDA under the synthesis conditions for EEI confirmed the similarity between the structures that was identified in this study. Incorporation of hypothetical structures into this workflow could allow for key insights into their synthesis, facilitating the discovery of new materials. Although a powerful tool, data mining of published work relies immensely on the reproducibility of the data in question. Recent work in the domain of life sciences research has shown the prevalence of irreproducible preclinical data, [247] calling into question the legitimacy of extracting data from the published literature, although we note that ML could potentially be used to identify anomalous studies.
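At its core, this kind of literature mining turns free-text synthesis descriptions into structured records. The toy example below uses only regular expressions to make the idea concrete; real pipelines such as that of Jensen et al. rely on trained NLP models and table parsing, so this sketch, including the example sentence and field names, is purely illustrative.

```python
import re

# Minimal, rule-based sketch of extracting synthesis conditions from a sentence.
sentence = ("The gel was heated at 160 °C for 72 h with "
            "tetrapropylammonium hydroxide as the OSDA.")

temperature = re.search(r"(\d+)\s*°?C", sentence)
duration = re.search(r"(\d+)\s*h\b", sentence)
osda = re.search(r"with\s+(.+?)\s+as the OSDA", sentence)

record = {
    "temperature_C": int(temperature.group(1)) if temperature else None,
    "time_h": int(duration.group(1)) if duration else None,
    "osda": osda.group(1) if osda else None,
}
print(record)  # {'temperature_C': 160, 'time_h': 72, 'osda': 'tetrapropylammonium hydroxide'}
```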
Critical Remarks
Many high throughput computational studies have been performed for the assessment of zeolites as media for separation and adsorption, and as catalysts. By their nature these studies cannot completely capture all physical factors that govern the performance of the material, and so assumptions are made to allow for their efficient analysis while minimizing the loss of accuracy. Common assumptions made are the neglect of framework flexibility, the random ordering of aluminum within aluminosilicate structures, and the derivation of transport properties through calculation of energetic bottlenecks. Through the development of more efficient algorithms and increased availability of computational resources, more realistic calculations will be possible, leading to large repositories of high-quality data. Continued development and use of the analytical tools available will lead to the identification of powerful descriptors, allowing for in-depth understanding of structure-property relationships; this is exemplified in the work of Krishnapriyan et al. [236] where topological descriptors were computed by use of persistent homology, and their importance in governing the adsorption process investigated was assessed through use of machine learning.
Figure 8. High throughput workflow for the computer-guided synthesis of nanoporous materials. Reproduced with permission. [245] Copyright 2017, American Chemical Society.
The combination of such analytical tools with machine learning will be key to unveiling the nonlinear relationships between structure and property. As the tools and architecture for high throughput computation develop, so must our abilities to effectively model real applications; in order to capture the full chemistry of a given process, proposed models must explicitly take into account all operating conditions. Exemplars of this include the studies described in Section 3.3.2, particularly the work of Lin et al. [239] in their determination of a novel metric, the parasitic energy, for describing the energy penalty imposed on a power plant by conducting carbon capture and sequestration. Cooperation between groups working on different classes of microporous materials is key to the continued development of this field; currently this is facilitated by initiatives such as the MGI, NOMAD, BIGmax, and MARVEL as mentioned in Section 1. Many tools and methodologies employed for the modeling and analysis of zeolites are directly applicable to MOFs and vice versa, hence cross-disciplinary collaboration will prove auspicious for the identification of new industry-leading materials.
High Throughput Experimental Work on MOFs
MOFs are another important category of porous materials. Unlike zeolites, the general structure of a MOF can be broken down into distinct inorganic and organic building blocks (respectively termed the secondary building unit, or SBU, and the linker). There exist myriad combinations of such components, imbuing the set of possible frameworks with incredible compositional and topological diversity. Some example systems are shown in Figure 9, and the reader is referred to reviews by Furukawa et al. and Yuan et al. for a broader introduction. [248,249] High porosity, well-defined metal centers, and almost endless possibilities for functionalization have attracted attention to these materials for commercial and industrial applications, ranging from gas storage and separation to catalysis. [250][251][252]
It is worth noting that the field of MOFs is not as mature as that of zeolites, with the former having grown over only the last ≈20 years. Therefore, many of the HT techniques which have been used in MOF discovery and screening have previously been developed for and applied to zeolites, as described earlier in Sections 2 and 3. However, due to differences in chemistry, synthetic protocols, and target applications between the two classes of materials, these existing procedures have sometimes been applied in a different manner and been supplemented by new models and approaches, as will be discussed here and in Section 5. In experimental work involving MOFs, high throughput methods can be deployed at two stages. Synthesis can be accelerated by carrying out the dispensing of reagents and subsequent heating in a parallel manner, yielding a large number of samples in minimal time and with reduced human effort. Structural and other characterization may also be parallelized to rapidly identify crystalline products or evaluate properties of interest across multiple materials. In studies focused on the discovery of new MOFs, streamlining synthesis is more fruitful, while for work aiming to evaluate material performance at multiple condition-framework combinations, it is important to deploy suitable parallel instrumentation. In both cases, HT methods may need to be applied at only one stage to relax the workflow's bottleneck and accelerate the development of MOFs. This section introduces high throughput techniques used in MOF synthesis along with key studies which have made use of them. The number of HT articles published to date is not large, but there have been enough successful experiments to affirm the utility of such methods in discovering new frameworks and tuning their properties. The following sections examine the synthetic and characterization stages, while also touching on the development of new parallel tools, the importance of feedback loops, and issues surrounding reliability and consistency.
Preparation and Synthesis
A key concern when synthesizing MOFs is sensitivity to reaction parameters, as even minor changes in conditions can lead to amorphous products or different phases; like zeolites, they are kinetic products. This is well exemplified by the cobalt succinate system, with a phase diagram shown in Figure 10. [253] Taking into account just two synthesis variables, the system exhibits 15 different product mixtures, with some regions of the diagram existing over relatively narrow bands. The complexity of phase space here makes it evident that control and consistency are paramount in achieving reproducible results, and any attempt to increase throughput should not neglect these key factors. This can be done by using automatic dispensers and multi-compartment reaction vessels, both of which were first developed for research on small pharmaceuticals over 40 years ago, though they have since been applied to zeolite synthesis (see Section 2.1). [3] In this manner, consistency can be maintained whilst minimizing manual involvement, in terms of both time and human effort. A simple and reliable way to streamline the dispensing of reagents is to use a programmable liquid handler. This approach is particularly powerful if the metal salts or precursors can reliably be prepared as stock solutions or are readily available in this form. This allows for the rapid testing of combinations of common linkers, solvents, and additives such as acidic modulators. Sonnauer et al.
made use of a liquid dispenser to synthesize a chromium MOF with the MIL-101 structure in the presence of a dozen different solvents and liquid additives, with the presence of acetic acid proving important to product crystallinity. [254] More recently, Kelty et al. investigated the influence of reaction parameters on the synthesis of porphyrinic zirconium MOFs. [255] 1027 combinations were tested, encompassing seven different acidic modulators and solvents, ultimately yielding nanocrystalline forms of three existing frameworks. In the case that most reactants are readily available in liquid forms from suppliers, as was the case in both of these studies, programmable liquid dispensers allow phase space to be explored with limited human input beyond experiment design and characterization. The dosing of solids can also be done in an automated manner with solid dispensers. This allows for the simultaneous use of more metal precursors, low-solubility reagents, and linkers directly in powder form in a high throughput manner. For some synthetic routes, such as hydrothermal synthesis, the use of solids is necessary. Stock and Bein have developed an HT methodology which includes solid dispensing based on an automated dosing station. [256] This workflow has been adopted in the high throughput study of metal phosphonates synthesized via a hydrothermal route. Bauer et al. used this approach to carry out a synthetic screen of cobalt phosphonates using four different salts, which resulted in the identification of four new frameworks. [257] Bauer and Stock later examined combinations of a new phosphonate linker with six different divalent cations, from which they isolated four novel metal phosphonates. [258] Some research groups have also made use of more advanced robotic workstations that can handle both liquid and solid reagents without manual involvement. Sumida et al. have made use of an automated robotic dispensing module in research on an iron MOF for gas storage, although their synthesis was limited to two powder reactants. [259] The group of Martí-Gastaldo has also made use of a robotic dispenser to optimize the syntheses of new titanium frameworks. Padial et al. synthesized a new Ti MOF, MUV-11, using conventional methods, then refined the synthesis parameters via the high throughput use of a robotic workstation. [260] Crucially, this allowed them to screen numerous Ti precursors, both allowing them to establish the use of previously neglected Ti sources and optimizing sample crystallinity. This optimization is particularly important for titanium frameworks, given the tendency for amorphous byproducts to form during the syntheses of these systems. [261] The same group used this methodology to make a titanium analog of MIL-101 in a study by Castells-Gils et al. [262] Using a robot allowed the authors to first find a viable synthesis route for this MOF before screening multiple precursor-solvent combinations to increase sample crystallinity. In light of this experimental work, automated solid dispensing seems particularly well suited to studies where the precursor is varied to tune the synthetic route. When carrying out HT synthesis, it is necessary to choose appropriate vessels so as to allow for carrying out multiple reactions in parallel. The most common format is the multi-well plate, which typically contains 24, 48, or 96 individual compartments, and is suitable for hydrothermal or solvothermal synthesis if made of Teflon-lined steel. 
The use of multi-well plates for porous materials is well established, with their application in zeolite synthesis described in Section 2.1. These were first employed in the context of MOFs by the groups of Bein and Stock to study the compositional phase diagrams of metal phosphonates through parallel hydrothermal synthesis. [256,263,264] These early studies often used a single plate and limited the number of microreactions to 48 or 96. However, later work has often explored reaction parameters more broadly, using either larger or simply more plates, with the work of Banerjee et al. on ZIFs remaining the largest synthetic screen to date in this field. [22] In their study, 9600 different combinations of linker, solvent, metal source, temperature, stoichiometric ratios, and reaction times were evaluated using a hundred 96-well plates. Here the combination of automated liquid dispensers, only two metal precursors, and highly parallel solvothermal synthesis allowed for a fully systematic exploration of compositional phase space and the discovery of 16 new ZIFs. Once all reagents have been prepared and combined appropriately, heating and aging are required to complete the MOF synthesis process. Conventional heating can be used, but this can require long aging times and subjects all samples in the multi-well plate or reaction vessel to the same conditions. Temperature gradients can be effected across a single multi-well plate using the methodology developed by Bauer and Stock, which allows for a maximum differential of 40 K. [265] The group of Stock has also shown that microwave-assisted heating can be used in a high throughput manner. [266][267][268] The benefits of this method were made clear in Maniam and Stock's study on Ni-paddlewheel MOFs, where samples synthesized with microwave heating required less aging and showed higher purity than those resulting from conventional heating, both of which are ideal in a high throughput workflow. A further technique which can be used to decrease synthesis time and improve crystallinity in a parallel manner is ultrasonication. Schilling and Stock employed this approach to isolate four new metal phosphonocarboxylates, again noting the drastically reduced aging needed to yield crystalline products. [269] Note that for all these methods, a magnetic multi-stirrer can be used to ensure that the reaction mixtures remain homogenized. A downside of these alternative heating approaches is that they may not be compatible with standard multi-well plates; microwave vials and cell culture plates were used in the studies described above. Following heating and aging, it is necessary to isolate the products from solution (where microreactions have been successful). The two ways this is commonly done are filtering and centrifuging. [53] Individually doing this for hundreds of samples can be very time consuming, so tools have been developed to parallelize this process. Bauer et al. have used a custom-built 48-tube apparatus that can both dry and wash products in a single step to examine cadmium phosphonates. [270] Plabst et al. have also described using a custom instrument to filter and wash products in an investigation of lanthanide phosphonates. [271] Some robotic platforms can also isolate solid products from multi-well plates in an automated manner; Sumida et al. did this to recrystallize samples for powder XRD (PXRD) in their study of an iron MOF. [259]
Characterization and Properties
High throughput synthetic screens result in large numbers of specimens which require further examination. Sometimes, a number of experiments can be discounted on the grounds of obviously non-crystalline products or failed reactions, and the remainder can be characterized using conventional methods and instruments. In such cases, time-consuming techniques such as single-crystal X-ray diffraction (SCXRD) or electron microscopy (for which no straightforward parallelization schemes are currently known) can be used without significantly throttling the study. However, if this is not the case, then more streamlined approaches with higher throughput must be relied on, at least for identifying samples for further analysis, with routines such as PXRD being more suitable. The last 10 years have also seen the development of new tools and the acceleration of previously slow methodologies, particularly with regard to adsorption properties. PXRD is currently the most widely applied characterization technique in the context of high throughput experiments on MOFs. [53] It is mainly used to determine the crystallinity of specimens and the presence of possible by-products, making it suitable for rapidly pruning the results of an initial screen. The method is applied in a fast serial manner with an automated instrument containing a movable xy stage capable of holding multiple samples. Issues can arise when mixed or unknown products are present, as this can lead to patterns that overlap or which cannot be automatically assigned, complicating phase identification. This has also been a concern in zeolite HT synthesis (see Section 2.2), [84] and remains an active area of research for MOFs. [272] An alternative approach is to use an optical microscope to visually determine when reactions have been successful, as was done by Banerjee et al. for their 9600-sample study of ZIFs. [22] A representative set of microscopy images taken this way is shown in Figure 11. The different panels show that crystalline products can be found using this method with little ambiguity, given that well-separated millimetre-scale crystals can be identified in each. Sumida et al. also used this method in their study on iron MOFs, where in this case optical capabilities were integrated in the high throughput robotic workstation. [259] There have been attempts to supplement PXRD and optical microscopy with new methods for rapidly identifying crystalline porous materials. The Kaskel group has developed a tool which can rapidly measure the heat of adsorption of butane in samples via optical methods to ascertain porosity. [103] They further showed that, by comparison with results from well-calibrated reference materials, the sensor readings could be used to quickly determine gravimetric butane capacities and BET surface areas. In later work, they extended this to also work with other gases such as cyclohexane, CO 2 , and H 2 O. [104] This approach requires materials which have already been activated, a process which is typically time consuming and leads to poor properties if done improperly. [273] The Long group attempted to bypass this problem by formulating a new heating protocol for use with existing thermogravimetric analysis (TGA) machines. [274] This approach, compatible with small samples from HT synthetic screens, can rapidly identify porous samples and identify optimal activation conditions using multiple short heating-gas-adsorption cycles.
However, note that some frameworks require more sophisticated approaches, such as solvent exchange or supercritical CO 2 drying, to reach optimal porosity. [275] Nuclear magnetic resonance (NMR) has also been proposed by Chen et al. for the purpose of swiftly identifying porous products, as the sample isolation and activation steps are replaced by solvent exchange. [276] The authors have demonstrated a strong correlation between BET surface area and the transverse relaxation of spins in solvent atoms for a series of established frameworks and zeolites, noting also that by using autosampling hardware NMR can be deployed in an HT fashion. Measurements routinely collected for MOFs include those related to gas uptake and selective adsorption. However, standard protocols are too time consuming to be used in a high throughput fashion. To remedy this, Han et al. developed a piece of apparatus capable of recording the gas uptake of 36 samples concurrently. [277,278] This allowed them to evaluate 16 frameworks over two studies for their potential in carbon capture, from which they found that structures with nicotinate linkers showed a combination of good CO 2 selectivity and uptake along with stability toward moisture and acidic gases. Wiersum et al. also developed a new instrument capable of measuring adsorption properties in parallel. [279] Their approach is better suited to generating full isotherms, which they did for a few simple gases in functionalized MIL-100 and CAU-10 to examine the impacts of metal centre and linker moiety on selectivity and uptake. A different approach was used by Mason et al. to rank frameworks for carbon capture, as they instead developed an instrument capable of estimating multi-component adsorption for multiple samples in parallel at equilibrium. [166] This is faster than conventional breakthrough experiments and also avoids some of the shortcomings of the latter technique associated with non-equilibrium conditions, so the authors argue that it is more chemically sound. With this machine, they tested 15 porous materials under flue gas conditions, finding that alkylamine moieties provide good selectivity and uptake for CO 2 while maintaining stability toward moisture. Important features to consider for MOFs in the scope of industrial or commercial applications are thermal and water stability. While a good fraction of the reported MOFs are reasonably thermally stable, the number of water-stable systems is remarkably small. Stability can be evaluated through gas adsorption experiments or by boiling samples, as reviewed by Burtch et al. [280] However, for high throughput screens which yield many promising hits, measuring thermal and water stability is costly and time consuming. Hydrothermal stability was estimated in the studies by Han, Wiersum, and Mason through gas uptake measurements both prior to and after exposure to moisture, [166,277,279] so methodologies exist for evaluating this property in a high throughput manner. However, note that these experiments were limited to flue gas conditions, where the temperature is relatively low (20-40 °C) and the water content is fixed. A more systematic approach was developed by Low et al. for mapping hydrothermal stability. [281] In a combined experimental and computational study, the authors built a custom 48-chamber steaming instrument to examine ten different frameworks at temperatures up to 300 °C and under gas mixtures with up to 50% steam.
This allowed them to build a comprehensive "steam stability map" and infer that the strength of the metal-linker bond is an important variable for hydrothermal stability. Most of the subsequent high throughput water and thermal stability work on MOFs has been computational in nature, but more recent experimental work by Fischer et al. on zeolites (see Section 2.2) has relied on another multi-compartment piece of apparatus with similar capabilities to that of Low et al. [107,108] A wider mapping of hydrothermal stability in MOFs would be invaluable to their development and potentially be a crucial element of the screening process.
Figure 11. Optical microscopy images of successful ZIF microreactions; crystal sizes are of the order of 0.1-1 mm. Reproduced with permission. [22] Copyright 2008, AAAS.
Much research has been devoted to MOFs as drug delivery vehicles and biocompatible platforms. [282][283][284] In order for such applications to be realized, it must be shown that promising frameworks are non-toxic to human cells. To this end, there have been numerous studies which have made use of dye-based assays combined with high throughput UV-vis spectroscopy to evaluate MOF cytotoxicity. The scheme used in such experiments is similar to that used for zeolites (see Section 2.3): some substance is exposed to a sample of MOF, time is allowed for potential catalytic or degradation processes to occur, and the subsequent response to UV-vis irradiation is recorded. An early study by Horcajada et al. examined 6 iron MIL-type frameworks for drug delivery and medical imaging. [183] In order to evaluate toxicity to cell lines in vitro, the authors ran assays in which mouse macrophages were exposed to varying concentrations of MOFs. They subsequently used the UV-vis active MTT dye to measure the impact on the mouse cells, [285] finding from this that their target materials were indeed safe for biological uses. A later study by Ruyra et al., using the XTT dye, [286] focused on 16 frameworks representing a range of metals. [287] From this, they found that cytotoxicity was often associated with bare metal ions from framework degradation. One further use for UV-vis spectroscopy in high throughput experiments on MOFs has been for evaluating catalysis. The method has been deployed by the Cohen group to measure the catalytic breakdown of nerve gas simulants by porous materials. Palomba et al. developed a statistically robust assay system for evaluating the degradation of the simulant dimethyl 4-nitrophenyl phosphate (DMNP) via a parallel UV-vis spectrometer. [288] In their screen of 96 porous materials, they found that Zr-MOFs such as UiO-66 outperformed the rest. This led them to repeat their experiments using 26 different multivariate UiO-66 samples, from which they found that mixed ligands improved the catalytic breakdown of nerve gas simulants. [289] Further work on a different simulant, involving a screen of 117 frameworks and zeolites, yielded concordant results regarding the high activity of UiO-66 and its multivariate forms. [290] The authors have also noted that this screening method for catalytic activity could be applied for other reactions. Given the potential of MOFs as catalysts in various reaction archetypes, this approach may see more use going forward for high throughput catalysis measurements. [291]
Exploiting Feedback Loops
In an HT screen, the marginal costs of testing additional reaction conditions are low.
This can encourage the use of HT techniques as a brute-force method for exploring phase space, and as noted by Plabst and Bein, this is especially powerful when a synthetic strategy or chemical knowledge of the system is unavailable. [292] However, it can remain fruitful to carry out syntheses in batches, deriving insight into what reaction parameters most influence the system, before adjusting and proceeding with the next batch in a feedback process. In a study on metal phosphonates, Maniam et al. used feedback-informed multi-stage screening to discover new copper frameworks and refine their formation fields. [293] Synthesis was carried out in three batches, shown in Figure 12, initially encompassing a large range of conditions. Following batch A, progressively more focused batches B and C were explored, eventually leading to very tight stability regions for three of the compounds discovered. In this case, feedback from earlier screens has allowed for a focused study of target MOFs without the time-consuming and wasteful testing of phase space which yields no products. Recent work by the Smit group has taken this approach a step further using a computational model that captures chemical inferences from successful and failed reactions. In a proof-of-concept study, Moosavi et al. showed that by using an appropriate measure of reaction success, a genetic algorithm (GA) could be trained to refine synthesis in the same way that a chemist would using chemical knowledge from the results of experiments. [294] The authors tested this technique on HKUST-1 using BET surface area as a measure of success, and parametrized the reaction conditions in terms of nine variables. By using the MaxMin method, [295] the 1000 most diverse combinations of synthesis parameters were generated, the top 30 of which were used in an HT synthetic screen. This is outlined in Figure 13. Following the first round of synthesis, the sample surface areas were measured, and the results were fed into the GA to generate 30 new sets of parameters (G-2). This process was repeated once more to reach a third generation (G-3), after which the authors elected to stop the procedure as a resulting sample was found to have the highest BET surface area reported to date for this material. It is clear from Figure 13 that, as the GA proceeds, synthesis conditions are converging and reaching some optimum point. Follow-up analysis of the data using machine learning allowed weights, reflecting impact on surface area, to be assigned to each synthesis variable. The screening was repeated from scratch for the zinc analog of HKUST-1, but using the parameter weights determined from Cu-HKUST-1, and this yielded crystalline samples within a single generation. This powerful approach illustrates not only the role of feedback from prior screens to inform subsequent screens, but also the value of failed or partially successful reactions. There have been other attempts in materials chemistry to guide synthesis using automated equipment and algorithms capable of learning, such as the Cronin group's "chemputer" approach or the work of Raccuglia et al. [296,297] More recently, Chen et al. used post hoc machine learning to analyze synthesis and characterization data from experiments on UiO-67, from which they were able to infer the relative importance of reaction parameters for crystal growth. 
[298] Nevertheless, Moosavi et al.'s work remains exemplary, having elegantly combined a model which learns on-the-fly with high throughput techniques for exploring and optimizing MOF synthesis. Given that this approach can determine synthetic optima without prior chemical knowledge and with minimal human involvement, it may be a powerful potential partner to the computational methods discussed in Section 5; screening identifies target frameworks and the synthetic GA realizes them in an almost wholly automated and integrated process.
Reliability and Consistency
The use of automated dispensers, robots, and high throughput instruments can increase the consistency of experiments, the former two having been found to be more reliable than manual methods. [299] Nevertheless, it remains important to evaluate the consistency of methodologies and collect repeats when possible, as this can partially fulfil the role of validation and allow for the estimation of errors. Given MOF sensitivity to reaction conditions, work centred on synthesis could benefit greatly from repeat runs. Despite the lower relative cost of doing this in high throughput studies, this is seldom done. Work by Biemmi et al. on reaction parameters for MOF-5 and HKUST-1 remains the main example of where this has been done. [300] The authors repeated some of their syntheses for both frameworks, also using different positions in their multi-well plates to rule out spatial differences across their reaction vessels. This works as both an assessment of the HT method's reliability and as verification that the same conditions lead to the same products. The latter point is particularly important, as it has been found that downscaling large-scale reactions to the micro-scale sometimes leads to amorphous results or no reaction, the reasons for which are not well understood. [256,301] It is more common to repeat runs when measuring sample properties, namely those related to gas adsorption or catalysis. In their study on the sorption of CO 2 from flue gas, the Sholl group made use of their new parallel instrument to record uptakes in triplicate. This has also allowed them to estimate the errors in their measurements, explaining most of the variation seen in the results for adsorption prior to and after exposure to moisture and acid gases. Given that experimental gas adsorption isotherms for MOFs often lack reproducibility, this represents a best practice that both simplifies data analysis and increases trust in the data. [302] Moreover, such concerns present a strong case for establishing reference MOF samples, in order that adsorption measurements can be verified, as has recently been achieved in the zeolite field. [303]
Figure 12. Copper phosphonate discovery and focused synthesis arrays. The process of synthesis focusing on a narrow part of phase space in going from array A to array C is made clear in this diagram. H 3 L refers to 4-phosphonobenzenesulfonic acid; molar ratio values have been normalized to 1, and products have been identified with PXRD. Reproduced with permission. [293] Copyright 2010, Wiley-VCH.
The work of Palomba et al. on nerve gas degradation using porous materials also excels in its thoroughness with repeat measurements. In this case the authors first assessed the viability of their assay of catalytic activity by means of a statistical method known as the Z-factor on a subset of their target MOFs.
[304] After having determined that their level of noise was acceptable, they proceeded to take every measurement in triplicate, and were thereafter able to make robust conclusions on key framework features for this catalytic process. HT methods make repeat measurements relatively low cost and, as exemplified by these studies, this duplication of results allows for a quantification of noise and increases reliability. Critical Remarks To date, there has only been a relatively limited number of high throughput experimental studies focused on MOFs. There are several reasons for this, all of which reflect the weakness of high-volume approaches in this part of MOF chemistry. A prohibitively large range of different metal precursors, solvents, and linkers are currently in use, not all of which can be explored in a single screen. Although liquid dispensers can be used when all reagents are available in solution form, this still requires preparing a different stock solution for each precursor-solvent combination. Hence, experiments tend to be targeted at optimizing a single framework (e.g., the work of Sonnauer et al. on MIL-101) or toward material discovery using limited metal sources (Banerjee et al.'s 9200-sample screen used only two precursors). Using an automated solid dispenser or robot may help alleviate this problem (when investigating the synthesis of MUV-11, the group of Martí-Gastaldo examined seven different precursors), but concerns remain regarding inconsistent performance amongst powder types, non-negligible errors at small masses, and rates of machine stall. [305] For both solid and liquid automated dispensers, high equipment cost may also remain a barrier to entry. Additionally, if bespoke starting materials (such as an exotic linker or precursor) are required, these need to be synthesized and functionalized prior to any experiment. Further concerns can include long reaction times and the lengthy procedures of washing and activation to remove solvent and impurities from the products, though this is highly dependent on the method of synthesis. [273,306] Some groups have developed ways of accelerating this step, but these practices have not yet become widespread as they often involve custom-made equipment. The same issues are also pervasive in property-based screening studies. In order to measure adsorption or catalytic properties across a large number of porous networks, they must first be synthesized. This again limits the sample set to either a limited number of frameworks, or variations of a handful of syntheses at once. Following this, instruments for high throughput characterization must also be available. Some techniques are currently not amenable to parallelization, such as SCXRD and electron diffraction, though ongoing efforts may eventually streamline such methods sufficiently for HT use. [191,194] In some cases authors must develop entirely new equipment in order to carry out their screening, which further requires initial benchmarking and validation (as discussed in Section 4.2). The synthesis stage will be limited by the number of precursors, the presence of solid reagents, and the reaction method, but will be fast if these parameters are carefully chosen. Subsequent washing and activation (particularly important if adsorption properties are of interest) are also likely bottlenecks. The results of an initial screen, if unsatisfactory, may also prompt secondary screens. 
Characterization is rapid if the methods chosen can make use of small samples (on the milligram scale); otherwise, scale-up and instrument set-up will require manual involvement. The commercial availability of porous materials, such as activated carbon or certain zeolites, can accelerate a study, though these are usually used as reference materials or for calibration purposes. Despite these weaknesses, it has been shown that high throughput methodologies are effective in MOF experimental work. Parallel synthesis is a powerful approach for studying a single framework at a time, particularly when the aim is to optimize conditions so as to tune a given material property. Automated dispensing techniques and robotics may also pair very well with computational screens, as the use of hypothetical databases can lead to the identification of promising networks which have not yet been synthesized. In such cases, as shown by the work in Section 4.3, even without a chemical starting point from which a viable synthesis could be derived, a model-backed approach could quickly yield crystalline samples. Even when materials discovery is the objective, the isoreticular chemistry of many MOFs means that the combination of a small number of precursors and numerous linkers can still lead to a substantial array of new frameworks. Machine learning and GA methods have been sparingly used in MOF synthesis, but the success of Moosavi et al.'s HT study, along with similar developments in other fields, may drive their more widespread adoption for these materials. Indeed, the most successful high throughput MOF studies of the future are likely to be those involving parallelization and streamlining at every stage, from computational screening to rapid synthesis and characterization, powered by feedback loops that enable more targeted secondary screens.
Computational Screening of Metal-Organic Frameworks
The combinatorial nature of MOFs, constructed by self-assembly of inorganic nodes and organic linkers, makes them exciting materials due to the tunability of their chemical composition and structure. However, it also poses a significant challenge to modeling approaches, as the search space of all possible frameworks is intractably large. The ever-increasing power of high performance computing and the creation of large repositories of experimentally determined and/or computer-generated structures has led to a paradigm shift toward in silico materials science over the last decade. HTCS has emerged as an invaluable asset to the scientific community, allowing fast and accurate property prediction for up to hundreds of thousands of structures, expediting the time frame between hypothesis and discovery. We note that due to the diversity of potential applications of MOFs, HTCS can occur at different stages of a workflow. We also note that we have to take a flexible definition of "high throughput" in the context of computer simulation. Screening based on geometric properties can take fractions of a second per MOF (or zeolite); classical force-field (FF) approaches involving geometry optimization or Monte Carlo/molecular dynamics typically take minutes to hours per MOF; and ab initio/DFT geometry optimization or Monte Carlo/molecular dynamics can take tens to hundreds of hours. Such computational costs imply a practical limitation to the number of calculations that can be performed on a tractable timescale.
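To make these orders of magnitude concrete, the following is a minimal sketch of a tiered screen in which only a fraction of structures survives each level; the per-structure timings, pass fractions, and the funnel_cost helper are illustrative assumptions rather than figures from any particular study.

```python
# Illustrative funnel: cheap geometric screening first, classical simulation on the
# survivors, ab initio only on a short list. All numbers are assumptions for illustration.

TIERS = [
    # (name, assumed cost per structure in CPU-hours, assumed fraction passed onward)
    ("geometric descriptors", 0.001, 0.20),
    ("force-field GCMC/MD",   2.0,   0.05),
    ("DFT refinement",        100.0, 1.00),
]

def funnel_cost(n_structures: int) -> float:
    """Return total CPU-hours for a tiered screen starting from n_structures."""
    total, remaining = 0.0, n_structures
    for name, cost, pass_fraction in TIERS:
        total += remaining * cost
        print(f"{name:>22s}: {remaining:>8d} structures, {remaining * cost:>12.1f} CPU-h")
        remaining = int(remaining * pass_fraction)
    return total

if __name__ == "__main__":
    # Screening 100 000 frameworks tier by tier vs. running DFT on all of them.
    tiered = funnel_cost(100_000)
    brute_force_dft = 100_000 * TIERS[-1][1]
    print(f"tiered total: {tiered:.1f} CPU-h vs. all-DFT: {brute_force_dft:.1f} CPU-h")
```

Even with optimistic assumptions, the most expensive tier dominates the budget unless the cheaper filters upstream are aggressive, which is the motivation for the hierarchical sift discussed next.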
Depending on the application, potentially all three methods may be used as part of a hierarchical sift, but each one has associated limitations. In this section we highlight some of the most critical aspects in the HTCS of MOFs: different parts of the screening process, including the construction and mining of structural databases, and the limitations of classical force fields (transferability and assignment of charges). These issues are then discussed in the context of adsorption and separation applications, the most heavily studied potential uses of MOFs. Databases HTCS of MOFs usually necessitates parsing thousands of crystal structures to identify frameworks for use in a particular application; this requires large repositories of MOF structures to be available. There are a plethora of available databases to choose from, comprising either experimentally determined or computer-generated frameworks. Each of these repositories come with their own set of limitations, such as: differences in the algorithms used for solvent removal and treatment of charge compensating ions in the experimentally reported structures, and the synthetic feasibility and restricted topological diversity of hypothetical frameworks. This poses the question of how to identify the relevant database for a certain application, which we now discuss. CSD-Derived Databases Newly synthesized frameworks typically have their structures deposited in the Cambridge structural database (CSD) (https://www.ccdc.cam.ac.uk), amongst hundreds of thousands of crystal structures, with no label identifying them as MOFs, leaving them as needles in a haystack. To remedy this, methods have been developed for identifying, extracting, and readying frameworks from the CSD for visualization or simulation. Search criteria are employed to identify MOFs and postextraction approaches focus on whether to remove bound or unbound solvent and algorithms to treat disorder. Watanabe and Sholl sought to identify structures for applications in the high throughput screening of CO 2 /N 2 separation. To this end, they extracted and screened ≈30 000 MOFs from the CSD, leading to 1163 frameworks being examined for this separation process. [307] Further to this, Goldsmith et al. published an automated approach for screening 20 000 frameworks from the Cambridge repository for their use in hydrogen storage applications. [308] In 2014, and more recently 2019, Chung and coworkers published a database of MOFs tailored to contain those for use in adsorption processes. [309,310] The construction process of these Computation Ready, Experimental Metal-Organic Framework (CoREMOF) databases consisted of extracting only porous networks with 3D channel structures and PLDs greater than 2.4 Å; the most current CoREMOF database includes ≈14 000 structures and more than 350 unique topologies. Additional computational data has been added to the database, such as the work by Nazarian et al. who published ab initio derived point charges for ≈2900 structures in the CoREMOF database. [311] Whilst development of these methods aids researchers in the identification of structures for study, they are external to the CSD and require manual updating for identification of new porous structures as they are deposited. Recent work by Moghadamm et al. sought to develop an integrated MOF subset of the CSD; this was done by establishing a set of seven "look for MOF" search criteria implemented in a custom CSD Python Application Programming Interface (API) workflow. 
[312] Analysis showed that disorder was present in a number of the identified structures, leading to the creation of the CSD nondisordered MOF subset. These two lists are integrated into the CSD's structure search software, ConQuest, [313] and currently contain over 90 000 entries; where the subsets are automatically updated with newly deposited structures. These studies all parse the Cambridge repository with search criteria based on atomic types and bonds present, and remove or manipulate structures that contain disorder intrinsic to the frameworks; a disparity between them is the treatment of charge compensating ions and the removal of solvent bound to the structure (seen in Table 2). The approximations made are valid on a case-by-case basis and choosing the appropriate database, or automated workflow, is dependant on the application under study. Examples include the CoREMOF [309] treatment of bound solvent, where all identified solvent molecules are removed even if this results in undercoordinated metal sites. This is a reasonable approximation if the process that the simulation seeks to model occurs at high temperature or under nonhumid conditions. Alternatively, processes at low temperature and/or under humid conditions are likely to see water vapor bind within the MOF pore, altering the adsorption properties of the framework and consequently discrepancies between simulation and experiment are likely to arise. Computer-Generated Databases Aside from the CSD-derived databases, there is a set of repositories containing computer-generated frameworks. In 2011, Wilmer et al. constructed 137 953 hypothetical MOFs (hMOFs) using crystal enumeration algorithms and a library of 102 building units (BU) extrapolated from existing crystallographic data; [314] this has been dubbed the hMOF database. The BUs consisted of 5 metal clusters, 42 linkers (terminated with either nitrogen atoms or carboxylic acid groups), and 13 functional groups. Each structure was generated with no more than four unique BUs and no consideration was taken for post-construction optimization. In order to validate the hypothetical frameworks, methane storage performance was computed and compared between a subset of the constructed MOFs, their energetically relaxed counterparts and their experimentally reported analogs; once validated the entire database was screened for high-pressure room-temperature methane adsorption. In a similar vein, Aghaji and co-workers generated 324 500 frameworks from a library of 90 BUs, [315] comprising 70 inorganic/organic building units and 20 functional groups. The authors excluded interpenetrating structures from their construction process and functionalized linkers only in symmetric hydrogen positions to enhance their synthetic viability. The generated structures were then energetically relaxed with the universal force field [316] and screened for their CO 2 /CH 4 sorption selectivity. Colón and co-workers implemented a different approach for constructing hypothetical frameworks. As opposed to the bottom-up crystal enumeration used in the aforementioned studies, the authors used a reverse topological approach (RTA) [317] to generate over 13 500 hypothetical MOF structures with 41 different edge-transitive topologies. 
Their implementation of this approach, where constituent building units are mapped onto topological blueprints in a "top-down" fashion, has been published in the ToBaCCo code; [318] the structures' use has been demonstrated by assessing them for their hydrogen and methane storage capabilities, as well as their Xe/Kr sorption selectivity. Computational Methods The calculation of textural, topological, and adsorption properties underpins HTCS procedures. For an overview of the methods used, we refer the reader to Sections 3.2.1, 3.2.2, and 3.2.3 where geometric and topological descriptors, the hierarchy of simulation methods, and the validity of some common approximations are discussed. The important input parameters when computing adsorption properties with classical approaches (such as GCMC and MD) are the force fields that describe both the framework atoms and the guest molecules. In standard molecular mechanics type force fields, there are bonding terms and non-bonding terms; the latter are typically partitioned into the electrostatic terms and van der Waals terms. It is common practice to simulate MOFs as rigid, [314,315,319] taking the van der Waals parameters from generic force fields, such as the Universal [316] and Dreiding [320] force field. The issue of assigning partial charges is more complex and is discussed in Section 5.3. Adsorbate molecules are often described by empirical force fields fit to match the vapor/liquid coexistence properties of the molecule; [321] however, when these schemes prove to be inadequate, first principle-derived force fields [322,323] or corrections to account for quantum effects may be deployed. [324,325] Notably, Verploegh et al. [326] have derived an enhanced force field for assessing the diffusion of small molecules in MOFs (the specific study was on ZIF structures). The force field was fit to DFT data and gives similar quality of forces to those extracted from periodic ab initio MD studies. Hence, the reported force field lends itself to HTCS of either single or multi-component studies of diffusion and breakthrough measurements, as well as competitive diffusion which is relevant to membrane separation applications. Charge Assignment One of the most significant challenges to developing transferable force fields for MOFs is the issue of charge. MOFs can accommodate, in practice, almost all elements of the periodic table. In zeolites, the framework nature of the materials coupled with their semi-ionic nature and relatively limited atom types means that the parameterization is intrinsically quite tightly constrained. In MOFs there is the possibility of undercoordinated transition metals (exhibiting unusual Jahn-Teller distortions) in varied oxidation states, within the same framework. Added to the mix is the organic linker which can display varied degrees of charge transfer depending on the oxidation state of the metal. This complexity clearly pushes the limitations of fixed charge force fields, although adaptive force fields such as ReaxFF [327] might be promising if adequately trained on ab initio data, for example. The difficulty of finding a method that can accurately model charge distributions in a wide variety of chemical and structural compositions means that assigning these charges is an onerous and hazardous task. Whilst charges can be assigned through DFT-derived partial charges, for example, [328][329][330] transferability is important and these methods require compute intensive calculations for each framework being considered. 
It becomes unfeasible to perform calculations on samples consisting of tens or hundreds of thousands of structures. To remedy this, Zhong and co-workers developed a connectivity-based atom contribution method (CBAC) for fast assignment of partial atomic charges; in this approach it is assumed that atoms with the same bonding connectivity have identical charges across different MOFs. [331,332] The authors computed CO 2 adsorption isotherms using the CBAC method on a small test set of frameworks and good correlation to those calculated with ab initio derived charges was observed. Other methods have been developed by building on the work of Rappe et al., who developed a charge equilibration scheme (QEq) for predicting charge distributions in molecules. [333] QEq assigns charges that are based on the molecular geometry of the system and the experimentally determined atomic ionization potential, electron affinities, and atomic radii. Several groups, such as that of Wilmer and co-workers, have built upon the work of Rappe et al. by developing an extended charge equilibration method (EQEq), taking into account all measured ionization energies for every atom in the periodic table. [334] Moreover, Kadantsev et al. parameterized the QEq method (MEPO-QEq) to reproduce ab initio derived electrostatic potentials in a diverse training set of 543 MOFs. [335] Using a test set of 693 structures, the parameterization was validated by means of cross-comparison between CO 2 uptake and heats of adsorption calculated with charges from the MEPO-QEq method and those derived from DFT. [329] The influence of charge partitioning schemes on simulation results can be seen in Figure 14, where the data produced using different variants of the QEq method is compared to data computed with ab initio derived charges. Applications In this section we highlight HTCS procedures applied to either computer generated or experimentally reported MOFs for the identification of high performance candidates. In particular, we emphasize studies that broaden our knowledge of combinatorial MOF space, restrict the search space necessary for candidate identification using well-informed filters, and present key descriptors for optimizing material properties. Moreover, we will discuss limitations of some of the current methodology and summarize the rate determining steps in these screening procedures. DFT Virtual Screening High throughput DFT is not commonplace on MOF structures, owing to their structural complexity, diverse compositions, and large unit cells, often containing hundreds of atoms. Hence, DFT is yet to be routinely performed over sample sizes consisting of tens or hundreds of thousands of MOF structures. Early work has been conducted in this field on two pertinent aspects of MOFs, hydrothermal stability and the competitive adsorption of water. Low et al. tackled the issues of MOF hydrothermal stability by devising a cluster model approach, where linking ligands were replaced with capping species containing the same functional group bound to the metal. [281] This model was applied to eight experimentally realized frameworks and DFT was used to compute energies associated with hydrolysis and ligand displacement. When this data was compared with experiment data it was found that the activation energies of ligand displacement served as a useful approximation of the relative water stability of the small sample of MOFs used in their study. Further to this, Capena et al. 
investigated the effects of searching composition space, with respect to MOF constituent metals, on the adsorption of H 2 O, CO 2 , CH 4 , and H 2 . [336] The authors focused on MOF-74 as it is known to have a high density of unsaturated metal sites, which have been shown both experimentally and computationally to interact strongly with various adsorbates. [337,338] The group proceeded to substitute Zn-MOF-74 with 25 different metals and subsequently optimized the MOF-74 analogs. The outcome of the DFT calculations led to the discovery of five M-MOF-74 structures, where M=Rh, Pd, Os, Ir, or Pt, that showed preferential binding of CO 2 over H 2 O. In a more recent study, Rosen and co-workers published a fully automated periodic DFT workflow for assessing optimal MOF candidates for use in catalytic processes. [339] As a proof-of-concept the authors applied their procedure to screen MOFs containing unsaturated metal sites from the CoREMOF [309] database for their use in the oxidative C-H bond activation of methane. Hydrogen Storage Alternatively, classical molecular simulation has proven to be an invaluable tool for property prediction, and when used with structure databases it allows HT investigation of the physicochemical properties that influence adsorption in porous materials. Inspired by the prospect of MOFs as hydrogen fuel delivery media, one of the most widely studied phenomena is the adsorption of molecular hydrogen. [340] In attempts to optimize hydrogen storage and deliverable capacity by increasing the MOF-H 2 heat of adsorption through functionalization, Colón and co-workers used a bottom-up approach to generate over 18 000 MOFs and porous aromatic frameworks (PAFs) composed of linkers functionalized with various numbers of magnesium alkoxide sites. [341] GCMC was deployed to screen the structures for their hydrogen deliverable capacities at 243 K and a pressure swing between 100 and 2 bar. Since generic force fields are not sufficiently accurate to model the strong interaction of H 2 with Mg, a first principles derived force field was employed to model this phenomenon. The authors demonstrated that there is a fundamental limitation that prevents a structure from having large volumetric and gravimetric deliverable capacities simultaneously. Optimal gravimetric deliverable capacity was exhibited in structures with low framework density and insertion of relatively heavy Mg sites did not improve this property. On the other hand, optimal volumetric deliverable capacity was exhibited in structures with a balance between void fraction and material density. Furthermore, work by Sikora et al. in analyzing the hMOF database [314] showed that the chosen building units of this bottom-up approach produced only six topologies, where the majority were the primitive cubic unit (pcu) net (over 90% of MOFs in the database). [342] Gómez-Gualdrón, Snurr, and coworkers took note of this work and examined geometric dependencies of hydrogen storage in a topologically diverse sample of over 13 500 hMOF structures, generated using an RTA, [317] and assessing them for their deliverable capacity between 100 bar/77 K and 5 bar/160 K. [343] The authors found that volumetric and gravimetric deliverable capacities were inversely related, where this was linked to the topologically dependant trade-off between volumetric and gravimetric surface areas. [344] It was discovered that different topologies reach a maximum volumetric deliverable capacity at different linker lengths. 
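The deliverable (usable) capacity optimized in these studies is simply the loading at the storage condition minus the loading retained at the depletion condition of the pressure/temperature swing. A minimal sketch is shown below; the Langmuir parameters stand in for fits to GCMC isotherms at the two temperatures and are hypothetical values, not results from the studies cited.

```python
def langmuir(p, q_sat, b):
    """Single-site Langmuir isotherm: loading as a function of pressure."""
    return q_sat * b * p / (1.0 + b * p)

def deliverable_capacity(storage, depletion):
    """Loading at the storage condition minus the loading retained at depletion.

    Each argument is (pressure / bar, q_sat, b) for the isotherm at that temperature;
    q_sat and b would normally come from fitting GCMC data at the two conditions.
    """
    p_hi, q_hi, b_hi = storage
    p_lo, q_lo, b_lo = depletion
    return langmuir(p_hi, q_hi, b_hi) - langmuir(p_lo, q_lo, b_lo)

# Hypothetical parameters (loadings in g-H2 per L of framework) standing in for fits
# to GCMC isotherms at 77 K (storage leg) and 160 K (depletion leg).
usable = deliverable_capacity(storage=(100.0, 45.0, 0.30), depletion=(5.0, 45.0, 0.02))
print(f"usable volumetric capacity ≈ {usable:.1f} g-H2 L^-1")
```

Because the depletion term is evaluated on the warmer, low-pressure isotherm, materials that bind H 2 too strongly retain more gas at depletion and can show a lower usable capacity despite a higher absolute uptake.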
Since topology inherently captures spatial information beyond the local geometric features given by textural properties, it is desirable to use framework topology in tandem with textural descriptors as design variables for novel high performance frameworks. Further to this, the group validated their automated MOF construction process by successfully synthesizing four "she" topology frameworks; comparison of the empirically determined and simulated PXRD patterns of these structures showed the computational predictions were consistent with experimental observations. In more recent work, Siegel and co-workers conducted the largest screening of MOFs for hydrogen storage to date. [345] Half a million frameworks were collated from 11 published databases, including CSD-derived and computer-generated structures. In an effort to offset the temporal cost associated with brute-force screening of their entire sample, the semiempirical Chahine rule [308] was used to estimate total gravimetric and volumetric deliverable capacities of each structure, refining the search space of their aggregated repository to 43 777 frameworks. Subsequently, the frameworks underwent further evaluation by GCMC simulation to compute deliverable capacities of each framework at cryogenic operating conditions. The pseudo-Feynman-Hibbs model for H 2 was used to account for quantum effects that are expected to be significant at low temperature. The authors used the record holder for balanced hydrogen storage capacity, IRMOF-20, as a benchmark; this yielded 102 CSD-derived frameworks and 5957 hypothetical structures that exceeded their benchmark in usable pressure swing deliverable capacities. In order to empirically verify their screening procedure, two real MOFs and one hypothetical structure were synthesized and tested. The resulting experimental data was in good accord with the simulated adsorption isotherms. Moreover, analysis of the full data set showed a theoretical usable volumetric capacity ceiling at ≈40 g-H 2 L −1 , highlighting a need for the design of new frameworks with respect to high volumetric usable capacity. The difference in hydrogen deliverable capacities between experimentally determined and computer-generated MOFs can be seen on inspection of Figure 15.
Figure 15. For MOFs in the 11 databases used by Siegel et al., [345] (A) shows their hydrogen volumetric and gravimetric deliverable capacities and (B) displays the probability distribution of usable volumetric deliverable capacities of the real and hypothetical frameworks. A,B) Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [345] Copyright 2019, The Authors, published by Springer Nature.
Functional groups are explicitly considered as building units in the construction of some hypothetical databases, [314,315] meaning there may be higher numbers of functionalized hypothetical structures than pristine frameworks. This possibility, coupled with the limited number of framework topologies [342] and metal clusters present in the hMOF repositories, is seen to be beneficial for this application. However, the restricted topological diversity in particular hinders the transferability of these hypothetical frameworks to shape-selective applications, such as catalysis.
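The semi-empirical pre-filter used by Siegel and co-workers above can be illustrated with a short sketch. The Chahine rule is often quoted as roughly 1 wt% of excess H 2 uptake at 77 K per 500 m 2 g −1 of surface area; the thresholds, the density-based volumetric estimate, and the example entries below are assumptions for illustration only.

```python
# Minimal sketch of a Chahine-rule pre-filter used to prune a large database before GCMC.

def chahine_wt_percent(bet_area_m2_per_g: float) -> float:
    """Rule-of-thumb estimate of excess H2 uptake at 77 K, in wt%."""
    return bet_area_m2_per_g / 500.0

def prefilter(frameworks, min_wt_percent=4.5, min_volumetric_g_per_l=30.0):
    """Keep only frameworks whose estimated uptake clears both thresholds."""
    survivors = []
    for name, bet_area, crystal_density_g_per_cm3 in frameworks:
        wt = chahine_wt_percent(bet_area)
        # crude volumetric estimate: wt% of H2 per gram, scaled by framework density
        volumetric = wt / 100.0 * crystal_density_g_per_cm3 * 1000.0  # g-H2 per L
        if wt >= min_wt_percent and volumetric >= min_volumetric_g_per_l:
            survivors.append(name)
    return survivors

# Hypothetical entries: (name, BET surface area / m2 g-1, density / g cm-3)
candidates = [("MOF-A", 4100, 0.45), ("MOF-B", 1800, 1.10), ("MOF-C", 6300, 0.25)]
print(prefilter(candidates))  # only MOF-A and MOF-C clear both cutoffs
```

Estimates this cheap are what make it practical to triage an aggregated repository of half a million structures before any GCMC is run.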
Further work on top-down approaches to increase the size, topological, and structural distributions of these databases will allow HT structure-property information to be produced for a wider range of applications over larger sample sizes. [318] Moreover, the data produced by studies such as this could benefit from scrupulous analysis by means of machine learning algorithms (discussed in Section 5.5). This could aid in the identification of optimal framework property combinations in unexplored regimes, which could constitute design criteria for new high performance structures. However, depending on the repository used in data generation the statistics produced may be biased toward the dominant topology present. Methane Storage Methane is another important gas fuel and was heavily studied at the beginning of the last decade, motivated by applications such as natural gas vehicles. [346] The pioneering work by Wilmer et al. in the automated construction of hypothetical MOF structures (and the hMOF database), which were subsequently analyzed for their methane storage performance, [314] has been further built upon by Gómez-Gualdrón and co-workers. [347] Approximately 120 pcu MOFs were taken from the hMOF database, supplemented by 39 idealized carbonbased porous materials, and using GCMC the authors explored the limits of methane deliverable capacity. An investigation into the effects of manipulating the well depth, ε, of their Lennard-Jones parameters describing the framework was conducted. This was done by multiplying ε by 2 and 4 to homogeneously increase MOF-methane interaction strength in order to approximate functionalization that maintained structural characteristics. From this, several structure-properties relationships were identified, it was found that at higher values of ε: i) the lower bound of the range of volumetric surface area necessary for optimal deliverable capacity decreased and ii) the optimal pore size range shifted toward larger pores, suggesting that functionalization only increases deliverable capacity past a minimum pore size threshold. Further to this, the authors derived an analytical equation, based on MOF and methane properties, that could successfully predict GCMC-simulated methane deliverable capacities for 95% of their repository with an error of 50 cm 3 (STP)cm −3 . [348] Whilst not a limitation-free metric, analytical predictions such as these are effective descriptors for reducing the search space needed for the identification of high performance candidates. The methods of database construction mentioned in the studies above are based on known building units extrapolated from crystallographic data; [314,315] these techniques rely on exploration of MOF space using libraries of pre-existing linker species. This is a limitation of the existing databases as sampling of composition space is restricted by the input of pre-defined building units (often commodity chemicals). This has been addressed by Bao et al. who developed a de novo evolutionary algorithm to explore the combinatorial space of linker molecules in order to optimize methane deliverable capacity in predicted MOFs. [349] The method explicitly considers the synthetic viability of the linker species by using known chemical transformations and a precursor library of commercially available molecules for an in silico search of linker space. 
The algorithm is initiated with a population of 100 linker molecules and in each generation a linker molecule is subject to several filters; if the linker species passes all filters an MOF is built with a selected topological net. The newly constructed MOF is evaluated for methane deliverable capacity with a pressure swing of 65-5.8 bar, at 298 K, and the linker is inserted into the population in rank order, whilst maintaining a population size of 100; hence, the lowest rank is discarded. Using this approach and MOF-5 as a benchmark, the authors found 48 predicted MOFs in four nets, amongst the nine used, having higher deliverable capacity than their benchmark material. Instead of exhaustively screening large databases of porous materials, the authors have evolved MOFs in composition space whilst taking into account the known constraints of chemical synthesis; providing a rare connection between HTCS and synthetic chemistry. Other Adsorption Applications Genetic algorithms (GA) have also been deployed by Collins and co-workers who developed a GA that makes use of experimentally realized MOFs. [350] Their algorithm searches materials space with respect to the functionalization of these structures in order to optimize adsorption properties of interest. The GA was applied in an effort to maximize CO 2 uptake in 141 experimentally characterized frameworks; the myriad of combinations possible due to the functionalization of the linker species led to a total search space of 1.65 trillion structures. Thirteen GA parameters were optimized using three properties: CO 2 uptake, surface area, and parasitic energy [239] (which been discussed in Section 3.3.2); where the gas adsorption properties were determined from GCMC simulations. A unique mutation algorithm was employed that replaced a chosen functional group with a chemically similar analog, determined by electrostatic and van der Waals potentials, and local steric availability. Remarkably, CO 2 uptake was explicitly calculated for only ≈580 000 frameworks to screen the entire search space; leading to the identification of 1035 derivatives of 23 parent structures that displayed exceptional CO 2 uptake. In a recent study, Li et al. evaluated ethanol as a working fluid for alcohol-based adsorption driven heat pumps. [351] This was done by using GCMC integrated with standard thermodynamic equations to screen ≈2900 MOFs with high-quality ab initio derived point charges from the CoREMOF [311] database. A three-tiered screening process was deployed, systematically increasing the number of MC steps in each simulation round, and simulating: evaporation, condensation, and desorption. From this physicochemical properties were extricated that influenced the coefficient of performance for cooling (COP C ). Analysis showed frameworks with LCDs between 10 and 15 Å correspond to a high ethanol working capacity and relatively low enthalpy of adsorption, which maximizes the COP C value. Moreover, principal component analysis and decision tree modeling were employed to determine the dominant features affecting performance, revealing that LCD, working capacity, and enthalpy of adsorption were key descriptors for COP C . In particular pore size played a dominant role in the COP C value for MOFs with a low working capacity, whilst enthalpy of adsorption dominated in those with high working capacity. 
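Several of the searches described above (the linker-evolution and functionalization GAs in particular) share the same evolutionary loop: a fixed-size population is ranked by a simulated fitness, the best members are retained, and variants are generated for the next round. The sketch below is a generic, minimal version of that loop; the three-number genome, the placeholder fitness function (standing in for a GCMC evaluation), and all parameter values are illustrative assumptions.

```python
# Generic sketch of a GA-guided search with elitism; all settings are illustrative.
import random

random.seed(1)
N_POP, N_GEN, N_ELITE = 100, 20, 10

def random_candidate():
    # stand-in genome: (void fraction, heat of adsorption / kJ mol-1, pore diameter / A)
    return (random.uniform(0.2, 0.9), random.uniform(15, 45), random.uniform(4, 20))

def fitness(c):
    """Placeholder performance score (working capacity x selectivity surrogate)."""
    void, qst, pore = c
    working_capacity = void * max(0.0, 1.0 - abs(pore - 8.0) / 12.0)
    selectivity = (qst / 45.0) * (1.0 - void) + 0.1
    return working_capacity * selectivity

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(c, rate=0.2):
    return tuple(g * random.uniform(0.9, 1.1) if random.random() < rate else g for g in c)

population = [random_candidate() for _ in range(N_POP)]
for _ in range(N_GEN):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[:N_ELITE]                               # elitism: keep the best as-is
    offspring = [mutate(crossover(*random.sample(elite, 2))) for _ in range(N_POP - N_ELITE)]
    population = elite + offspring

best = max(population, key=fitness)
print("best candidate:", tuple(round(g, 2) for g in best), "score:", round(fitness(best), 3))
```

In the published workflows the expensive step is the fitness evaluation itself, so the value of the GA lies in how few candidates it needs to evaluate relative to a brute-force sweep.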
Carbon Capture and Separation Modeling the separation of gaseous species is a more complex task than pure component adsorption, as either multi-component simulations must be conducted or approximations using pure component data must be employed to assess a material's capacity. Diffusion effects can also play a more dominant role in separation processes and the metrics used to rank promising candidates must be defined on a case-by-case basis. One of the most intensely studied separations is that involving CO 2 capture, as the development of carbon capture and separation technologies is of high importance to mitigate greenhouse-gas emissions. [352] Using their previously constructed database of over 130 000 hypothetical structures, Wilmer and co-workers computed pure component adsorption data for CO 2 , CH 4 , and N 2 ; the results were used to calculate five adsorbent evaluation criteria (shown in Table 3) for four different separation cases, based on pressure swing adsorption (PSA) and vacuum swing adsorption (VSA) processes. [353] The resulting data was scrupulously analyzed to identify structure-performance relationships, which revealed trends that were not apparent from studies with fewer samples. In the case of landfill gas separation via the PSA process, optimal working capacity was achieved in structures with heats of adsorption ≈21 kJ mol −1 and optimal selectivity was achieved in MOFs with void fractions ≈0.6 to 0.8. On the other hand, for the sequestration of CO 2 from flue gas via the VSA process, where CO 2 has a lower partial pressure, selectivity was optimized in structures with void fractions in the range ≈0.3 to 0.4. The competitive adsorption of water in MOFs is an important consideration when modeling processes that occur under humid, or even ambient, conditions. This aspect is often neglected in simulations, probably due to the complexity of accurately modeling water adsorption. However, Li et al. tackled this issue by screening ≈5000 CoREMOF [309] structures for their CO 2 /N 2 /H 2 O sorption selectivity. [354] The authors' screening procedure consisted of: i) efficiently assigning framework charges with the EQEq method, ii) employing Widom particle insertion to calculate Henry constants for each species, iii) refining the search space through selectivity-based ratios of CO 2 /H 2 O Henry constants, iv) recalculating framework charges with the higher accuracy REPEAT method [329] for 15 top performing MOFs, v) deploying GCMC to compute binary CO 2 /H 2 O and ternary CO 2 /N 2 /H 2 O sorption selectivity. By cross-comparison of the data produced by the two charge schemes, it was found that Henry constants for H 2 O were more sensitive to the choice of method for estimating partial charges than CO 2 and N 2 . Additionally, the MOF-guest interaction energies showed that Coulombic interactions contributed a higher fraction of the adsorption energy of H 2 O than the other two non-polar adsorbates, again emphasizing the importance of selecting an internally consistent approach for calculating framework charges when modelling competitive adsorption. The inorganic/organic hybrid nature of MOFs provides the possibility of highly tunable structures. Searching the combinatorial space of these frameworks by introduction of multiple linker types and functionalities can have synergistic effects. Multi variate MOFs (MTV-MOFs), those containing multiple linker types within one structure, have been experimentally realized for their selective soprtion capabilities. [355] Later, Li et al. 
built on this idea by conducting a systematic large-scale screening of ≈10 000 computer-generated MTV-MOFs to probe the structures for their CO 2 /N 2 sorption selectivity and CO 2 capacity. [356] The construction process consisted of generating pcu topology frameworks with copper paddlewheel nodes and 20 organic linkers; each MTV-MOF consisted of three linker types, functionalized with -F, -NH 2 , and -OCH 3 . In addition, 560 unfunctionalized parent frameworks and 10 monolinker MOFs were constructed for cross-comparison. CO 2 uptake and CO 2 /N 2 sorption selectivity were computed with single-component and multi-component GCMC, respectively. By grouping MTV-MOFs by families of unfunctionalized parent structures and taking the average of the adsorption properties per family, the authors found that the functionalized derivatives exhibited better CO 2 /N 2 selectivity and higher CO 2 capacity than their parent counterparts, except for seven frameworks, all with largest cavity diameters between 5 and 6.5 Å. The enhancement in CO 2 selectivity and capacity caused by functionalization was maximized in MOFs exhibiting small pore geometries. However, if the pores are too small then functionalization blocks accessibility and causes a reduction in gas adsorption, revealing important descriptors for the design of new high performance frameworks.
Table 3. Adsorbent evaluation criteria used by Bae and Snurr to assess the effectiveness of porous materials for CO 2 separation and capture. [352] N, γ, and the superscripts "ads" and "des" refer to number of molecules, the mole fraction in the gas phase, adsorption, and desorption conditions, respectively. Reproduced from Bae and Snurr. [352]
Another important application involving the separation of gas mixtures is the pre-combustion processing of high pressure streams of CO 2 /H 2 mixtures. [357] Chung et al. sought to identify top MOF candidates for the selective uptake and sequestration of CO 2 from the aforementioned mixture. [358] A novel aspect of this HTCS approach was the development of a GA for the in silico discovery of high performance porous networks for this application. Starting with an initial population of 100 hMOFs, each generation was evolved by implementing elitism and applying genetic operations to hMOF pairs to form a subsequent generation (maintaining a population of 100 frameworks at each step). Three independent GA runs were performed to optimize the three fitness measures, namely CO 2 working capacity, CO 2 /H 2 selectivity, and adsorbent performance score (the product of the former two fitness measures), computed by "on the fly" GCMC simulations. The GA runs produced data for the fitness criteria of 730 unique hMOFs, from which ≈50 frameworks showing high performance for this application were extracted. The remarkable aspect of this GA-guided search was the reduction in computational cost relative to a brute-force screening approach, which is shown in Table 4.
Alternative Separation Materials and Processes
Whilst MOFs have shown great promise as adsorption-based carbon capture technologies, mixed matrix membranes (MMMs) have recently become an area of interest for HT researchers as potential candidates for CO 2 capture. MMMs consist of polymer membranes with inorganic particles (in this case MOFs) dispersed throughout the polymer matrix.
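A common way to estimate the permeability of such a composite from the permeabilities of its two phases is the Maxwell mixing rule for a dilute dispersion of filler, which is also the model employed in the study discussed next. The sketch below uses hypothetical permeability values and filler loading; they are not data for any specific polymer or MOF.

```python
# Minimal sketch of the Maxwell mixing rule for a mixed matrix membrane.
# Inputs are illustrative assumptions, not measured or simulated values.

def maxwell_permeability(p_polymer: float, p_filler: float, phi: float) -> float:
    """Effective permeability of a dilute dispersion of filler (volume fraction phi)."""
    num = p_filler + 2 * p_polymer - 2 * phi * (p_polymer - p_filler)
    den = p_filler + 2 * p_polymer + phi * (p_polymer - p_filler)
    return p_polymer * num / den

# Hypothetical single-gas permeabilities in Barrer for a polymer matrix and a MOF filler.
co2 = maxwell_permeability(p_polymer=8.0, p_filler=4000.0, phi=0.25)
n2 = maxwell_permeability(p_polymer=0.3, p_filler=150.0, phi=0.25)
print(f"MMM CO2 permeability ≈ {co2:.1f} Barrer, CO2/N2 permselectivity ≈ {co2 / n2:.1f}")
```

Applying such a mixing rule to precomputed MOF and polymer permeabilities is what makes it feasible to enumerate permeabilities for over a million candidate membranes.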
Budhathoki and co-workers conducted a multi-scale study, computing CO 2 /N 2 selectivity and CO 2 permeability and carrying out techno-economic evaluations, presenting a novel connection between atomistic MOF structures and the cost of carbon capture (CCC). [359] The authors screened both CoREMOF [309] and Wilmer's database, [314] and in order to take into account whether MOFs in the MMMs were CO 2 /H 2 O sorption selective or not, the selectivity data from Li et al. [354] was used to rank the real MOFs. By calculating the permselectivity of each framework and employing the Maxwell model, [360] along with experimental data for nine polymers, the gas permeabilities of over a million MMMs were computed. The authors went on to use process modeling to assign a predicted CCC to each hypothetical MMM in their repository. Their analysis showed that MOFs with LCD in the range 4-10 Å and PLD in the range 4-5 Å had superior adsorption and diffusion selectivity, respectively. Furthermore, techno-economic evaluation found that 1153 MMMs were predicted to yield a low CCC, 16 of which were based on CoREMOF structures with favorable CO 2 /H 2 O sorption selectivity. The final application we have chosen to highlight relates to the industrially important process of separating linear and monobranched hexane isomers from their dibranched counterparts to enhance the research octane number. Chung and co-workers conducted a screening of both MOFs, from the CoREMOF database, [309] and zeolites, taken from the IZA, [195] in order to identify optimal adsorbent materials for this separation process. [361] Unlike the previous HT separation studies, which focus on small gaseous molecules modeled as rigid bodies, hexane isomers require full flexibility for accurate property prediction. Therefore, the authors employed CB-GCMC (see Section 3.2.2) in order to produce distributions of adsorbate conformers and used the Widom particle insertion approach to calculate Henry constants. A pore size cutoff and selectivity-based ratios of Henry constants were used as an initial screen to reduce the sample size to 501 structures. Selectivities for an equimolar five-component mixture of hexane isomers were then computed for the refined set of structures, which were ranked by their affinity to adsorb linear and monobranched hexane isomers. The outcome of the screening procedure was the identification of 22 high performance candidates for this separation process. The authors went on to further assess three structures' viability for this industrial process by conducting column breakthrough simulations. Moreover, by assessing the role of channel shape and conducting thermodynamic analysis, important molecular-level insights into this separation process were gained. The studies covered in this section give a broad overview of HTCS methodologies that can be implemented to study particular applications; we have sought to highlight their limitations and rate-determining steps. GCMC or MD simulations are not individually expensive in the case of adsorption and separation of small gaseous molecules, but the cumulative cost of running these calculations across the vast number of structures available can lead to intractable simulation time frames. In these cases, the development of structural screens and semi-empirical methods to restrict the search space is key to the efficient identification and evaluation of promising candidates.
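As an illustration of such search-space restriction, the sketch below applies a pore-size cutoff and a Henry-constant ratio (an infinite-dilution selectivity proxy) of the kind used in the hexane isomer screen above before any costly (CB-)GCMC is attempted; the thresholds and example entries are assumptions for illustration.

```python
# Minimal pre-screen: a pore-limiting-diameter cutoff plus a Henry-constant ratio
# restricts the search space before expensive simulations. Thresholds are assumed.

def prescreen(structures, min_pld=4.5, min_henry_ratio=5.0):
    """Keep structures whose PLD and K_linear/K_dibranched ratio clear the cutoffs."""
    shortlist = []
    for name, pld_angstrom, k_linear, k_dibranched in structures:
        if pld_angstrom < min_pld:
            continue                      # guest cannot diffuse through the pore network
        if k_linear / k_dibranched < min_henry_ratio:
            continue                      # insufficient preference for the target isomer
        shortlist.append(name)
    return shortlist

# Hypothetical entries: (name, PLD / Angstrom, Henry constant linear, Henry constant dibranched)
database = [("MOF-X", 5.2, 3.1e-5, 2.0e-6), ("MOF-Y", 3.8, 9.0e-5, 1.0e-6), ("MOF-Z", 6.0, 1.2e-5, 8.0e-6)]
print(prescreen(database))  # only MOF-X survives both cutoffs
```

Filters of this kind discard the bulk of a database at negligible cost, reserving full simulations for the remaining shortlist.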
This has been highlighted through the use and development of analytical equations to approximate methane [347] and hydrogen [308] storage in MOFs, the identification of key descriptors for materials, and GA-guided searches through combinatorial MOF space. [349,350,358] In the case of flexible guest molecules, further problems arise as the computational cost increases dramatically when CB-GCMC is employed. These issues can be addressed by implementing initial filters that assess a framework's validity in terms of both its structural characteristics and chemical composition, taking the form of pore size cutoffs and ratios of Henry constants. [361] This allows further restriction of the search space to only the most promising structures, necessary for offsetting the cost of these simulations. Key information produced by these studies includes the synergistic combinations of structural and chemical descriptors that can be extricated from high performance candidates. However, these relationships are not always linear and machine learning algorithms may need to be deployed both to identify these dominant features and to identify feature combinations or frameworks not included in the original data set. Applications of Machine Learning The combinatorial nature of MOF structures lends itself to HT in silico investigation. However, depending on the time investment associated with assessing these structures' properties, simulation time frames can become unfeasible when deployed across vast databases of frameworks. ML proves to be an invaluable asset in this regard as employing these technologies can make virtual screening of colossal search spaces feasible. The data produced by HTCS can be used to train such algorithms, which can be deployed to: identify nonlinear structure-property relationships that were previously not discernible, further search MOF space to identify high performance candidates not present in the given data, and to guide the in silico synthesis of new frameworks for application. The field of HT-assisted machine learning is an ever-growing research area and has been recently reviewed; [362] we give here a brief overview of the descriptors and methodologies that can be implemented for data analysis and acquisition, and materials discovery. Fernandez et al. employed quantitative structure-property relationship (QSPR) models using multiple linear regression (MLR) analysis, decision tree regression (DT), and nonlinear support vector machines (SVM) to systematically correlate MOF structural features with their methane storage performance. [363] Using six structural descriptors and methane storage data calculated for ≈130 000 hMOFs, the authors identified the framework void fraction and dominant pore diameter as the key features affecting methane storage. It was also found that SVMs outperformed MLR and DTs in the prediction of methane storage. Response surfaces of methane uptake produced by the SVM showed that the MOF database contained a limited distribution of void fraction and dominant pore diameter combinations and identified a maximum corresponding to combinations of these features not present in the original data set. The same group went on to do preliminary QSPR analysis on the same library of MOFs to examine their CO 2 capture properties and observed poor correlation between geometrical features and CO 2 capture at pressures relevant for gas separation applications. 
This led to the development of a novel atomic property weighted radial distribution function (AP-RDF) descriptor tailored for large-scale QSPR predictions of gas adsorption in MOFs. [364] The group trained an SVM with AP-RDF scores to categorize MOFs as having high or low CO2 capacity and, when applied to a test set of ≈290 000 structures, the QSPR classifier could recover 945 of the top 1000 MOFs whilst flagging only 10% of the repository for compute-intensive screening. [365] Using these machine learning classifiers to supplement high throughput workflows could lead to orders of magnitude reduction in the computational expense associated with HTCS. Furthermore, Simon et al. applied a hybrid ML/molecular simulation workflow to a database of over 600 000 experimentally realized and hypothetical porous materials to assess their Xe/Kr sorption selectivity. [366] The authors trained a random forest of decision tree regressors using a training set of 15 000 structures, six structural descriptors, and the Voronoi energy (this novel descriptor takes into account both structural features and the energetics associated with guest-host interaction). [367] The model was applied to their entire repository of porous networks and molecular simulation was used to assess only the most promising candidates. Of the ≈600 000 structures, the random forest predicted 20 000 as promising candidates; these subsequently underwent further evaluation by GCMC simulation. This demonstrates further the use of this screening paradigm in refining the necessary search space for high performance candidate identification. Recent work conducted by Zhang et al. sought to develop a generative model for the in silico synthesis of high performance frameworks. [368] By selecting ten different combinations of metal nodes and topologies, and deploying an algorithm utilizing Monte Carlo tree search combined with a recurrent neural network, the authors were able to search composition space and tailor novel MOFs to target applications. The algorithm's use was demonstrated by applying it to the case study of methane storage and carbon capture (estimating the reward function by GCMC simulation), where it successfully and efficiently designed high performance frameworks for these applications. Moreover, topological data analysis was employed to assess whether the set of novel MOFs generated was sufficiently diverse in its composition. As a similarity measure, organic linkers were represented using a topological fingerprint [369] and pairwise comparisons of the frameworks were conducted. This demonstrated that the algorithm was able to generate a diverse set of high performing structures, as opposed to being constrained to derivatives of a particular composition. In a similar vein, Lee et al. sought to identify top MOF candidates for methane deliverable capacity; [370] the authors developed an advanced MOF construction algorithm with the capability to generate ≈247 trillion structures by utilizing 1775 topologies and a large variety of structural building units. By combining an evolutionary algorithm with an artificial neural network, the authors were able to efficiently parse a search space of over 100 trillion structures; this led to the identification of 96 frameworks that exceed the current world record for methane deliverable capacity. As highlighted in this section, ML can be a powerful tool for structure and property prediction; however, the Achilles' heel of these algorithms is the data they are trained on.
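To make this screening paradigm concrete, the following is a minimal sketch of the surrogate-then-simulate workflow described above; it is illustrative only and not code from any of the cited studies. The descriptor table, file names, and the 3% short-listing threshold are assumptions, and scikit-learn's random forest stands in for whichever regressor (SVM, random forest, or neural network) a given study actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical inputs: one row of structural descriptors per framework
# (e.g., void fraction, dominant pore diameter, surface area, Voronoi energy, ...).
descriptors = np.load("descriptors.npy")        # shape: (n_structures, n_features)
simulated_idx = np.load("train_indices.npy")    # structures already run through GCMC
simulated_y = np.load("train_selectivity.npy")  # e.g., Xe/Kr selectivity from GCMC

# 1. Train a random-forest surrogate on the small simulated subset.
surrogate = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=0)
surrogate.fit(descriptors[simulated_idx], simulated_y)

# 2. Predict the target property cheaply for the entire library.
predicted = surrogate.predict(descriptors)

# 3. Keep only the top few percent of predicted performers for
#    compute-intensive GCMC evaluation (the 3% cutoff is illustrative).
n_keep = int(0.03 * len(predicted))
shortlist = np.argsort(predicted)[::-1][:n_keep]
print(f"{n_keep} of {len(predicted)} structures forwarded to GCMC")
```

Whatever model is used in step 1, the quality of the shortlist it produces is bounded by the quality and coverage of the training data, which returns us to the point above about what these algorithms are trained on.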
Whilst importance weighting can be used in order to produce algorithms that are unbiased by anomalous data, large quantities of erroneous data produced by HTCS procedures may skew ML predictions, hindering the rate of materials discovery. Critical Remarks and Further Studies HTCS, as seen in the studies covered in this section, has proven to be an invaluable tool for the fast identification of porous materials for particular applications, helping to expedite the time between hypothesis and discovery. However, non-uniformity of methodology for database construction and simulation parameters can lead to hazardous ramifications such as: highly skewed simulation statistics, unrealistic representation of guest-host interactions, and computational sampling of MOF space that is not necessarily representative of the topological and structural diversity of real MOFs. The methodological differences in constructing the CSD-derived databases present a challenge for HTCS, as different operating conditions require different treatment of solvent and automating this process can be onerous when deployed over large samples; this brings into question whether the simulated crystal structure is representative of the true nature of the framework. To address the latter, work has been conducted to identify erroneous structures labeled as identical between the CSD non-disordered MOF subset [312] and the CoREMOF [309] database. [371] By computing pure component and binary mixture data for CH4 and H2, and assessing four adsorption performance metrics for 3490 identically labeled structures, it was determined that 387 frameworks produced significantly different simulation statistics between the two databases. The cause for this was the difference in the structural information present for each framework in the repositories; the five cases outlining how the structural information differed between the CSD non-disordered MOF subset and the CoREMOF database are shown in Figure 16. Further to this, different synthetic conditions coupled with the solvent removal techniques used in database construction and/or geometric relaxation of the structures can lead to duplicate frameworks being present within databases. Barthel et al. [372] sought to systematically identify topological duplicates in a set of 502 DFT optimized structures from the CoREMOF database. By assessing the invariance of a set of descriptors such as the atom type, number, and those derived from the graph describing the framework's bonding network, it was found that only 72.5% (364) of the 502 structures were reliable, with the other 27.5% having incorrect structural information or being redundant duplicates. The findings of these two studies highlight the need for caution when conducting HTCS procedures, as, if these erroneous structures are left untreated, the statistics produced can be skewed and may even lead to the identification of candidate materials that could be fictitious. The building block nature of MOFs has been heavily exploited by the computational community; indeed, the number of hMOFs far surpasses the number of known experimentally reported frameworks. Many studies make use of the hMOF repositories to produce large quantities of structure-property information that constitute design criteria for new structures.
However, the topological diversity in hypothetical framework databases can be limited and therefore the structure-property information produced can be biased toward the dominant topology present in the data set; this limits the transferability of these structures to shape-selective processes like catalysis. However, work is being conducted to mitigate this hindrance, such as implementation of the RTA [317] in the ToBaCCo code [318] and further developments of this code in extending the number of nets to include non-edge-transitive topologies. [373] One pressing question is the synthetic feasibility of the frameworks.
Figure 16 (caption fragment): the MOFs in (C) are color-coded based on the cases they were categorized in, where uncolored data points represent the MOFs that were corrected by the authors during the study and double-colored data points represent the MOFs where two cases apply. A-D) Reproduced with permission. [371] Copyright 2019, Royal Society of Chemistry.
Work by Anderson and Gómez-Gualdrón addressed this issue; they devised a computational approach to assess the synthetic likelihood of computer-generated MOFs. [374] Their study provides evidence that crystal free energies could be key to understanding the synthetic likelihood of hypothetical structures. Establishing in silico procedures for assessing synthetic viability, structural stability, and hydrothermal stability of MOFs will play a key role in restricting the ever-growing search space of these materials to only those that could be viable candidates for industrial processes. In regard to the force fields used in HTCS, generic force fields available for framework atoms and those that are optimized to reproduce properties of the adsorbates differ by their treatment of the van der Waals and electrostatic components. Efforts have been made to systematically quantify the discrepancies in simulation data produced between different force fields. McDaniel et al. compared the validity of the EMP2 force field for CO2 [375] and the TraPPE force field for methane [376] with a benchmark ab initio derived force field. [377] It was found that whilst gas uptake is relatively insensitive to force field choice at high pressures (assuming an accurate adsorbate-adsorbate potential), MOF-guest interaction is of higher significance at low pressures and the accuracy of standard force fields is dependent on functional and topological features of the structures as well as the adsorbate type. In the case of CO2, polarization and van der Waals interactions have distinctly different effects on the adsorption site distributions in MOFs; however, good correlation between the rankings produced by the force fields was found in general. For HTCS studies, where the goal is to identify the top percentage of MOFs for a given application, employing generic force fields is a reasonable approximation. However, caution is still warranted, especially where performance metrics that rely on the sampling of configurational space are used. Despite these shortcomings, HTCS, as seen in this section, is a powerful and invaluable tool for efficient property prediction. The data produced from these studies helps both computational and experimental scientists to refine the search space of whichever property they are looking to optimize. However, data is not always made publicly available and, when it is, it may not be in a form that is easily interpreted or parsed.
Notably, Moghadam and co-workers screened ≈2 900 frameworks with high-quality ab initio charges, developed by Nazarian et al., [311] for their oxygen deliverable capacities. [378] The arising data was then made available in an open-access interactive 5D visualization and data-mining tool, allowing 1000 unique structure-property relationships to be generated according to the user's interests. Open-access tools such as these are a desirable commodity that allow experimental or simulation effort to be focused on only the most promising candidates for given applications. Covalent Organic Frameworks We now very briefly highlight selected developments from the COF field to illustrate the use of techniques and approaches which have only partially been adopted or exploited in the zeolite and MOF community. The aim of this section is not to review these fields in toto but to pick out some novel studies that could inspire different approaches within the MOF and zeolite fields. COFs are 2D or 3D porous crystalline polymeric materials composed of the light elements H, B, C, N, O, and S or P. Like zeolites and MOFs, secondary building units are linked to form well-ordered, crystalline materials, where the SBUs in this case are organic molecules. COFs are attractive gas storage materials owing to the strength of covalent bonds and their intrinsically lightweight frameworks, a consequence of their chemical composition. Despite the almost unlimited potential for forming frameworks, the number of experimentally reported COF structures is still relatively small, less than 500, and the number of computational studies in this area is also relatively small. For a comprehensive introduction and overview of COFs and their uses, the reader is referred to several excellent reviews on this topic. [379][380][381] We note that the structure of COFs is usually less ambiguous than that of MOFs; for example, disorder is less common, and solvent is typically weakly coordinated to the framework rather than completing the coordination shell of an atom such as a metal. However, partial interpenetration of the networks is a feature of some COFs, leading to a framework occupancy of less than unity for the averaged crystal structure. There are relatively few reports of experimental HTS of COFs, but a notable early paper is that of Dogru et al., [382] who synthesized the mesoporous material BTP-COF. The structure is notable for the large pore sizes of 4 nm and also for the synthetic approach. The synthetic route and parameters were reported to have been optimized through a robotized dosing system, although the number of permutations was not reported. A recent study by Wang et al. [194] reported the use of divergent synthesis strategies to generate pools of modified reagents. These were then combined to generate eight hitherto unreported COFs using a multi-step synthesis route. Such a systematic approach would appear to be very amenable to robotic synthesis, potentially informed by successful and unsuccessful experiments in the manner reported by Moosavi et al. [294] for MOFs. An early study by Bureekaew and Schmid [383] used a small pool of reagents to generate hypothetical structures in a variety of topologies. The structures were fully optimized using the ab initio derived MOF-FF [384] force field, which yielded predictions of the lowest energy structure out of the possible topologies, although the overall feasibility of these frameworks was not reported.
A notable feature of the work, apart from being one of the earliest studies of hypothetical COFs, was the use of a GA to identify the optimal orientation of ligands within a given topology. Martin et al. [385] reported an early attempt to shrink the synthesis space of COFs according to known synthetic routes and using commodity/commercially available reagents for the bridging linkers. Since COFs lack metal constituents, these structures lend themselves to energetic ranking by more sophisticated approaches than force fields alone. In this study, Martin et al. used the PM6-DH2 method within MOPAC12 (applying periodic boundary conditions) to optimize 620 noninterpenetrated frameworks, and these structures were then assessed for their capability to interpenetrate using Zeo++. [214] Combining all permutations of the 620 frameworks to form interpenetrated structures, a total of 4147 structures were found to be geometrically well matched. These 4147 structures were finally optimized using DFT to identify structures for potential methane storage applications. By explicitly considering the economy of synthesis, such simulations start to push the technology toward practical application. Outlook In this review, we have sought to highlight developments in the field of porous materials toward the realization of a wholly integrated workflow between experiment and computer simulation and analysis. While synthetic approaches are relatively mature and the new vanguard is automating synthesis, new approaches are being developed that promise to dramatically improve the efficacy of synthesis, such as the "chemputer". [396,397] For simulation and analysis, there is greater scope for improving interoperability and veracity. Any computer model is necessarily a simplification of the experimental conditions, and so integration of experimental results provides invaluable data to assess the robustness of the current model and refine it. Physical experiments are the ultimate test of a model's predictive power; on one level, the experiment may reveal inadequacies in the parameterization of the model. These deficiencies may lead to an incorrect prediction of the thermodynamic ground state, but with adequate training, a useful model should be trustworthy and accurately predictive. However, a far more subtle and complex aspect of the physical experiment is that it probes kinetic aspects of the reaction. Typical syntheses of the materials discussed in this review require several hours, and this timescale is not accessible to the models that are typically used for this type of prediction (DFT-based simulations, FF simulations, and DFT-trained/machine-learned models). In terms of improving models, the advent of machine learning approaches, taught from increasingly large and diverse databases of ever more reliable DFT data, should ensure that the models have the potential to become ever more robust. The inexorable growth of computing power means that an adequate training pool for geometries and their associated energies will be achieved more readily. There are cross-cutting and interdisciplinary challenges to developing more automated materials discovery processes. It is noteworthy that techniques are often blooded in the metal oxide field before finding application in the porous materials area, so more closely monitoring that literature could help expedite new methods in the porous materials field.
In addition, here, we list areas where there are opportunities to enhance the interoperability of techniques and to reduce the time overhead in some rate-limiting steps within the virtuous circle of computation, experiment, and analysis: 1. Reducing the initial cost of robots and platforms. More widespread adoption of automated experimental workflows, such as the "chemputer" approach, [396] to design and print bespoke vessels for the delivery of molecules and solids, and robotic approaches [398] will drive down the economic cost of the physical components. Beyond this initial outlay, there is the cost of developing the software to enable the interoperability of the components, which may be both significant and highly specialized. 2. Harnessing of interdisciplinary skills. Programming of robots and writing codes capable of automatically interpreting the outcomes of experiments, capitalizing on ML approaches and acting upon them to design new experiments, additional characterization, or new virtual screening experiments are currently rare skills. Aside from stepping outside the traditional recruitment and collaboration field to tap into the knowledge of engineers, mathematicians, and computer scientists, there is an impetus to train up physical scientists to be conversant and comfortable with using these new integrated approaches in research. There is a growing need for interdisciplinary methods to be taught at the undergraduate level, in order to enhance students' training and ensure that the state-of-the-art approaches, robotics, ML, etc., as described, are adopted more widely and become ingrained in the design of experiments. 3. Repurposing of synthetic and characterization methods from other classes of materials. For example, physical and chemical vapor deposition (CVD) are very well-established techniques that have been used in materials discovery for decades and have been successfully used for the high throughput phase screening of metal oxides. [399,400]
Figure 17 (caption fragment): LAG is liquid-assisted grinding. Switching from solvothermal to aqueous or LAG syntheses has the strongest effect on reducing the total cost of synthesis, followed by the mass ratio of salt and linker to solvent. Reproduced with permission. [404] Copyright 2017, American Chemical Society.
Figure 18. An example of the assessment of a greater diversity of classes of nanoporous materials, enabled by consistent computational settings. In the search for outstanding materials, the facility to search across completely different chemical compositions and distributions of porosity could lead to important discoveries, especially when combined with ML approaches. The particular databases mined here actually show strong overlap of the properties, which could indicate that looking to other classes of materials (e.g., amorphous materials) could be fruitful for certain applications. Reproduced with permission. [237] Copyright 2015, Royal Society of Chemistry.
Stassen et al. [401] reported how MOFs can be synthesized using CVD, offering new routes to controlling the surface chemistry of MOFs. However, taking inspiration from other fields, CVD offers an intuitive way of exploring, for example, multivariate MOFs on a single sheet, that is, to produce a printable phase diagram. Whilst it is now possible to perform many characterizations (e.g., XRD) in a high throughput fashion, some analysis is still incredibly time consuming and manual, for example, HRTEM.
[402] Nevertheless, disruptive techniques such as machine learning are beginning to reduce the time overhead of this most onerous of techniques. [403] Similarly, strides in automating time-consuming characterization methods such as electron diffraction have started to address these experimental bottlenecks. [191] 4. Incorporation of economic factors and considerations (including potentially upstream costs, sequestration, disposal, or processing of harmful by-products) in the initial synthetic and computational screening process. There are relatively few examples in the literature, but the study by DeSantis et al. (see Figure 17), [404] which focused on identifying factors that could drive down MOF synthesis costs to <10 $ kg−1, provides an excellent example of the potential value of these studies. Similarly, assessing reagent costs as part of the screening process, as described in Martin et al., [385] is clearly a powerful approach toward identifying commercially viable materials and this could be employed more routinely in truly integrated workflows. 5. Coordination between materials disciplines to improve the transferability and consistency of data. Harmonizing of electronic databases allows for scanning of multiple materials types, as depicted in Figure 18, [237] to resolve the issue of inconsistent computational settings between databases. With a sufficiently powerful machine learning approach, structure/energy relationships from one database could be "corrected" to place them within the same potential energy landscape of another, obviating the need to re-evaluate all structures at the same level of theory. 6. Improving the veracity of computer simulations. Density functional theory has advanced tremendously over the last decade or two in terms of accuracy and transferability. There are still problematic cases, such as highly correlated systems and excited states, but the advent of quantum Monte Carlo and potentially quantum computing approaches means the chemical accuracy and range of chemical states in databases will only increase. Ever-growing databases could be used to develop machine learnt potentials, in the spirit of Gaussian approximation potentials, [405] with the ability to describe reaction chemistry, like ReaxFF. [327] Another opportunity is to use ab initio or DFT data to further improve tight binding (such as GFN2-xTB) [406,407] or semi-empirical DFT approaches (such as HSEsol-3c, PBEh-3c, and HF-3c [408]) in order to reduce the mean error associated with these approaches. 7. Machine learning to harvest and collate existing data while rejecting anomalies. It has long been proposed that machine learning methods could be used to extract data from published works and thus combine information from disparate sources to produce a far more comprehensive virtual compendium or lab book. Indeed, Jensen et al. [244] have recently applied this technology to germanosilicates and advocated that the approach could be used for other problems in zeolite synthesis and to gain greater insight into the factors affecting crystal habit. One potential hazard in this approach is the veracity of the data in the literature. Similar approaches have been used in the pharmaceutical industry where it is chastening to learn that "An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in ≈US$28 000 000 000 (US$28B)/year spent on preclinical research that is not reproducible-in the United States alone".
[247] However, we can anticipate that machine learning models could be trained to detect suspect data or to identify substantially outlying data. The emergence of studies such as Moosavi et al. [294] and Porwol et al., [409] which generate valuable data from successful and unsuccessful experiments alike, is encouraging in this respect.
Figure 19. Proposed integrated workflow reproduced from Greenaway et al. [398] The rate-determining steps are easily identified by the wristwatches in the figure. Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [398] Copyright 2018, The Authors, published by Springer Nature.
It is also instructive to look to other fields for inspiration on how to improve HT approaches for zeolites, MOFs, and COFs. Robotic synthesis is playing an increasingly important role in performing HTS. For instance, Greenaway et al. [398] used a combination of computational screening and robotic synthesis to expedite the exploration of 78 precursor combinations, resulting in the identification of 32 new porous organic cage molecules. Figure 19 shows the entire workflow used in this study. Interestingly, consideration is given to the cost of reagents tensioned against the cost of the computation. In this particular study, the computational role varied depending on whether the organic building blocks were commercially available and inexpensive, or required substantial synthetic effort. For time-consuming and expensive synthesis to be justified, a complete assessment of the potential stability and properties of the cage materials was performed to establish the potential high value of the likely products of synthesis. In other cases, where the building blocks were commodity chemicals, the computational screening was sidestepped. It was noted that, in future work, elimination of low value targets would be a desirable aspect of screening, presumably when the compute cost of ranking and screening properties becomes even more tractable through advances in computer software, hardware, and machine learning approaches. [410] Additionally, adding a feedback loop to learn from successful and unsuccessful experiments, [294] as previously discussed in Section 4.3, could greatly accelerate the discovery process. Notwithstanding the opportunities listed above, by harnessing the full gamut of techniques surveyed in this review and emerging technologies, it is hoped that fully integrated, self-learning, and self-guided identification of promising materials for real-world applications will become increasingly tractable, to avoid making poor pore choices.
41,154.4
2020-09-21T00:00:00.000
[ "Materials Science", "Chemistry" ]
Enzymatic Polymerization on DNA Modified Gold Nanowire for Label-Free Detection of Pathogen DNA This paper presents a label-free biosensor for the detection of single-stranded pathogen DNA through the target-enhanced gelation between gold nanowires (AuNW) and the primer DNAs branched on AuNW. The target DNA enables circularization of the linear DNA template, and the primer DNA is elongated continuously via rolling circle amplification. As a result, in the presence of the target DNA, a macroscopic hydrogel was fabricated by the entanglement of the elongated DNA with AuNWs as a scaffold fiber for effective gelation. In contrast, very small separate particles were generated in the absence of the target DNA. This label-free biosensor might be a promising tool for the detection of pathogen DNAs without any devices for further analysis. Moreover, the biosensor based on the weaving of AuNW and DNAs suggests a novel direction for the applications of AuNWs in biological engineering. Introduction Deoxyribonucleic acid (DNA) has significant potential as a multifunctional material, and its programmability through base-pairing [1,2] has enabled the use of DNA as a building block. By taking advantage of DNA as a versatile material, a wide range of applications have been introduced, such as drug delivery [3,4], pathologic diagnosis [5,6], microscale patterning [7,8], arraying nanomaterials [9,10], and construction of structures in various shapes [11][12][13]. Furthermore, the conjugation of DNA with various materials, such as gold [14,15], silver [16,17], and magnetic cores [18], has endowed additional functions to the DNA nanostructures. Among them, gold has been adopted widely for its optical and photothermal properties induced by its localized surface plasmon resonance in a variety of shapes, such as nanoparticles [19,20], nanorods [21], nanoplate films [22], and nanoclusters [23]. To date, however, there are few reports on the use of gold nanowires (AuNWs) in biological engineering. In addition, the majority of studies on AuNWs focused on gold nanoparticles assembled in a line rather than gold nanowires per se [24][25][26]. This paper proposes a label-free biosensor for single-stranded DNA detection induced by the interlacing of AuNW and primer DNA branched on the AuNW. The rolling circle amplification (RCA) technique was used on the branched primer DNA. RCA is an isothermal and enzymatic process, which enables the continuous amplification of DNA sequences [27][28][29]. In the present study, a macroscopic hydrogel was produced via the entanglement of elongated primer DNA and AuNWs in the presence of the target pathogen DNA. To the best of the authors' knowledge, this is the first use of AuNWs as a scaffold fiber for effective gelation. Using this biosensor, the existence of pathogen DNA was distinguishable by the naked eye, which may offer a novel way for point-of-care diagnosis.
Design of AuNW: DNA Based Biosensor To synthesize the AuNW: DNA based biosensor, the target DNA was first designed from influenza A virus DNA. Subsequently, 13 bases from the 5′ end and 14 bases from the 3′ end of the linear DNA were designed to be complementary to the 27 bases of the target DNA, leaving a few bases at both ends. By rational design, the linear DNA can be circularized in the presence of the target DNA by temperature annealing, as shown in Figure 1A. The discontinuous region between the 5′ and 3′ ends of the linear DNA was connected enzymatically by ligation, and closed circular DNA was formed. On the other hand, in the absence of the target DNA, the complete circularization of the linear DNA is impossible, even after ligation. The 5′ end of the primer DNA was modified chemically with an alkane thiol group to immobilize the primer DNA on AuNW, as shown in Figure 1B. After removing the excess primer DNA, the primer DNA immobilized on AuNW was allowed to hybridize with the linear or circular DNA for the initiation of RCA. Moreover, 12 bases (denoted as A) and the other 12 bases (denoted as A′) of the linear DNA were designed to be complementary to each other, which enabled efficient cross-linking. Because the surface of AuNW is covered with cationic cetyltrimethylammonium bromide (CTAB) molecules, it is essential to determine whether the primer DNAs are functionalized on AuNWs by sulfur-gold bonding or by electrostatic interactions arising from the negatively charged nature of DNA. To examine the binding characteristic between the primer DNA and AuNW, the melting temperature between the primer DNAs on the AuNW and the linear DNAs was investigated using a DNA-intercalating fluorescence dye. A sudden decrease in the fluorescence intensity on the melt curve shown in Figure 2 suggests that the primer DNA is immobilized on the AuNW through a sulfur-gold interaction. If the binding between DNA and AuNW were charge-induced and therefore irregular, the programmed assembly between the primer DNA and linear DNA would be inhibited, and randomly denatured DNA would result in a gradual decrease in the fluorescence intensity on the melt curve rather than a sudden decline [30]. To demonstrate the target-enhanced gelation via rolling circle amplification, the previously circularized DNA template was allowed to hybridize with the primer DNA modified on AuNW. As the 3′ end of the target DNA is designed not to hybridize with the linear DNA, the 3′ end of the primer DNA is the only initiation site for phi29 DNA polymerase. As shown in Figure 1B, the DNA strands were elongated continuously via RCA from the previously synthesized circular DNA in the presence of the target DNA. The AuNW is believed to work as a scaffold, and the amplified DNA strands on AuNW function as side chains that facilitate expeditious gelation. In the absence of the target DNA, however, RCA was not performed with the unligated linear DNA. Therefore, the primer DNA could not be elongated. As a result, a macroscopic hydrogel is synthesized only in the presence of the target DNA. Also, the same results were obtained when the reaction was performed with a library of non-complementary DNA, in the presence (see Figure S1) or absence of the target DNA (see Figure S2), which implies the potential for selective detection.
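As a small illustration of the design rule described above (not the sequences used in this study), the sketch below checks whether the two arms of a padlock-style linear template are complementary to a target, assuming the target has been trimmed to its 27-base complementary stretch; the 13-base/14-base arm lengths follow the description above, while the sequences and the five-base filler are hypothetical.

```python
# Minimal design check for a padlock-style linear template (illustrative sequences only).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence written 5'->3'."""
    return seq.translate(COMPLEMENT)[::-1]

def arms_match_target(linear: str, target: str, arm5: int = 13, arm3: int = 14) -> bool:
    """True if, reading 5'->3' across the circularization junction, the 3' arm
    followed by the 5' arm of the linear template is fully complementary
    (antiparallel) to the target, leaving a ligatable nick."""
    probe_junction = linear[-arm3:] + linear[:arm5]
    return reverse_complement(probe_junction) == target[:arm5 + arm3]

# Toy example with a hypothetical 27-nt target; "TTTTT" stands in for the
# non-hybridizing middle of the template.
target = "ATGCGTACGTTAGCCATGGCATTAGCA"
linear = reverse_complement(target[:13]) + "TTTTT" + reverse_complement(target[13:])
print(arms_match_target(linear, target))   # True
```

A check of this kind only verifies base-pairing logic; whether ligation and RCA actually proceed still depends on the annealing and enzymatic conditions described in the experimental section below.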
Characterization of AuNW: DNA Based Biosensor After the RCA process was conducted on the AuNW: DNA based biosensor, the products were analyzed to confirm the potential utilization of the AuNW: DNA based biosensor for pathogen DNA detection. The digital camera image of the RCA product shown in Figure 3A showed that the hydrogel formed in the presence of the target DNA is clearly visible to the naked eye. In contrast, no macroscopic structures were observed in the absence of the target DNA (Figure 3C). After staining the DNA hydrogel with a DNA-specific dye (GelRed), strong fluorescence from the hydrogel also confirmed that the resulting product was made of DNA (Figure 3B). Moreover, the fluorescence microscopic image of the stained hydrogel showed that many thin DNA film-like structures were closely packed and interlaced with each other in the presence of the target DNA (Figure 3B). However, only small features were observed in the absence of the target DNA (Figure 3D). The microscopic structures of the RCA products in the presence of the target DNA were investigated further by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The surface structure of the hydrogel was decorated with porous ball-like particles (Figure 3E). The TEM image shown in Figure 3F demonstrated that DNAs were entangled with the gold nanowires as a result of the crosslinking of the elongated DNA. Preparation of Thiolated Oligonucleotide-Modified Gold Nanowires The thiolated primer DNAs were deprotected by the addition of 0.2 M DTT and 0.18 M phosphate buffer (pH 8.0), and incubated at room temperature for 2 h. The primer DNAs were then washed over a NAP-5 desalting column to remove the DTT. The purified primer DNAs (final concentration of 1.875 μM) were then mixed with AuNW, phosphate (final concentration of 10 mM) and NaCl (1 M). After 20 h of incubation at room temperature, the solution was centrifuged at 13,000 rpm for 20 min, and the supernatant was discarded to remove unconjugated primer DNAs. This washing step was repeated 4 times. Finally, the AuNW-primer DNA conjugates (AuNW:Primer) were redispersed in nuclease-free water. Procedure for the Label-Free Detection of Target DNA To produce the circularized DNA, the linear DNA and target DNA were mixed in nuclease-free water at a final concentration of 7.5 μM each. Temperature annealing was performed for efficient hybridization between the linear DNA and target DNA. The mixture was heated to 95 °C for 2 min, then cooled gradually to 25 °C over a 60 min period. After 20 min of incubation at 25 °C, the annealed DNA was mixed with T4 DNA ligase (3 U·μL−1) and ligase buffer (300 mM Tris-HCl (pH 7.8), 100 mM MgCl2, 100 mM DTT and 10 mM ATP) and incubated overnight at room temperature to close the nick. Rolling circle amplification (RCA) was then performed as follows. First, 20 μL of the closed circular DNA (final concentration of 3 μM) and AuNW:Primer (30 μL) were combined and incubated for 2 h at room temperature for hybridization between the circular DNA and the primer DNA on AuNW. After incubation, unhybridized circular DNA was removed by discarding the supernatant after centrifugation at 13,000 rpm for 20 min. After washing twice, the resulting product was mixed with phi29 DNA polymerase (10 U·μL−1), phi29 reaction buffer (80 mM Tris-HCl (pH 7.5), 100 mM KCl, 20 mM MgCl2, 10 mM (NH4)2SO4 and 8 mM DTT) and dNTPs (2 mM). Finally, the solution was incubated at 30 °C for 20 h.
Figure 1. Schematic illustration of the label-free detection of single-stranded DNA. (A) Scheme of circular DNA preparation in the presence of target DNA; and (B) scheme of hydrogel synthesis with the prepared circular DNA. S indicates the thiol group. The sequences of A and A′ are complementary.
Figure 2. Melt curve of the primer DNA modified on gold nanowires. The temperature was increased gradually by 0.5 °C from 20 to 95 °C.
Figure 3. (A-D) Digital camera images and fluorescence microscopy images of the products of rolling circle amplification in the presence (A,B) and absence (C,D) of the target DNA; (E) SEM and (F) TEM images of the AuNW-DNA hydrogel after rolling circle amplification with the target DNA.
2,289
2015-06-01T00:00:00.000
[ "Biology", "Chemistry", "Engineering", "Materials Science" ]
Hyperspectral Anomaly Detection Based on Low-Rank Representation and Learned Dictionary In this paper, a novel hyperspectral anomaly detector based on low-rank representation (LRR) and learned dictionary (LD) has been proposed. This method assumes that a two-dimensional matrix transformed from a three-dimensional hyperspectral imagery can be decomposed into two parts: a low rank matrix representing the background and a sparse matrix standing for the anomalies. The direct application of LRR model is sensitive to a tradeoff parameter that balances the two parts. To mitigate this problem, a learned dictionary is introduced into the decomposition process. The dictionary is learned from the whole image with a random selection process and therefore can be viewed as the spectra of the background only. It also requires a less computational cost with the learned dictionary. The statistic characteristic of the sparse matrix allows the application of basic anomaly detection method to obtain detection results. Experimental results demonstrate that, compared to other anomaly detection methods, the proposed method based on LRR and LD shows its robustness and has a satisfactory anomaly detection result. Introduction Distinguished from color and multispectral imaging systems, hundreds of narrow contiguous bands about 10 nm wide are obtained in hyperspectral imaging system.With its abundant spectral information, hyperspectral imagery (HSI) has drawn great attention in the field of remote sensing [1][2][3][4].Today most HSI data are acquired from aircraft (e.g., HYDICE, HyMap, etc.), whereas efforts are being conducted to launch new sensors on orbital level (e.g., EnMAP, PRISMA, etc.).Currently, we have Hyperion and CHRIS/PROBA.With the development of HSI sensors, hyperspectral remote sensing images are widely available in various areas. Target detection is one of the most important applications of hyperspectral images.Based on the availability of a prior target information, target detection can be divided into two categories, supervised and unsupervised.The accuracy of supervised target detection methods is highly related to that of the target spectra, which are frequently hard to obtain [5].Therefore, the unsupervised target detection, also referred to as anomaly detection (AD), has experienced a rapid development in the past 20 years [6,7]. 
The goal of hyperspectral anomaly detection is to label the anomalies automatically from the HSI data.The anomalies are always small objects with low probabilities of occurrence and their spectra are significantly different from their neighbors.These two main features are widely utilized for AD.The Reed-Xiaoli (RX) algorithm [8], as the benchmark AD method, assumes that the background follows a multivariate normal distribution.Based on this assumption, the Mahalanobis distance between the spectrum of the pixel under test (PUT) and its background samples is used to retrieve the detection result.Two versions named global RX (GRX) and local RX (LRX), which estimate the global and local background statistics (i.e., mean and covariance matrix), respectively, have been studied.However, the performance of RX is highly related to the accuracy of the estimated covariance matrix of background.Derived from the RX algorithm, many other modified methods have been proposed [9,10].To list, kernel strategy was introduced into the RX method to tackle non-linear AD problem [11,12]; weight RX and a random-selection-based anomaly detector were developed to reduce target contamination problem [13,14]; the effect of windows was also discussed [15,16]; and sub-pixel anomaly detection problem was targeted [17,18].Generally speaking, two major problems exist in the RX and its modified algorithms: (1) in most cases, the normal distribution does not hold in real hyperspectral data; and (2) backgrounds are sometimes contaminated with the signal of anomalies. To avoid obtaining accurate covariance matrix of background, cluster based detector [19], support vector description detector (SVDD) [20,21], graph pixel selection based detector [22], two-dimensional crossing-based anomaly detector (2DCAD) [23], and subspaces based detector [24] were proposed.Meanwhile, sparse representation (SR), first proposed in the field of classification [25,26], was introduced to tackle supervised target detection [27].In the theory of SR, spectrum of PUT can be sparsely represented by an over-complete dictionary consisting of background spectra.Large dissimilarity between the reconstructed residuals corresponding to the target dictionary and background dictionary respectively is obtained for a target sample, and small dissimilarity for a background sample.No explicit assumption on the statistical distribution characteristic of the observed data is required in SR.A collaborative-representation-based detector (CRD) was later proposed [28].Unlike SR, it utilizes neighbors to collaboratively represent the PUT.The effectiveness of sparse-representation-based detector (SRD) and CRD are highly correlated with the used dictionary, and dual-window method is a common way to build the background dictionary.A dictionary chosen by the characteristic of its neighbors was proposed through joint sparse representation [29], and a learned dictionary (LD) using sparse coding was recently applied to represent the spectra of background [30].However, these methods mainly exploit spectral information and have a high false alarm rate under the presence of noise, as well as a low detection rate when the background dictionary is contaminated by anomalies. 
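For reference, the RX statistic underlying the detectors discussed above has a simple closed form. In its global version (written here in the standard textbook form rather than copied from ref. [8]), the anomaly score of a test pixel x is its Mahalanobis distance from the background statistics:

$$\delta_{\mathrm{RX}}(x) = (x - \mu_{b})^{T}\, C_{b}^{-1}\, (x - \mu_{b})$$

where μ_b and C_b are the background mean spectrum and covariance matrix, estimated from the whole image for GRX or from a local window around the pixel under test for LRX; pixels whose score exceeds a threshold are declared anomalous. This is also the basic detector later applied to the sparse component in the method proposed in this paper.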
Recently, a novel technique, low-rank matrix decomposition (LRMD) has emerged as a powerful tool for image analysis, web search and computer vision [31].In the field of hyperspectral remote sensing, LRMD exploits the intrinsic low-rank property of hyperspectral image, and decomposes it into two components: a low-rank clean matrix, and a sparse matrix.The low-rank matrix can be used for denoising [32,33] and recovery [34], and the sparse matrix for anomaly detection [35].A tradeoff parameter is used to balance the two parts in robust principal component analysis (RPCA) based anomaly detector, and the low-rank and sparse matrix detector (LRaSMD) requires initiated rank of the low-rank matrix as well as the sparsity of the sparse matrix [36].However, the results of RPCA and LRaSMD are always sensitive to the initiated tradeoff parameters. The low-rank representation (LRR) model [37] was first introduced to tackle the hyperspectral AD problem [38].Unlike the model of RPCA, the LRR model assumes that the data are drawn from multiple subspaces, which is better suited for HSI due to the mixed nature of real data.In the model of LRR, a dictionary, which linearly spans the data space, is required.In most cases, the whole data matrix itself is used as the dictionary matrix.When the tradeoff parameter is not properly chosen, an unsatisfactory decomposition result is obtained.In this paper, to improve its robustness, we analyze the effect of the dictionary on the LRR model and learn a dictionary from the whole HSI using sparse coding method [39,40] before applying the LRR model.A random selection method is used during the update procedure to mitigate the contaminating problem to get pure background spectra.When using the learned dictionary, the decomposition result will be more robust to the tradeoff parameter.A sparse matrix is obtained after decomposition, and basic anomaly detection method is then applied to retrieve the detection result.Finally, we will compare the proposed anomaly detector based on LRR and LD (LRRaLD) with the benchmark GRX method [8], the state-of-the-art CRD [28], and three other detectors based on LRMD including RPCA [35], LRaSMD [36] and the detector based on low-rank and sparse representation (LRASR) [38] to better illustrate its effectiveness. The contribution of the paper can be mainly described as follows: (1) compared to other AD algorithms, the intrinsic low-rank property of HSI is better exploited with the LRR model; (2) and the problem of sensitivity to parameters exists in detectors based on LRMD method.To mitigate this problem, a learning dictionary standing for the spectra of background is adopted in the LRR model to better separate the sparse anomaly part from the low-rank background part.The adopting of LD makes the proposed method more robust to its parameters and more efficient. The remainder of this paper is organized as follows.In Section 2, basic theory of LRR model and its solver are reviewed.In Section 3, the proposed anomaly detector based on LRR and LD is described in detail.In Section 4, experiments for synthetic and real hyperspectral data sets are conducted.In Section 5, conclusion is drawn. LRR Algorithm In this section, we provide a short review of LRR algorithm and its solver.It is an important technique used in our proposed approach. 
LRR Model Traditional principal component analysis (PCA) is a widely used technique for dimensionality reduction of high dimensional data. It can successfully recover the original data with a linear combination of a few principal components. The residuals are always viewed as small Gaussian noise. However, when the data are corrupted by anomalies with large magnitude, traditional PCA will fail. RPCA was proposed to tackle the above problem of PCA. The RPCA method can be described as follows: a low-rank matrix L is corrupted by a sparse matrix S and they are both unknown; only the observed data X is known; the goal is to recover L and S from the observed data. The optimization problem is:

$$\min_{L,S}\ \operatorname{rank}(L) + \lambda\lVert S\rVert_{0} \quad \text{s.t.}\quad X = L + S \tag{1}$$

where λ is the tradeoff parameter to balance the low-rank and sparse components. However, the above problem is non-convex and NP-hard. To mitigate this difficulty, it is usually relaxed to the following convex problem:

$$\min_{L,S}\ \lVert L\rVert_{*} + \lambda\lVert S\rVert_{1} \quad \text{s.t.}\quad X = L + S \tag{2}$$

where ‖·‖_* denotes the nuclear norm, which is the sum of the singular values, and ‖·‖_1 denotes the l1 norm, which is the sum of the absolute values of the matrix entries. Unlike the RPCA model, which has an underlying assumption that the data are drawn from a single subspace, the LRR model was then proposed to fit the situation in which the data are derived from multiple subspaces [37]. The optimization problem of the LRR model is:

$$\min_{Z,S}\ \lVert Z\rVert_{*} + \lambda\lVert S\rVert_{2,1} \quad \text{s.t.}\quad X = DZ + S \tag{3}$$

where ‖·‖_{2,1} denotes the sum of the l2 norms of the columns and D is the dictionary matrix that linearly spans the data space. By setting D = I, the optimization problem of Formulation (3) falls back to Formulation (2). It can be seen that the LRR model is a generalization of RPCA. The minimizer Z can be called the low-rank representation of X with respect to the dictionary matrix D. As a result, the LRR model handles data drawn from multiple subspaces better than standard RPCA. However, the method of LRR is quite sensitive to the tradeoff parameter λ. This problem will be mitigated and further discussed in Section 3. Solver of the LRR Model In this paper, the convex problem of Formulation (3) is solved with the Augmented Lagrange Multiplier (ALM) method, and we convert the above problem to an equivalent statement as follows [37]:

$$\min_{Z,J,S}\ \lVert J\rVert_{*} + \lambda\lVert S\rVert_{2,1} \quad \text{s.t.}\quad X = DZ + S,\; Z = J \tag{4}$$

Then, the following Lagrange function can be obtained:

$$\mathcal{L} = \lVert J\rVert_{*} + \lambda\lVert S\rVert_{2,1} + \operatorname{tr}\!\left[Y_{1}^{T}(X - DZ - S)\right] + \operatorname{tr}\!\left[Y_{2}^{T}(Z - J)\right] + \frac{\mu}{2}\left(\lVert X - DZ - S\rVert_{F}^{2} + \lVert Z - J\rVert_{F}^{2}\right) \tag{5}$$

where Y1 and Y2 are Lagrange multipliers and μ > 0 is the penalty coefficient. To minimize the above function, inexact ALM can be used [37]. The algorithm of inexact ALM is shown as follows: Algorithm 1. Solving the Problem of Formulation (5) by Inexact ALM. Input: data matrix X, tradeoff parameter λ, dictionary matrix D. Output: sparse matrix S. Initialize: Z = J = S = 0, Y1 = Y2 = 0, μ = 10⁻⁶, μmax = 10⁶, ρ = 1.1, and ε = 10⁻⁸. While not converged, update J, Z, and S in turn by minimizing Equation (5) with the other variables fixed, update the multipliers Y1 and Y2, and increase μ by the factor ρ (up to μmax); a minimal sketch of this loop is given below. Proposed Method In this section, the low-rank property of HSI and the modeling of anomalies and background are analyzed. The dictionary learning method is introduced in the second part of this section. The framework of the proposed method is then described and summarized. Low Rank Property of HSI and Modeling of Anomalies and Background The low-rank property of HSI is based on the linear mixed model (LMM). For a clean two-dimensional HSI, X ∈ R^{mn×p} (m and n are the length and width of the HSI, respectively, and p is the number of spectral bands), a pixel can be represented by a linear combination of a few pure spectra of endmembers. When the HSI is clean and there exist r kinds of endmembers, it can be represented as:

$$X = A\,E^{T} \tag{6}$$

where A ∈ R^{mn×r} is the abundance matrix and E = (e₁, …, e_r) ∈ R^{p×r} refers to the endmember matrix.
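Stepping back to the solver for a moment: since the loop body of Algorithm 1 is only summarized above, the following is a minimal NumPy sketch of one standard set of inexact-ALM update rules for Formulation (4), following the general LRR literature rather than this paper's exact pseudocode. It assumes X and D store spectra as columns (bands × pixels and bands × dictionary atoms, respectively), so that the l2,1 penalty acts on the per-pixel columns of S.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def col_shrink(M, tau):
    """Column-wise l2 shrinkage: prox of tau * l2,1 norm."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    return M * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def lrr_inexact_alm(X, D, lam, mu=1e-6, mu_max=1e6, rho=1.1, eps=1e-8, max_iter=500):
    """Sketch: min ||J||_* + lam*||S||_{2,1}  s.t.  X = D Z + S, Z = J."""
    p, n = X.shape
    k = D.shape[1]
    Z = np.zeros((k, n)); J = np.zeros((k, n)); S = np.zeros((p, n))
    Y1 = np.zeros((p, n)); Y2 = np.zeros((k, n))
    DtD_inv = np.linalg.inv(np.eye(k) + D.T @ D)     # constant across iterations
    for _ in range(max_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)               # nuclear-norm step
        Z = DtD_inv @ (D.T @ (X - S) + J + (D.T @ Y1 - Y2) / mu)
        S = col_shrink(X - D @ Z + Y1 / mu, lam / mu)  # l2,1 step
        R1 = X - D @ Z - S                           # constraint residuals
        R2 = Z - J
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < eps:
            break
    return Z, S
```

The sparse component S returned here is what the basic detector (GRX in this paper) is later applied to. Returning to the linear mixing model: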
It can be found that rank(X) ≤ rank(E) = r (in most cases, r ≪ p). As a result, the data matrix of an HSI can be considered to have an intrinsic low-rank property, and thus LRMD methods can be applied effectively to the data matrix. In the field of hyperspectral remote sensing, the exact definition of anomalies is still unsettled. In this paper, small targets with spectra significantly different from their neighbors are considered as anomalies. Here we assume the first q endmembers are viewed as background and the rest have quite small occupations and can be viewed as anomalies. A pixel x_i ∈ R^p from the HSI can then be represented as:

$$x_{i} = \sum_{j=1}^{r} a_{i,j}\, m_{j}, \qquad \begin{cases} a_{i,q+1} = a_{i,q+2} = \cdots = a_{i,r} = 0, & \text{if } x_{i} \text{ belongs to the background} \\ \text{otherwise}, & \text{if } x_{i} \text{ belongs to an anomaly} \end{cases} \tag{7}$$

where a_{i,j} is the abundance value of x_i corresponding to the endmember m_j. The HSI data matrix X can then be represented as:

$$X = A_{\mathrm{background}}\, E_{\mathrm{background}}^{T} + A_{\mathrm{anomaly}}\, E_{\mathrm{anomaly}}^{T} \tag{8}$$

where A_background = (a₁, …, a_q) ∈ R^{mn×q}, E_background = (e₁, …, e_q) ∈ R^{p×q}, A_anomaly = (a_{q+1}, …, a_r) ∈ R^{mn×(r−q)} and E_anomaly = (e_{q+1}, …, e_r) ∈ R^{p×(r−q)}. The first part of the above formula is a low-rank matrix and, as the anomalies have small occupations, the second part is sparse. Equation (8) has a similar form to the LRR model shown in Equation (3). Therefore, the LRR model is suitable for HSI. The sparse matrix contains most of the information about the anomalies and can be used for anomaly detection. Dictionary Learning Method Based on the theory above, the LRR model can be well applied to hyperspectral images. In previous work [37], the original data matrix X is used as the dictionary D. Satisfactory results are obtained when the tradeoff parameter is well chosen. However, this bears several problems: (1) the size of Z is mn × mn. With a large HSI (large m and n), the minimizer Z will be an extremely large matrix, thus requiring a large memory and time consumption to calculate the decomposition result; and (2) the LRR model is quite sensitive to the tradeoff parameter λ. Although an empirical setting of λ is given, it is still not optimal in most cases. Based on the LMM model of HSI and the theory of LRR, a better dictionary should span the multiple subspaces and be of limited size. A dictionary matrix whose rows stand for the spectra of endmembers is qualified. In order to better separate the sparse matrix from the original data, the spectra of anomalies should be excluded from the dictionary for anomaly detection. In this paper, a dictionary learning method [39] based on a random selection procedure is adopted. The learning model assumes that every sample x satisfies the relation:

$$x = D\alpha + \nu \tag{9}$$

where D is the dictionary matrix, ν is a small white Gaussian residual, and α is the corresponding sparse vector, which can be obtained by a sparse coding method:

$$\alpha = \arg\min_{\alpha}\ \lVert x - D\alpha\rVert_{2}^{2} + \gamma\lVert \alpha\rVert_{1} \tag{10}$$

where γ is a scalar parameter trading off sparsity against approximation accuracy. After obtaining the sparse vectors, the dictionary can be updated with a gradient method:

$$D \leftarrow D + \mu \sum_{i=1}^{M} \left(x_{i} - D\alpha_{i}\right)\alpha_{i}^{T} \tag{11}$$

where μ is the step size and M is the number of hyperspectral samples.
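Equations (9)-(11) can be sketched as follows, using scikit-learn's Lasso for the sparse coding step and a plain gradient step for the dictionary; the step size, sparsity weight, atom count, and the synthetic data in the usage example are assumptions rather than the settings used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def dictionary_update(D, samples, gamma=0.1, step=0.01):
    """One pass of Eqs. (9)-(11): sparse-code each sample, then take a
    gradient step on the dictionary (atoms stored as rows of D)."""
    coder = Lasso(alpha=gamma, fit_intercept=False, max_iter=2000)
    for x in samples:
        coder.fit(D.T, x)                          # Eq. (10): sparse code alpha
        alpha = coder.coef_
        residual = x - alpha @ D                   # Eq. (9): x ~ D^T alpha + residual
        D = D + step * np.outer(alpha, residual)   # Eq. (11): gradient step
    # Normalize atoms to avoid trivial (ever-growing) solutions.
    return D / np.linalg.norm(D, axis=1, keepdims=True)

# Illustrative usage on synthetic data (20 atoms, 126 bands, 500 pixels).
rng = np.random.default_rng(0)
X = rng.random((500, 126))
D = rng.random((20, 126))
D /= np.linalg.norm(D, axis=1, keepdims=True)
for _ in range(30):
    batch = X[rng.choice(len(X), size=100, replace=False)]   # random selection
    D = dictionary_update(D, batch)
```

Looping this update over small, randomly chosen batches of pixels, as in the usage example, is essentially the random-selection strategy introduced next.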
In their traditional form, such learning methods aim to find an over-complete dictionary from the whole image, making the learning process very slow. To overcome this problem, a random selection method is introduced into the iteration. We first initialize a dictionary with normalized random positive values, and then choose M samples from the original HSI. After using sparse coding to obtain the sparse vectors of the M samples, the dictionary is updated using Equation (11) and normalized to avoid trivial solutions. We then randomly choose M samples again to update the dictionary in the next iteration, until it converges. The algorithm iterates these steps (random sampling, sparse coding, dictionary update via Equation (11), and normalization) until convergence. After adding the random selection procedure, the time consumption of the iteration is greatly reduced. Meanwhile, as the anomalies only have a small probability of occurrence, they are less likely to be chosen in the iterations and therefore less likely to be learned well. As a result, there is no need to exclude the anomalies before the learning process. It is notable that, unlike other over-complete learned dictionaries such as KSVD [41], the elements of the learned dictionary based on random selection can well represent the spectra of the major materials, and the accurate spectra of materials with a small occupation, such as anomalies, will not be learned. Framework of the Proposed Method The framework of the proposed method is illustrated in Figure 1. The proposed method is based on the LRR model and the main idea is to separate the anomaly matrix from the original HSI. For a better and more stable result, a learned dictionary based on sparse coding is adopted in the LRR model. The main steps of the proposed method are described as follows: Step 1: Rearrange the 3-dimensional hyperspectral image into a 2-dimensional matrix X. Step 2: Learn a dictionary D which represents the background spectra from the input HSI data using Algorithm 2. Step 3: By adopting the LRR model and the learned dictionary D, decompose X into a low-rank background matrix L and a sparse anomaly matrix S with inexact ALM using Algorithm 1. Step 4: Apply a basic detector to the sparse matrix S to get the detection result. Similar to the RPCA-based AD method [35], the simple and classical GRX method is applied afterwards in our experiments because the proposed method is insensitive to the selection of the basic detector. Experiments and Discussion In this section, we conduct our experiments on three hyperspectral images, one of which is used for simulated experiments to analyze the properties of the proposed method, and the other two are used for real data experiments to demonstrate its effectiveness. It is notable that these three data sets are obtained after preprocessing such as atmospheric correction, and are widely used for the hyperspectral AD problem. The two-dimensional display of the detection result, the receiver operating characteristic (ROC) [42], and the area under the ROC curve (AUC) are the main criteria to evaluate the detection results. Synthetic Data Experiments Synthetic data experiments are conducted in this subsection. The effect of parameter choices on the simulated experiment results of the proposed method is analyzed. We will then compare the proposed LRRaLD with the benchmark GRX method [8], the state-of-the-art CRD [28], and other detectors based on LRMD including RPCA [35], LRaSMD [36] and LRASR [38] to illustrate its efficiency.
Synthetic Data Description A hyperspectral image, acquired by the HyMap airborne hyperspectral imaging sensor [43] is used in this subsection.The image data set, covering one area of Cooke City, MT, USA, was collected on 4 July 2006, with the spatial size of 200 × 800 and 126 spectral bands spanning the wavelength interval of 400-2500 nm.The spectral channels around the wavelengths of 1320-1410 and 1800-1980 nm are the water-absorption bands and have been ignored in this experiment.Each pixel has approximately 3 m of ground resolution.Seven types of target, including four fabric panel targets and three vehicle targets, were deployed in the region of interest and three of them, shown in Table 1, are used as buried anomalies in our simulated experiments.Figure 2 shows spectra of anomalies and background samples.It is notable that among them, F1 and F2 are full size pixels and V1 is a subpixel due to the ground resolution.Two sub-images of size 150 × 150 are cropped as depicted in red square frames in Figure 3.One sub-image has a relatively simple background and another one has a complex background.Effectiveness of the proposed method is evaluated on these two kinds of background.In the following simulated experiments, to better approach the real environment, 25 random locations with different abundance fractions f (ranging from 0.04 to 1) of the specific anomaly spectrum t are buried in the background pixel with spectrum b on both two sub-images respectively, Experiments and Discussion In this section, we conduct our experiments on three hyperspectral images, one of which is used for simulated experiments to analyze the property of the proposed method, and the other two are used for real data experiment to demonstrate its effectiveness.It is notable that these three data sets are obtained after preprocessing such as atmospheric correlation, and widely used for hyperspectral AD problem.Two-Dimensional display of the detection result, the receiver operating characteristic (ROC) [42] and the area under ROC curves (AUC) are the main criteria to evaluate the detection results. Synthetic Data Experiments Synthetic data experiments are conducted in this subsection.The effect of parameter choices on simulated experiment results of the proposed method is analyzed.We will then compare the proposed LRRaLD with the benchmark GRX method [8], the state-of-the-art CRD [28], and other detectors based on LRMD including RPCA [35], LRaSMD [36] and LRASR [38] to illustrate its efficiency. 
Parameter Analysis

The initial choices of parameters are important for many algorithms. For example, in the CRD method, different window sizes are needed to achieve optimal results for different data sets. The rank of the low-rank matrix and the level of sparsity are required in the LRaSMD method, and the best settings of these parameters are often difficult to determine. In RPCA, the tradeoff parameter balances the low-rank matrix and the sparse matrix and is crucial to the success of the decomposition algorithm. In LRASR, the tradeoff parameter and a sparsity constraint on the low-rank representation matrix are required. In the proposed method, the tradeoff parameter λ and the number N of elements of the learned dictionary are also important.
To evaluate the robustness of the proposed LRRaLD method to its parameters, the AUCs are calculated under different λ and N on the two synthetic data sets, using the spectra of F1, F2 and V1 as anomalies, respectively, as shown in Figures 5 and 6. Experiments are repeated 20 times to reduce the effect of the random positions and to obtain the average AUC. As shown in Figures 5 and 6, the average AUC as a function of these two parameters forms a nearly flat surface in all cases except Figure 6c, which corresponds to the anomaly spectrum of an artificially introduced car on the complex background. This may be a result of interference caused by the similar spectra of other cars present in the complicated background. It can also be seen from Figure 6c that, for the complex background, the detection result is better when N is set to a larger value, because more background information can be learned when N is large. Even under such conditions, the results are satisfactory, confirmed by AUC > 0.9. This experiment illustrates the robustness of the proposed method in two aspects: (1) the learned dictionary, even at a small size, contains enough background spectra to yield satisfactory results; and (2) the proposed method is robust to the tradeoff parameter λ.
To better illustrate the effectiveness of LD, we fix N of LD at 30 and compare the results of LRR using different dictionaries: (1) LD; (2) the whole data matrix as the dictionary; and (3) the dictionary used in [38]. The dictionary used in [38] is constructed with the k-means method and aims at representing the background spectra. With the settings recommended in [38], 300 samples from the image are chosen to construct the background dictionary. The AUCs using these dictionaries under different λ are shown in Figures 7 and 8. The original LRR method uses the whole matrix as its dictionary, in which case a large λ leads to a larger rank for the low-rank matrix, while a small λ results in a higher sparsity level for the sparse matrix, both of which can degrade the detection results. This is because its dictionary includes the spectra of anomalies. As a result, the original LRR method is sensitive to the tradeoff parameter λ. In contrast, when adopting LD, which mainly represents the background spectra and mitigates the contamination problem, large residuals of the anomalies can still be preserved in the sparse matrix. Better results are obtained with LRR using LD than with LRR using the dictionary in [38]. This may be because background spectra of higher accuracy are obtained through the learning procedure.
Two different trends of LRR using LD as the dictionary can be seen in Figure 7a,c. This might be because, when using F1 and F2 as anomalies, the contamination problem is not totally eliminated for LD, nor for the dictionary used in [38]. On the contrary, the spectrum of V1, as shown in Figure 2, is the spectrum of a subpixel, which makes it harder to capture in the learning process. As a result, a more stable result is obtained when using V1 as anomalies. Overall, the results in Figures 7 and 8 show that LRR using LD is more robust and has better performance.
Meanwhile, given that LD is of small size, the LRMD procedure of the proposed method requires less computation than the original LRR method. On an 8-core Intel Xeon E5504 with 24 GB of DDR3 RAM, the decomposition and the basic GRX procedures take 26.08 s for the proposed method and 88.34 s for the original LRR method when using F1 as anomalies on the simple background. However, the proposed method requires an additional learning step, whose main cost is obtaining the sparse vectors. By using the MATLAB toolbox SPAMS [44], this computation is greatly reduced, costing 10.87 s to learn a dictionary. The total execution time of the proposed method is 36.95 s, still far less than that of the original LRR method. In later experiments, λ is fixed at 1 and N is fixed at 30 for the proposed LRRaLD method. The execution time of the learning process is also included for the proposed method.
Detection Performance

In this subsection, we compare the proposed method with GRX, CRD, RPCA, LRaSMD and LRASR. The spectra of F1, F2 and V1 are used as anomaly spectra in the simulated experiments, respectively. The optimal parameters used for the other methods are as follows. For the CRD method, we set the size of the inner window at 11 × 11 and the size of the outer window at 15 × 15. For the LRaSMD method, we set the rank of the low-rank matrix at 8 and the sparsity level at 0.3. For the RPCA method, different tradeoff parameters are tried and the optimal results are chosen for each situation, due to the sensitivity of the algorithm. For the LRASR method, the sparsity constraint on the low-rank representation matrix is set at 0.1, and the tradeoff parameter is set at 10 for the simple background and 0.1 for the complex background for its best performance.
The two-dimensional display of the detection results and the corresponding binary results on the simple and complex backgrounds, with F1 randomly implanted as an example, are shown in Figures 9 and 10, respectively. The binary results are obtained with the probability of false alarm (PFA) equal to 10^-3; the more pixels detected in the binary display of the detection result, the better. The corresponding ROC curves are shown in Figures 11 and 12, respectively. As shown, the proposed LRRaLD method detects the most anomalies among the five methods and also has the best ROC curve. CRD and LRASR follow in performance on both the simple and complex backgrounds, which might be the result of the ability of CRD to exploit local information, while LRASR exploits the structure information of the HSI. On the contrary, the results of RPCA and LRaSMD are poor on the complex background used in the synthetic data experiments. Table 2 shows the average detection rates of 20 repeated experiments when the PFA equals 0.01. For the proposed LRRaLD method, the average detection rate using the spectrum of F1 as anomalies is 0.946 under the simple background, which means only one or two anomalous pixels are missed; anomalies with an abundance fraction higher than 0.1 are well detected. The proposed LRRaLD has the highest detection rates and LRASR has the second-best performance. RPCA and LRaSMD have poor results on the complex background, which might be because their models assume a single subspace and cannot handle a complex background. CRD has good results on the complex background, possibly because it utilizes local information; however, it has the largest variance among these five methods due to the effect of the random locations. In general, the results using the spectra of F1 and F2 as anomalies are better than those using the spectrum of V1, possibly because F1 and F2 are spectra of full pixels and can be detected more easily than the sub-pixel spectrum of V1.
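For reference, the ROC/AUC and the detection rate at a fixed false-alarm probability can be computed from a detection map and a ground-truth mask as sketched below. The function names are ours, and the thresholding convention is one reasonable choice rather than necessarily the exact one used in the paper.

```python
import numpy as np

def roc_auc(scores, truth):
    """ROC curve and AUC for a pixel-wise anomaly map.
    scores: flattened detector output; truth: flattened boolean ground truth."""
    order = np.argsort(-scores)                  # descending by score
    t = truth[order].astype(float)
    tp = np.cumsum(t)
    fp = np.cumsum(1.0 - t)
    tpr = tp / max(t.sum(), 1)
    far = fp / max((1.0 - t).sum(), 1)
    auc = np.trapz(tpr, far)
    return far, tpr, auc

def detection_rate_at_pfa(scores, truth, pfa=0.01):
    """Detection rate when the threshold is set so that roughly a `pfa`
    fraction of background pixels exceed it."""
    bg = np.sort(scores[~truth])[::-1]
    thr = bg[int(pfa * bg.size)]
    return np.mean(scores[truth] > thr)
```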
Table 3 lists the execution times of the different methods. The proposed method has a lower execution time than CRD, in which the matrix inversion over the local background requires a heavy computational cost. It is also notable that RPCA is run with C code, which might accelerate it.

Real Data Experiments

In this section, two widely used real HSI data sets with ground truth are applied to evaluate the performance of the proposed method. GRX, CRD, RPCA, LRaSMD and LRASR are used for comparison.

Real Data Sets Description

One real data set was acquired by the Hyperion imaging sensor [45], which has 242 bands with a spectral resolution of 10 nm over 357-2576 nm. The image was collected in 2008, covering an agricultural area of the State of Indiana, USA. After removal of the low-SNR bands and the uncalibrated bands, 149 bands are used. A 150 × 150 sub-image with ground truth of the anomalies is used, where the anomalies come from a storage silo and a roof; these anomalies, especially the storage silo, are not visually distinguishable from the background. The scene and the ground truth of the anomalies are shown in Figure 13.
The other data set was collected by the HYDICE airborne sensor and can be downloaded from the website of the Army GeoSpatial Center (www.agc.army.mil/hypercube/). The spatial resolution is approximately 1 m, and the whole data set has a size of 307 × 307. However, only the upper right of the scene, with a size of 80 × 100 pixels and displayed with a red square frame in Figure 14a, has a definite ground truth for anomaly detection. This cropped area has been widely used for hyperspectral anomaly detection [28,45] and is utilized in our real data experiment. Here, 175 bands remain after removal of the water vapor absorption bands. There are approximately 21 anomalous pixels, representing cars and a roof. The scene and its corresponding ground-truth map of anomalies are illustrated in Figure 14b,c.

Detection Performance

In this subsection, we compare the proposed method with GRX, CRD, RPCA, LRaSMD and LRASR on the real data sets. As mentioned in Section 4.1.2, we fix the tradeoff parameter λ at 1 and the number N of elements of LD at 30 for the proposed LRRaLD method. On the Hyperion data set, the optimal parameters used for the other methods are as follows. For the CRD method, we set the size of the inner window at 11 × 11 and the size of the outer window at 15 × 15. For the LRaSMD method, we set the rank of the low-rank matrix at 8 and the sparsity level at 0.3. For the RPCA method, the tradeoff parameter is set at 0.02. For the LRASR method, we set the sparsity constraint on the low-rank representation at 0.1 and the tradeoff parameter at 5.
On the HYDICE urban data set, the optimal parameters used for the other methods are as follows. For the CRD method, we set the size of the inner window at 7 × 7 and the size of the outer window at 15 × 15, the same optimal parameters as used in [28]. For the RPCA method, the tradeoff parameter is set at 0.015. For the LRaSMD and LRASR methods, the same settings as for the Hyperion data set are adopted.
Figures 15 and 16 show the detection results on the Hyperion and HYDICE data sets, respectively, Figure 17 shows the corresponding ROC curves, and Table 4 lists the AUC and execution time of the different methods. The proposed LRRaLD method outperforms the other methods. On the HYDICE data set, the proposed method has performance similar to the other methods in the low-FAR region. This is mainly because the stripe noise of the HYDICE data set has a relatively high magnitude; the LRMD methods can separate the stripe noise from the original image, but this noise remains in the sparse matrix, which raises the FAR and affects the detection results. Figure 18 shows the statistics of the detection values for the AD algorithms. Each group has a green box representing the anomalous pixels and a blue box representing the background pixels; the upper and lower edges of each box are the 90th and 10th percentiles, respectively. The proposed LRRaLD method has the best performance.
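The separability statistic plotted in Figure 18 is straightforward to reproduce for any detector output; a small sketch follows, with names of our own choosing.

```python
import numpy as np

def separability_boxes(scores, truth):
    """10th/90th-percentile boxes of detector outputs for anomalous and
    background pixels (the statistic shown in Figure 18)."""
    anomalous, background = scores[truth], scores[~truth]
    box = lambda v: (np.percentile(v, 10), np.percentile(v, 90))
    return {"anomaly": box(anomalous), "background": box(background)}
```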
Both the synthetic and the real data experiments illustrate the advantages of the proposed LRRaLD method. To summarize, the proposed LRRaLD method has the following features. (1) Effectiveness: based on LRR, the intrinsic low-rank property of HSI can be well exploited; compared to other AD algorithms, the LRR model fits the LMM better, resulting in a higher detection rate. (2) Robustness: using LD as the dictionary of the LRR algorithm, a better separation of anomaly and background can be achieved, which makes the proposed method more robust to the tradeoff parameter; the experimental results show that the proposed method is more robust to its parameters. (3) Efficiency: with the small size of LD, less computation is required, making the LRR procedure much faster and more efficient; the experimental results show that the computational cost of the proposed method is in a reasonable range.

Conclusions

In this study, a new anomaly detector based on LRR and LD for hyperspectral imagery is proposed. The intrinsic low-rank property of hyperspectral imagery is exploited using the LRR decomposition method. Meanwhile, a dictionary learning method is adopted to obtain a relatively pure background dictionary, which makes the LRR decomposition procedure both more robust to the tradeoff parameter and less expensive computationally. Finally, a basic detector is used to obtain the detection result. The experiments on simulated and real data demonstrate that the proposed method has excellent performance. However, the LRR model only considers anomalies with large magnitude; the influence of Gaussian noise should also be considered, which will be the focus of our future work.
Figure 1. Framework of the proposed method.
Figure 2. Spectra of anomalies and background samples.
Figure 5. The AUC surfaces of the detection results of LRRaLD with different parameters (λ and N) under simple background: (a) F1; (b) F2; and (c) V1.
Figure 9. Detection results under simple background using different methods: (top) 2-D display; and (bottom) binary results when PFA equals 0.001.
Figure 10. Detection results under complex background using different methods: (top) 2-D display; and (bottom) binary results when PFA equals 0.001.
Figure 17. The ROC curves of the AD algorithms for the real data sets: (a) Hyperion data set; and (b) HYDICE data set.
Figure 18. Statistical separability analysis of the AD algorithms: (a) Hyperion data set; and (b) HYDICE data set.
Acknowledgments: The authors would like to thank the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University for providing the Hyperion data set, and the Digital Imaging and Remote Sensing Group, Center for Imaging Science, Rochester Institute of Technology for the HyMap data set used in our experiments. The authors would also like to thank Zebin Wu from the Nanjing University of Science and Technology for sharing the code of LRASR, and the anonymous reviewers for their insightful comments and suggestions. This research was supported by the National Natural Science Foundation of China (Grant No. 61572133) and the Research Fund for the State Key Laboratory of Earth Surface Processes and Resource Ecology under Grant 2015-KF-01.

Table 1. Characteristics of the implanted target spectra in our synthetic data experiments.
Table 3. Execution time of different methods on synthetic data sets (20 repetitions).
Table 4. AUC and execution time of different methods.
14,789.6
2016-03-28T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Two-Dimensional Orthonormal Tree-Structured Haar Transform for Fast Block Matching : The goal of block matching (BM) is to locate small patches of an image that are similar to a given patch or template. This can be done either in the spatial domain or, more efficiently, in a transform domain. Full search (FS) BM is an accurate, but computationally expensive procedure. The recently introduced orthogonal Haar transform (OHT)-based BM method significantly reduces the computational complexity of the FS method. However, it cannot be used in applications where the patch size is not a power of two. In this paper, we generalize OHT-based BM to an arbitrary patch size, introducing a new BM algorithm based on a 2D orthonormal tree-structured Haar transform (OTSHT). Basis images of OHT are uniquely determined from the full balanced binary tree, whereas various OTSHTs can be constructed from any binary tree. The computational complexity of BM depends on the specific design of the OTSHT. We compare BM based on OTSHTs to FS and OHT (for restricted patch sizes) within the framework of image denoising, using WNNM as the denoiser. Experimental results on eight grayscale test images corrupted by additive white Gaussian noise with five noise levels demonstrate that WNNM with OTSHT-based BM outperforms the other methods both computationally and qualitatively.

Introduction

Block matching (BM) is a fundamental method for locating small patches in an image that match a given patch, referred to as a template. It has many practical applications, such as object detection [1], object tracking [2], image registration [3], and image analysis [4,5], to name a few. Block matching requires vast computation due to a large search space involving many potential candidates. The full search (FS) algorithm is generally the most accurate BM, in which the similarity scores of all candidate windows to the template are calculated in a sliding-window manner in the spatial domain. To speed up the matching procedure, various fast algorithms have been proposed. They can be classified into two main categories: full search equivalent and non full search equivalent algorithms. Full search equivalent algorithms accelerate BM by pruning many candidate windows that cannot be the best match; these algorithms guarantee the results of the full search algorithm. Conversely, non full search equivalent algorithms accelerate BM by limiting the scope of the search space or by using approximated patterns; their results may differ from those of the full search algorithm. Many full search equivalent algorithms have been proposed in the literature, see e.g., [6,7]. BM methods can also be categorized into spatial- and transform-based. Among transform-based methods, decompositions by rectangular orthogonal bases, such as orthogonal Haar and Walsh, are the most studied ones [8,9]. As demonstrated in [8], BM in the orthogonal Haar transform (OHT) domain appears to be more efficient than BM based on the Walsh-Hadamard transform (WHT) [9], Gray-Code kernels (GCK) [10], and incremental dissimilarity approximations (IDA) [7]. One of the reasons behind this is the use of the integral image, a technique originally proposed by Crow [11] and broadened later by Viola and Jones [12]. Once the integral image is generated, the sum of pixel intensities over a rectangular region of the image can be obtained by three operations (two subtractions and one addition), regardless of the size of the region.
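Since the integral image is central to the speed of OHT-style matching, a minimal sketch of the summed-area table and the three-operation box sum it enables is given below; the names are ours.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1]: two subtractions and one addition,
    independent of the box size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```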
Thus, as demonstrated in [8], the integral image is a useful tool for calculating OHT coefficients, and OHT is efficient especially when the template size is large. To evaluate the speed-up over FS-equivalent methods, a template of size 2^n × 2^n (n ≥ 4) was considered, with the standard deviation of pixel intensities in the template greater than 45. In [13], it was reported that the algorithm based on OHT was faster than other algorithms, including low resolution pruning (LRP) [14], WHT [9], and the fast Fourier transform (FFT). Despite the above-mentioned benefits of OHT-based BM, it has the following drawback: the block size must be a power of 2, which restricts the applicability of OHT-based BM methods, e.g., in nonlocal image restoration [15][16][17][18][19], in which the patch size is important for the restoration performance. Nonlocal image restoration methods use the fact that there exists a high level of self-similarity (fractal similarity) in natural images, which can be exploited for collaborative processing of similar patches extracted from the image. A non-local image denoising method uses BM to collect similar patches and process them collaboratively, and the denoising performance directly depends on the patches collected in the image. One example of such an application is image denoising [15][16][17], where templates of various sizes (regions centered at each pixel) are used depending on the noise level. For example, non-local means denoising [15] uses a template size of 7 × 7 for a moderate noise level; in the weighted nuclear norm minimization denoising method [16], templates of sizes 6 × 6, 7 × 7, and 8 × 8 are used. In the present paper, we propose a specific design of the orthonormal tree-structured Haar transform (OTSHT) for fast BM with an arbitrary size. The one-dimensional OTSHT [20], proposed by one of the authors, offers freedom of design and meets the requirements of fast BM. We present the mathematical expressions defining the 2D OTSHT, construct several types of two-dimensional OTSHTs including two prime tree structures, and evaluate them as FS-equivalent algorithms in terms of speed and pruning performance. In addition, as a non-FS-equivalent algorithm, we demonstrate the applicability of the proposed OTSHT in state-of-the-art image denoising. The obtained results demonstrate that the new method is faster and even produces slightly better PSNR than methods using FS or OHT. This paper extends the results of the initial study presented in [21,22]. The paper is organized as follows: We present the mathematical expressions and concrete basis images of the OTSHT for BM in Section 2. The fast BM algorithm using OTSHT is described in Section 3. Our evaluations of specific designs of OTSHT for BM are detailed in Section 4. The application to image denoising is demonstrated in Section 5. Finally, in Section 6, we conclude our study.

Basis Images of Two-Dimensional Orthonormal Tree-Structured Haar Transform for Fast Block Matching

In this section, we consider the basis images of the orthonormal Haar transform for fast BM with an arbitrary patch size. To do this, we extend the OTSH transforms introduced in [20] to two dimensions and select two extreme cases of these transforms: one based on the balanced binary tree decomposition and the other on the logarithmic tree decomposition.

Binary Tree and Interval Subdivision

A two-dimensional orthonormal tree-structured Haar transform is designed from an arbitrary binary tree having N leaves and depth d.
In the binary tree, the topmost node is referred to as the root and the bottom nodes are referred to as leaves. Each node is labeled by α. The labeling process starts from the root: the left and right children of the root are labeled 0 and 1, respectively. When a node has two children, the left and right children are labeled by appending 0 and 1, respectively, to the label of the parent node. Let α0 and α1 be the left and right children of node α, respectively, and let ν(α) be the number of leaves under node α. The interval I_α of node α is defined from the structure of the binary tree. The intervals I_root, I_0, and I_1 are defined as

I_root = [0, N), I_0 = [0, ν(0)), I_1 = [ν(0), N).

Otherwise, for I_α = [a, b),

I_α0 = [a, a + ν(α0)), I_α1 = [a + ν(α0), b).

Figure 1 shows the binary tree and the interval splitting. The tree has three leaves and depth two. A circle represents a node; the number above the circle is its label and the number inside the circle is the number of leaves under that node.

Orthonormal Tree-Structured Haar Transform Basis Images

Label β is introduced for the vertical direction, in addition to label α for the horizontal direction. A total of N^2 basis images of size N × N are generated from a binary tree having N leaves. Four functions are used to construct the basis images of OTSHT for BM. The function for the region (I_root × I_root), given in (6), is used once to generate the first basis image. Otherwise, for a region (I_α × I_β), the functions (7)-(9) are used. The intervals of the nodes of focus are used for generating the positive- and negative-valued regions, where the region is decomposed according to the intervals in the horizontal and vertical directions. Figure 2 illustrates the decomposition of a region when the nodes of focus are (α, β). First, the region is divided by ϕ1; next, the positive region, shown in white, is divided by ϕ2; finally, the negative region, shown in black, is divided by ϕ3. This procedure is iterated until no region can be divided further. Positive-valued and negative-valued regions are shown in white and black, respectively. First, the region (I_α × I_β) is vertically divided into the two regions (I_α × I_β0) and (I_α × I_β1), and the value in each region is assigned by (7). Then the positive-valued region (I_α × I_β0) is horizontally divided into the two regions (I_α0 × I_β0) and (I_α1 × I_β0) by (8), while the negative-valued region is divided into the two regions (I_α0 × I_β1) and (I_α1 × I_β1) by (9).

Balanced Binary Tree and Logarithmic Binary Tree

The tree-structured Haar transform has freedom of design. We consider two prime tree structures, the balanced binary tree and the logarithmic binary tree, which are the two extreme cases. The logarithmic binary tree is the special case of the Fibonacci p-tree [20] when p → ∞. Figure 5a shows an example of a logarithmic binary tree of depth 4 having N = 5 leaves, and Figure 5b shows the appearance of the logarithmic binary tree-based (L-) OTSHT basis images generated by (6)-(9). In the set of 25 basis images, there are in total r = 15 rectangles of different sizes. As we have seen, although the number of leaves is the same, different structures are constructed, which leads to different numbers of rectangles. The number of rectangles of different sizes affects the computational complexity.

Relation between OHT and OTSHT

The OHT is the special case of OTSHT in which the tree used for the construction is a full balanced binary tree.
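The labeling and interval machinery above is easy to prototype. The sketch below builds example trees and computes each node's half-open interval from its leaf count; the nested-tuple representation, the tie-breaking in the balanced split, and the side on which the logarithmic tree keeps its single leaf are our assumptions for illustration.

```python
def balanced_tree(n):
    """Balanced binary tree with n leaves (nested tuples; 'leaf' marks a leaf)."""
    if n == 1:
        return 'leaf'
    return (balanced_tree((n + 1) // 2), balanced_tree(n // 2))

def logarithmic_tree(n):
    """Logarithmic tree: one leaf is split off at every level (here on the left)."""
    return 'leaf' if n == 1 else ('leaf', logarithmic_tree(n - 1))

def intervals(tree):
    """Half-open interval [a, b) of every node, keyed by its 0/1 label string."""
    def count(node):
        return 1 if node == 'leaf' else count(node[0]) + count(node[1])

    out = {}
    def walk(node, label, a):
        n = count(node)
        out[label or 'root'] = (a, a + n)
        if node != 'leaf':
            n_left = count(node[0])
            walk(node[0], label + '0', a)            # I_alpha0 = [a, a + nu(alpha0))
            walk(node[1], label + '1', a + n_left)   # I_alpha1 = [a + nu(alpha0), b)
    walk(tree, '', 0)
    return out

# A three-leaf, depth-two tree as in Figure 1 (one possible shape):
print(intervals(('leaf', ('leaf', 'leaf'))))
# {'root': (0, 3), '0': (0, 1), '1': (1, 3), '10': (1, 2), '11': (2, 3)}
```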
Figure 6a,b shows the full binary tree having four leaves and the appearance of generating the OHT basis images, respectively.

Fast Block Matching Algorithm Using Two-Dimensional Orthonormal Tree-Structured Haar Transform

OTSHT can be used both for an FS-equivalent fast BM algorithm and for a non-FS-equivalent one. In both algorithms, the similarity of all candidate patches to the template is calculated by the SSD in the transform domain. Let x_j be the column vector of the j-th window in a proper order. The k-th OTSHT coefficient is

X_j(k) = h_k^T x_j,

where h_k is the column vector of the k-th OTSHT basis image in the same order. In practice, since the nonzero elements of the OTSHT basis images form rectangular regions of constant positive and negative values, each OTSHT coefficient is obtained by just a few operations with the integral image [11,12]; moreover, the strip sum technique further reduces the number of operations [8].

FS-Equivalent Algorithm Using OTSHT

The OTSHT can be used for an FS-equivalent algorithm: the fast FS-equivalent algorithm using OHT [8] is applicable to OTSHT. The number of operations is significantly reduced by the iterative pruning process described below. When an appropriate threshold is used, one may securely reject the windows whose sum of squared differences (SSD) is above the threshold: if

||X_j^K − X_t^K||_2^2 ≥ threshold,

then the j-th window is rejected from the search, where X_j^K and X_t^K are the vectors of the first K OTSHT coefficients, i.e., X_j^K = [X_j(1), X_j(2), ..., X_j(K)]^T, of the j-th window and the template, respectively. Once a window is rejected, neither its OTSHT coefficients nor its SSD is calculated. At each iteration over k, the k-th OTSHT coefficient and the SSDs of the remaining windows are calculated. The iteration is performed until the number of remaining windows is small. Algorithm 1 shows the pseudocode of the FS-equivalent algorithm using OTSHT (the preserved steps):

5: set the k-th OTSHT coefficient of x_t to X_t(k)
6: for each patch x_j in x
7:   if Flg_j == 'true'
8:     set the k-th OTSHT coefficient of x_j to X_j(k)
9:     if ||X_j^K − X_t^K||_2^2 ≥ threshold
10:      Flg_j = 'false'
11:    end
12:  end
13: end
14: if the number of 'true' in Flg is small enough
15:   break
16: end
17: end
18: FS in remaining candidates
Output: estimated window

Non-FS-Equivalent Algorithm Using OTSHT

The OTSHT can also be used for a non-FS-equivalent algorithm. Instead of the iterative pruning process of the FS-equivalent algorithm above, the number of OTSHT basis images is limited to reduce the computational load: the similarity using the first to K-th OTSHT coefficients is calculated at once, where K is chosen by the user. Algorithm 2 shows the pseudocode of the non-FS-equivalent algorithm using OTSHT.

Algorithm 2: non-FS-equivalent BM.
Input: template t of size N × N and image x
1: make basis images
2: make the integral image of x
3: for k = 1:K
4:   set the k-th OTSHT coefficient of x_t to X_t(k)
5:   for each patch x_j in x
6:     set the k-th OTSHT coefficient of x_j to X_j(k)
7:   end
8: end
9: estimated window: j = argmin_j ||X_j^K − X_t^K||_2^2
Output: estimated window

Figure 7a,b shows the number of additions per pixel and the number of memory fetch operations per pixel, respectively, for computing the OTSHT coefficients using the strip sum technique [8] (referred to as (S)) and using the integral image only (referred to as (I)). Compared to the number of operations for OHT (i.e., N = 8 or N = 16), the numbers of additions and memory fetch operations for the B-OTSHT coefficients are not much larger, while those for the L-OTSHT are more than double and increase as N increases.
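The pruning loop of the FS-equivalent algorithm can be prototyped in a transform-agnostic way, as sketched below. This sketch computes each coefficient with a dense dot product for clarity, whereas the paper obtains each coefficient with a handful of integral-image/strip-sum operations; the function name, the stopping size, and the threshold convention are our illustrative choices.

```python
import numpy as np

def prune_then_full_search(image, template, basis, threshold):
    """FS-equivalent matching in a transform domain (sketch of Algorithm 1).
    `basis` is a list of orthonormal N x N basis images. Candidates whose partial
    transform-domain SSD already exceeds `threshold` are rejected, and full search
    runs only on the survivors. For FS-equivalence, `threshold` must be at least
    the spatial SSD of the true best match, so that it is never rejected."""
    image = image.astype(np.float64)
    template = template.astype(np.float64)
    N = template.shape[0]
    H, W = image.shape
    positions = [(r, c) for r in range(H - N + 1) for c in range(W - N + 1)]
    alive = {p: 0.0 for p in positions}            # partial transform-domain SSD
    t_vec = template.ravel()
    for h in basis:
        hk = h.ravel()
        tk = hk @ t_vec
        for (r, c) in list(alive):
            xk = hk @ image[r:r + N, c:c + N].ravel()
            alive[(r, c)] += (xk - tk) ** 2
            if alive[(r, c)] >= threshold:         # securely reject this window
                del alive[(r, c)]
        if len(alive) <= 16:                       # few candidates left: stop pruning
            break
    # full search (spatial SSD) among the remaining candidates
    best = min(alive, key=lambda p: np.sum(
        (image[p[0]:p[0] + N, p[1]:p[1] + N] - template) ** 2))
    return best
```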
With regard to memory usage, when the width and height of an image are J_1 and J_2, respectively, and the r rectangles in the OTSHT basis images have N_h different heights, a memory size of J_1 J_2 N_h is required for the horizontal strip sum technique [8], J_1 J_2 for the integral image, and J_1 J_2 for storing the similarity. Therefore, the strip sum technique requires N_h times more memory. Table 1 summarizes the number of rectangles of different sizes, with different heights and different widths, in the set of N^2 OTSHT basis images.

Experimental Section

In the experiments, the fast BM algorithm using OHT and the fast BM algorithm using OTSHT are simply denoted by OHT and OTSHT, respectively, unless otherwise specified. We evaluate OTSHT in comparison with OHT and FS. All experiments were implemented in MATLAB and performed on a Macintosh with a 4.0 GHz Core i7. Eight test images [23] were used for the evaluation.

Pruning Performance of Different Tree Structures

We evaluated different tree structures for the OTSHT basis images, considering the five example binary trees with N = 9 leaves shown in Figure 8. Table 2 summarizes the number of rectangles of different sizes with different heights. From the trees shown in Figure 8a-e, we construct the OTSHT basis images referred to as B-OTSHT, OTSHT(1), OTSHT(2), LR-OTSHT, and LL-OTSHT, respectively. Figure 9 shows the percentage of remaining windows after pruning, evaluated after each k-th basis image and averaged over 100 templates. In this experiment, the performance of OTSHT(1) was only slightly better than that of B-OTSHT and OTSHT(2), while the performances of LR-OTSHT and LL-OTSHT were not satisfactory.

FS-Equivalent Algorithm

We ran OTSHT, OHT and FS to evaluate the processing time. The template size, N × N, was varied from N = 5 to 15, and 100 templates were chosen every 55 pixels in the raster scan. A balanced binary tree was used for constructing the OTSHT basis images. All of the results are identical to those of FS. Figure 10 shows the mean processing time. The processing time of FS increases linearly as N increases, while the plot for OTSHT is flat; OTSHT is faster than FS when N is greater than or equal to 7. The OHT is slightly faster than OTSHT, but its template size is limited to powers of two. In [8], the speed-up of OHT over FS was reported to be roughly 10 times when N = 16. In our experiment, the speed-up of OHT over FS was about 6 times, because we did not use the particular template of [8], which has a high standard deviation of pixel intensities.

Image Denoising Application

We have compared the OTSHT to FS and to the 8 × 8 OHT [8] within the framework of image denoising, where the denoising performance depends on collecting similar patches. For this purpose, the weighted nuclear norm minimization (WNNM) [16] has been used as the image denoising method. In WNNM, the optimal patch size and other parameters are set depending on the noise level, as shown in Table 3. The noise added to the images was white Gaussian noise with zero mean and standard deviation σ, where σ = 10, 20, 30, 40, and 50. The OTSHT (or OHT) and FS are used as the procedure for collecting similar patches in WNNM, and the resulting methods are referred to as WNNM-K and WNNM-FS, respectively. The pseudo code is shown in Algorithm 3.
From the observations in Sections 4.1 and 4.2, we constructed the OTSHT basis images from the balanced binary tree and used the non FS-equivalent algorithm described in Section 3.2 with K = 2, 4, 8, and 16, because a speed-up over FS cannot be expected when the patch size is small. In Algorithm 3, BM (step 5) collects similar patches into the similar patch group ỹ_t by SSD, the denoised patches x̂_t are aggregated (step 10) to form the clean image x̂^(i) of iteration i, and the output is the clean image x̂^(max i).

First, we compare the OTSHT to FS. Figure 11a shows the mean PSNR of WNNM-FS and WNNM-K, K = 2, 4, 8, and 16, at different noise levels. The PSNR of WNNM-2 was below that of WNNM-FS, but nearly the same when the noise level is low. When K ≥ 4, the PSNR of WNNM-K is almost the same as that of WNNM-FS, and the PSNR of WNNM-16 is slightly higher than that of WNNM-FS. The PSNR of each image is shown in Table 4, with the best PSNR in bold. We observe that there is almost no difference in PSNR between WNNM-K and WNNM-FS, and the former is often even better, although FS is generally considered more accurate than non FS-equivalent algorithms. The reason is that BM in the spatial domain is not reliable for noisy images, since it may match noise patterns and thus decrease the denoising performance. The filtered images produced by these methods are almost indistinguishable.

Figure 11b shows the mean processing time at different noise levels; the number on the y-axis of the bar chart is the number of limited basis images, K. The processing times for BM and for the other modules of the denoising method are shown as blue and yellow bars, respectively. The processing time for BM of WNNM-2 is 46 to 56% of that of WNNM-FS; for WNNM-4, 53 to 63%; and for WNNM-8, 62 to 75%. In addition, the larger the patch size, the more efficient the procedure. The OTSHT thus reduces the processing time while keeping the same PSNR level as FS.

Next, we compare the OTSHT to the 8 × 8 OHT used in WNNM-K (K = 2, 4, 8, and 16). Although the OHT cannot be used in WNNM with the optimal patch size, we force the 8 × 8 OHT into WNNM by fixing the patch size to 8 × 8, in order to evaluate the performance across different patch sizes. Figure 12 shows the PSNR and processing time of the OTSHT and the 8 × 8 OHT; the number on the y-axis of the bar chart is again the number of limited basis images, K, and the processing times for BM and for the other modules are shown as blue and green bars, respectively. We observe that the mean PSNRs of the OTSHT are larger than those of the 8 × 8 OHT, and that the processing time of the other modules is approximately 50 seconds shorter for the OTSHT than for the 8 × 8 OHT. This is due to the fact that the patches collected by the 8 × 8 OHT contain extra regions that are not appropriate for collecting similar patches or for processing in the other modules. The PSNRs of each image are shown in Table 4, where the best PSNR is shown in bold. When σ = 10 and 20, the PSNRs of OTSHT were 0.33 to 0.35 dB higher than those of OHT in WNNM-2 and WNNM-4, and 0.28 to 0.29 dB higher in WNNM-8. When σ > 30, the PSNRs of OTSHT were almost the same as those of OHT in WNNM-2, 4, 8, and 16.

Conclusions

We have considered fast block matching (BM) based on the orthonormal tree-structured Haar transform (OTSHT). We have described how to construct the two-dimensional OTSHT basis images, with freedom in the design of the tree, and how to use them for BM. The OTSHT can be used for both FS-equivalent and non FS-equivalent BM.
In FS-equivalent BM, conventional techniques such as pruning and the strip sum via the integral image are used for speed-up; in non FS-equivalent BM, a limited number of basis images is used. As an FS-equivalent BM, we have evaluated the computational complexity and the pruning performance for different tree structures, and demonstrated that the OTSHT based on the balanced binary tree is more efficient than that based on the logarithmic binary tree with respect to both pruning performance and computational cost. As a non FS-equivalent BM, we have demonstrated the capability of the introduced method in an image denoising application, where an arbitrary template size is used depending on the noise level. In all our experiments, not only the PSNR values but also the visual appearance of the images denoised by WNNM-K and WNNM-FS were extremely close, so we can conclude that the filtered images produced by these methods are almost indistinguishable. Thus, the main advantage of the proposed WNNM-K is that it can effectively substitute for the baseline WNNM (in which FS is used for BM) while providing a significant reduction of its computational time.
5,419.4
2018-11-07T00:00:00.000
[ "Computer Science" ]
Nanovaccine Delivery Approaches and Advanced Delivery Systems for the Prevention of Viral Infections: From Development to Clinical Application

Viral infections causing pandemics and chronic diseases are the main culprits implicated in devastating global clinical and socioeconomic impacts, as clearly manifested during the current COVID-19 pandemic. Immunoprophylaxis via mass immunisation with vaccines has been shown to be an efficient strategy to control such viral infections, with the successful and recently accelerated development of different types of vaccines, thanks to the advanced biotechnological techniques involved in the upstream and downstream processing of these products. However, there is still much work to be done for the improvement of efficacy and safety when it comes to the choice of delivery systems, formulations, dosage form and route of administration, which are not only crucial for immunisation effectiveness, but also for vaccine stability, dose frequency, patient convenience and logistics for mass immunisation. In this review, we discuss the main vaccine delivery systems and associated challenges, as well as the recent success in developing nanomaterials-based and advanced delivery systems to tackle these challenges. Manufacturing and regulatory requirements for the development of these systems for successful clinical and marketing authorisation were also considered. Here, we comprehensively review nanovaccines from development to clinical application, which will be relevant to vaccine developers, regulators, and clinicians.

Introduction

Viral infections can have substantial negative clinical and socioeconomic impact globally [1][2][3]. Recently, the COVID-19 pandemic (caused by the SARS-CoV-2 virus) brought about devastating clinical effects, with more than 186 million confirmed cases globally and 5 million deaths reported by the WHO so far [4]. The socioeconomic impact was as bad: it is estimated that losses to the global economy amounted to £8 trillion over 2020-21 (with a global GDP loss of 6.7%), and will reach £22 trillion over 2020-25 [5,6]. It is also estimated that seasonal influenza can cause 250,000-500,000 annual deaths worldwide [1]. Chronic diseases caused by viral infections can be just as impactful; for instance, the number of known HIV infections since AIDS was first diagnosed is 77.5 million, with 37.7 million total deaths and 1.5 million new infections in 2020 [7]. Another example is hepatitis C, of which 71 million infections were reported worldwide.

Finally, another major challenge in vaccine delivery is related to antigen targeting of relevant tissues such as lymph nodes (LN) and other lymphatic tissues, with abundant immune cell populations [39]. Targeting depends not only on the administration route, but also on the physicochemical properties of the selected antigen delivery system. In particular, particle size, surface charge and chemical modifications of the carriers' surface have been shown to significantly influence their trafficking from the administration site to the relevant tissues and their interaction with relevant immune cell populations, particularly APCs [40][41][42]. Therefore, it is crucial that these aspects are considered in the development of novel vaccines against infectious viruses.

Routes of Vaccine Administration

Vaccines have traditionally been delivered through parenteral routes, particularly through the intramuscular (IM) and subcutaneous (SC) routes (Figure 1).
In the majority of cases, this allows the formation of a local depot at the injection site, from where the antigen and adjuvants (if present) can be drained to the local LN. This process can happen passively or through immune cell capture and transport to the LN, depending on the characteristics of the vaccine formulation. Transdermal (TD) and SC injections have been presented as alternatives to IM immunisation precisely due to the advantages they may present in terms of vaccine drainage to the lymphatics and overall immunogenicity [43]. However, the presence of adjuvants in vaccine formulations has occasionally led to increased levels of local side effects observed following SC and TD administration, maintaining the preference for IM injections over the other parenteral alternatives (Figure 1) [44]. Despite their widespread use, parenteral routes of immunisation present significant limitations, namely, in terms of the financial and human resources required for preparing, administering, and disposing of injectable materials, and the risks associated with needle-stick injuries and sharps waste management. For these reasons, mucosal routes of administration have also been explored for vaccines, particularly considering the fundamental role played by the mucosal-associated lymphatic tissue (MALT) in eliciting mucosal immunity at a local level [45]. The oral and nasal routes have been most widely studied in this regard (Figure 1), with a focus on the development of local immune responses following antigen presentation by APCs such as macrophages and dendritic cells (DCs) to tissue-resident T and B cells. This type of immune response is particularly important, as evidenced through the secretion of antigen-specific immunoglobulin A (IgA) antibodies which are capable of recognising the antigen at the entry site, and which therefore prevent the further spread of the pathogen in the body [46,47]. This level of protection is often difficult to achieve with parenteral immunisation strategies, so it is particularly important for the scientific community to focus efforts on the development of formulations for mucosal vaccine administration.

Regarding the oral route, vaccine developers must consider the harsh gastrointestinal (GI) environment when designing a formulation (Figure 1). Antigen protection against the low gastric pH, high enzymatic presence and significant mucus layer throughout the tract is essential, particularly when developing modern vaccines with peptide, protein or nucleic acid antigens [48]. Oral vaccination has proved to be a successful approach when direct delivery to the site of infection in the GI tract may have a greater impact on eliciting immune responses where required. For instance, the success of the oral polio vaccine (OPV) in reducing infection and the transmission of polio has been attributed to local immune responses in the intestinal mucosa, where poliovirus replicates [49].

An alternative to the GI obstacles would be to focus on the intranasal (IN) route of administration, which also presents great advantages in terms of vaccine delivery, including high vascularisation of the nasal mucosa and rapid absorption of antigens into the systemic circulation (Figure 1) [50]. Also, many pathogens enter through this route to induce life-threatening respiratory diseases, making this region even more important and attractive in terms of developing vaccines against these infectious agents.
For instance, the IN delivery of a coronavirus vaccine may provide stronger mucosal immunity in the nose and lungs, offering protection at the site of entry [51]. Indeed, a chimpanzee adenovirus vectored SARS-CoV-2 vaccine elicited strong humoral and cellular responses in the nasal mucosa in a mouse model [52]. Several clinical trials for IN vaccines are now underway, which will reveal if these responses also translate to humans. Practical advantages of a nasal spray vaccine as a less invasive method of administration have also been shown previously with the influenza FluMist vaccine. Nevertheless, the limitations of the IN administration route, including the rapid mucus clearance leading to a short residence time of the antigens in the nasal mucosa and the size-restricted permeation of antigens and adjuvants across the epithelial barrier, should also be considered when developing vaccine formulations [50]. There is also a pressing need for new adjuvants that can be safely administered through mucosal routes (as opposed to alum, which cannot be used in these approaches) to improve the immune response generated against recombinant subunit antigens. It is worth mentioning the recent interest in the transdermal (TD) route for vaccination purposes (Figure 1). The skin is the largest human organ, with an extensive immune cell population and close access to the bloodstream and lymphatic system, evidencing an enormous potential for targeted vaccine delivery. Given the particular structure and composition of the skin, with its external stratum corneum being practically impermeable to drugs and antigens, the main challenges in developing vaccine formulations for the TD route are related to overcoming this penetration issue. Over the years, researchers have taken various approaches to tackle this challenge, mostly through the use of penetration enhancers or physical processes to disrupt this barrier and allow drug and antigen delivery. Microneedle (MN) arrays, i.e., patches containing a variable number of needle-like projections in various shapes and dimensions (generally below 1 mm in height), have gained attention in the last few decades, also for immunisation purposes (Figure 1) [32,53]. These structures provide a painless alternative to vaccine TD delivery, allowing the interaction of antigens and adjuvants with the dermal immune cell population, and facilitating their access to draining lymphatics, potentially generating local and systemic immune responses through this route. Delivery Systems of Vaccines The appropriate design of a delivery system for vaccines is as crucial as the choice of a pertinent administration route for enhancing immune responses. A well-designed delivery system can significantly improve the bioavailability of viral antigens by ameliorating cellular uptake, conferring metabolic stability and targeting relevant tissues. In this section, various vaccine delivery systems will be discussed, highlighting the pros and cons of each system, with a focus on the recently introduced nanocarriers and new delivery technologies. Viral Vectors Although the idea of using viral vectors for delivering vaccines is not a recent one, the first recombinant viral vector vaccine, developed against Ebola virus, was only approved for medical use in Europe and the US in 2019 [54,55]. 
The first demonstration of a viral vectored vaccine in the early 1980s was a recombinant vaccinia virus (VACV) expressing the hepatitis B surface antigen (HBsAg), which was shown to induce protective immune responses against hepatitis B virus in a nonhuman primate model [56,57]. The technology relies on viral vectors encoding pathogen antigens being delivered to the host, after which the antigens are expressed and an immune response is mounted against the target pathogen (Figure 2). Viral vectors can be either replication-competent or nonreplicating, although the latter generally elicit weaker immune responses. The greatest advantage of viral vectors is their high immunogenicity; however, concerns about the safety of replication-competent vectors have hindered their rapid development. Recently, newer generation single-cycle vectors that amplify antigen genes without the risk of infection are being investigated [58].

Adenoviruses

Adenoviruses are among the most commonly used vectors in trials for vaccine delivery. Belonging to the Adenoviridae family of viruses, they are nonenveloped double-stranded DNA viruses with genomes of approximately 30-40 kb in length (Figure 2). Adenoviruses are widespread across the animal kingdom, and there are currently over 80 human adenovirus (HAdV) types. They are categorised into seven species, A to G, with species C serotype 5 (Ad5) being the most highly prevalent [66]. The tropism of the virus is determined by the targeted host cell receptor, and the numerous types allow for a broad tropism. For instance, species C HAd5 and HAd2 bind to the coxsackie adenovirus receptor (CAR), expressed on endothelial and epithelial cells [67]; species B HAd35 binds to CD46, ubiquitous to many cells [68]; and species B1 HAd3 binds to CD80/CD86 expressed on APCs [69]. In addition, tropism can be altered by modification of the capsid to create chimeric Ad viruses [70]. The vector can be replication-competent or replication-defective, the latter typically by the removal of early transcript 1A (E1A) and E1B, both of which are required for replication. In addition, E3 is often deleted as it is not required for replication in cell culture, and deletion of E4 prevents leaky expression of the early genes [71]. HAdVs are produced at high titres in mammalian cell culture, with E1 proteins provided in trans [72]. Although the vector has a relatively small insert size of 7.5 kb, minimal adenovirus 'gutless' vectors, with most viral genes removed, have also been developed to allow insertions of foreign sequences of up to 38 kb [73]. The viral genome is episomal, but there is some risk of integration as viral replication and transcription occur in the nucleus of the host cell. Numerous viruses are currently undergoing clinical and preclinical trials as vectors for vaccines, including adenoviruses, poxviruses (e.g., Modified vaccinia Ankara, MVA; horsepox virus), lentiviruses (e.g., human immunodeficiency virus, HIV), rhabdoviruses (e.g., vesicular stomatitis virus, VSV; rabies virus), paramyxoviruses (e.g., measles virus, Newcastle disease virus, Sendai virus), flaviviruses (e.g., Yellow Fever virus), and herpesviruses (e.g., cytomegalovirus, CMV). There are currently six viral vector vaccines licensed globally, including four against COVID-19 (among them the adenovirus-vectored Oxford-AstraZeneca vaccine).

Much work in mouse models has shown that HAdVs elicit potent antibody and T cell immune responses [70]; however, the serotype contributes to slight variations in the phenotype and functional properties of memory T cells elicited by the vector. Innate immune responses, including production of pro-inflammatory cytokines and activation of complement, have also been reported [74].
There are currently six viral vector vaccines licensed globally, including four against COVID-19 (adenovirus vectors in Oxford-Astra- Much work in mouse models has shown that HAdVs elicit potent antibody and T cell immune responses [70]; however, the serotype contributes to slight variations in the phenotype and functional properties of memory T cells elicited by the vector. Innate immune responses including production of pro-inflammatory cytokines and activation of complement has also been reported [74]. However, the ubiquitous nature of Ad5 in humans leads to the attenuation of immune responses due to pre-existing immunity, demonstrated in Ad5 vectored HIV vaccine trials [75,76]. A single dose of an Ad5 vectored vaccine against COVID-19 showed a dose-dependent production of neutralising antibodies, and specific T cell responses. However, in keeping with previous observations, in individuals with a high concentration of Ad5-specific antibodies, T cell responses were attenuated, particularly at lower doses of the vaccine [77]. Therefore, adenovirus types that are rarer such as HAd26 and HAd35 have been developed as vectors to combat this. A preclinical study in mice showed that the HAd26 vectored COVID-19 vaccine induced strong antibody and T cell responses [78]. The replicationdeficient Ad26.COV2.S (Janssen, Beerse, Belgium) vaccine has similarly shown robust protection against symptomatic COVID-19 in human trials, with potent neutralising antibodies and induction of T cell responses against multiple SARS-CoV-2 variants of concern [79][80][81]. The immune response elicited seems to depend on the vector type used for the delivery of the vaccine. For instance, a Zika virus vaccine expressing the Zika proteins precursor membrane (prM) and envelope (E) via HAd4 or HAd5 vectors (Ad4-prM-E or Ad5-prM-E) both showed protection against disease in a mouse model. However, the Ad5-prM-E vaccine induced both humoral and T cell immunity, while the Ad4-prM-E elicited only T cell responses [82]. Indeed, administration of the Ad5-prM-E alongside a UV-inactivated HAd4 vector reduced the anti-Zika antibodies, suggesting that the HAd4 capsid could skew the immune profile towards T cell responses [83]. Nonhuman adenoviruses such as bovine adenoviruses (BAdV) and chimpanzee adenoviruses (ChAd) provide an alternative avenue to bypass pre-existing immunity; indeed, the ChAd hypervariable regions of the immunogenic capsid hexon protein were shown to be sufficiently different from HAd5 [84], and pre-existing HAdV antibodies did not cross-react with BAdV-3 vector [85]. BAdV vectors targeting influenza proteins elicited strong humoral and cell-mediated responses in preclinical small animal models [86,87]. Among the nonhuman adenoviruses, ChAd vectors have progressed the furthest in terms of use in humans. An Ebola vaccine delivered via a ChAd3 vector (ChAd3-EZO-Z) showed robust antibody and CD8 T cell responses in two small human trials, with no adverse effects [88,89]. The ChAdOx1 vector, derived from ChAdY25, has been successfully used in the Oxford-AstraZeneca COVID-19 vaccine, with 62% efficacy after two doses [90]. Both antispike neutralising antibodies and immune cell activation against SARS-CoV-2 were measured [91], and a study reported the activation of a diverse T cell receptor (TCR) repertoire against different areas of the spike protein suggesting a robust T cell response [92]. 
A better understanding of vector-induced immunity in simian and alternative HAd vectors is required for the development of new viral vaccines and to evaluate their use in repeated booster doses. Poxviruses Poxviruses are large, enveloped double-stranded DNA viruses, with genomes of approximately 190 kb, and a high capacity of 25 kb for transgene insertion (Figure 2). The most famous, vaccinia virus, used as live vaccine against smallpox, is highly effective at preventing disease. However, vaccinia was also associated with a range of adverse reactions, more so than most other vaccines [93]. The Modified vaccinia Ankara (MVA), a highly attenuated strain with approximately 15% of the genome deleted, has been investigated as a safer and effective vector against many viral diseases [94,95]. MVA is a nonreplicating vector that can be produced at high titres, and although it is generally well tolerated, high doses of vector caused some adverse effects [96]. MVA has been shown to induce potent humoral responses [97], and robust CD8 T cell responses comparable to other vaccinia strains [98]. As many viral genes that usually allow for host immune evasion are deleted in MVA, the virus shows enhanced antigen presentation and immunogenicity, leading to an increase in proinflammatory cytokines and improved migration of monocytes and leukocytes [99,100]. Removal of immunomodulatory genes in MVA can further improve immunogenicity. For instance, deletion of the IL-18 binding protein gene C12L, increased CD8 and CD4 T cell responses to vaccinia epitopes by up to three-fold, with greater protection against vaccinia infection in a mouse model [101]. Recently, the repair of two mutated or missing host range genes (C16L/B22R and C12L) were shown to restore replication of MVA in human cell lines, suggesting that MVA vaccines can also be engineered into replicating vectors [102], which could further improve immune responses. MVA vectors are being developed for influenza and other respiratory viruses, and protection against these viral infections has been demonstrated in preclinical animal models [94]. Additionally, a Phase I/II trial of MVA vectored vaccine targeting influenza HA (rMVA-HA) showed induction of neutralising antibodies and HA-specific T cell responses [103]. Another recombinant MVA vector targeting influenza nucleoprotein and matrix protein 1 (MVA-NP + M1) in individuals over 65 was deemed to be well-tolerated, although the trial was not sufficiently large to determine its efficacy [104,105]. An MVA vectored SARS-CoV spike vaccine elicited neutralising antibody responses in mouse, rabbit, and macaque models [106,107]. Similarly, high humoral responses were observed in mice administered an MVA vectored MERS-CoV spike vaccine [108], and complete seroconversion and MERS-CoV spike-specific T cell responses measured in at least 83% of individuals given the same vaccine in a small Phase 1 clinical trial [109]. Although no MVA vaccines against COVID-19 have yet entered human trials, several studies have shown to induce strong and specific cellular responses against SARS-CoV-2 spike in mice [110][111][112]. An MVA vectored vaccine expressing prefusion stabilised SARS-CoV-2 spike induced robust antibody and CD8 T cell responses and offered protection from lung infection in a macaque model [113]. The development of MVA vectored vaccines soon after the emergence of SARS-CoV, MERS, and SARS-CoV-2 suggests that this platform can be used for rapid response against emerging viruses. 
Vesicular Stomatitis Virus Vesicular stomatitis virus (VSV) is an enveloped single-stranded negative sense RNA virus belonging to the Rhabdoviridae family ( Figure 2). The development of a reverse genetics system in 1995 allowed for the virus to be grown to high titres as well as engineer recombinant VSV (rVSV) to express foreign sequences [114]. The genome size is approximately 11 kb, with a relatively small insert size of 5 kb. It is typically used as an attenuated vector, achieved by deletion of the viral glycoprotein G, mutating the viral matrix protein M, or rearranging the order of viral proteins or insertion of exogenous proteins [115]. The glycoprotein G determines the tropism of the virus, which can be altered by replacing with a transgene. VSV infects livestock, but rarely humans. This implies a low risk of pre-existing immunity; however, antivector immunity was detected in one-third of individuals given the vector, which may cause issues in situations where multiple doses or multiple VSV vaccines are administered [116]. Interestingly, replacing the G protein with a glycoprotein of lymphocytic choriomeningitis (VSV-GP) in a vector expressing ovalbumin (OVA) showed lower neutralising antibody titres compared to a standard VSV-G-OVA vector in mice, with no loss of efficacy upon booster doses [117]. This suggests that altering the vector can help overcome vector-specific immunity. In addition, there have been some concerns of safety due to risk of neurovirulence observed in animal models. For instance, mice infected intranasally with wild-type VSV showed CNS infection via infection of the olfactory neurons [118,119]. However, no neurovirulence was observed in macaques infected intranasally with rVSV, suggesting that no vector-associated pathogenesis occurs in nonhuman primate models [120]. One of the earliest preclinical studies in the 1990s showed that a rVSV vectored influenza vaccine elicited high levels of neutralising antibodies in mice [121]. The first human clinical trial was undertaken nearly two decades later with a rVSV vectored HIV vaccine, in which all vaccinated individuals developed HIV-specific antibodies after two doses, and HIV gag protein-specific T cell responses were detected in more than half of the individuals in the highest dose group [122]. There is currently one licensed rVSV vectored vaccine against Ebola (rVSV-ZEBOV). In a Phase 3 clinical trial in Guinea during the Ebola outbreak in 2014-15, rVSV-ZEBOV showed good efficacy by employing the ring vaccination strategy [123]. The vaccine induced robust humoral responses, while the highest dose also elicited modest T cell responses [124]. An rVSV vectored MERS-CoV spike vaccine showed rapid and potent neutralising antibody responses in a macaque model, although antibody titres declined by six weeks postvaccination [125]. An rVSV vaccine expressing SARS-CoV-2 spike protected against SARS-CoV-2 challenge in a hamster model and reduced viral titre in the lungs and upper respiratory tract [126]. Similarly, a replication competent VSV-SARS-CoV-2 vaccine expressing modified spike protein also showed protection against lung infection in mice, with a high titre of neutralising antibodies. Indeed, these serum antibodies were protective against disease in nonvaccinated mice [127]. VSV vectors have generally been shown to induce strong neutralising antibody responses, but lower CD8 and CD4 T cell immunity [59]; however, whether this is sufficient for protective immunity still needs to be determined. 
Nonviral Vectors As the main aim of a vaccine is to be immunogenic, preferably at low dose and dosing frequency, it is important for a vaccine delivery system to present the viral antigen in an effective and sustained manner to trigger the desired immune response. In essence, nonviral vectors offer a great platform for the development of such effective vaccine delivery systems. Safety and efficacy, protection of antigen from degradation, and potential to act as adjuvants are some of the advantages nonviral vectors present for vaccine delivery [128]. In the last few decades, nanocarriers have been explored as nonviral vectors and as alternatives to conventional vaccines against infectious diseases [129][130][131][132][133][134]. For instance, polymeric and inorganic nanoparticles, dendrimers, liposomes and most recently virosomes, have been used for sustained delivery of viral antigens and adjuvants, protecting viral proteins against degradation, targeting host cells, and promoting the stimulation of immune cells (Figure 3). Beyond their ability as vaccine delivery vehicles, the nanoscale size and ability to target APCs and stimulate different immune cells depending on the biomaterials used in their composition make nonviral vectors suitable as adjuvants, antigenicity enhancers and immunity boosters [135]. The biological properties of nanocarriers, and thus, their interaction with immune cells, is influenced by their physicochemical characteristics including particle size, shape, surface chemistry, hydrophobicity/hydrophilicity and steric effects of particle coating [136]. Engineering nanocarriers with respect to these properties is therefore crucial for their role as vaccine delivery vehicles and as potential vaccine adjuvants [137].
Various types of nonviral nanocarriers including polymeric, lipid-based and inorganic ones have been studied in this regard ( Figure 3). Other advanced delivery systems based on supramolecular hydrogels and microneedles have also been recently introduced as depot formulations for sustained and localised delivery, to enhance and prolong immune responses to vaccines, which are discussed under Section 4.4. Other Advanced Vaccine Delivery Systems. Polymer-Based Systems Both natural and synthetic polymers have been widely used as drug delivery systems thanks to their physicochemical tunability, versatility of molecular design, biocompatibility and biodegradability, making them promising vehicles for the controlled and targeted delivery of antigens [138]. Antigens can either be encapsulated or adsorbed on the surface of polymers. The encapsulation of an antigen protects it from exposure to metabolic enzymes as well as the harsh GI environment, if the oral route is chosen for administration, whereas antigen adsorption avoids exposure to harmful organic solvents or extreme pH during the formulation process [139]. Over the years, polymeric systems such as nanoparticles, polyplexes, dendrimers and nanocapsules have been developed for delivery of vaccines against viruses. Polymeric Nanoparticles (NPs) Polymeric NPs have gained great attention for their applications as vaccine delivery vehicles due to their biocompatibility, biodegradability and ease of preparation [140]. According to the materials used in their composition, polymeric NPs can be divided into natural polymer NPs and synthetic polymer NPs [130,132]. Both types of NPs have been studied over the years as nonviral antigen carriers to deliver a wide range of antigens including hepatitis B virus (HBV) antigen [141,142], influenza virus [143], HIV, hepatitis C, dengue virus [131], and Ebola virus [144]. Chitosan, dextran, hyaluronic acid and beta-glucans are among the most commonly used natural polymers in the development of vaccine delivery systems [145]. These biomaterials are particularly attractive since many of them appear naturally in the structure of some microorganisms, making them easily recognisable by immune cells and therefore increasing the possibility of generating an immune response against the loaded antigen [146,147]. Chitosan is a naturally occurring cationic biopolymer which interacts with the negatively charged cellular membrane of the epithelium [148]. The adsorption of chitosan NPs with the nasal and intestinal mucosa is enhanced, significantly inducing immune responses against these nanocarriers. For example, Prego et al. developed chitosan NPs for the delivery of recombinant HBV antigen [141]. Researchers intramuscularly injected the NPs in mice and observed a 9-fold higher amount of HBV-specific Immunoglobulin G (IgG) antibodies than with the conventional aluminium-adsorbed vaccine. More recently, Cordeiro et al. developed carboxymethyl-β-glucan (CMβG)-chitosan nanoparticles for delivery of OVA as a model antigen [145]. In this study, a single vaccine dose subcutaneously injected in mice induced T cell proliferation and antibody responses comparable to those achieved with alum-adsorbed ovalbumin. Dacoba et al. reported the preparation of NPs by covalently bonding an HIV antigen, tethered via the protease cleavage site peptide PCS5, to chitosan or hyaluronic acid and further associating it with oppositely charged polymers such as dextran sulphate or chitosan and polyinosinic:polycytidylic acid (poly(I:C)) [149]. 
The results showed that all NPs systems elicited high anti-PCS5 antibodies and NPs containing PCS5 conjugated and poly(I:C) induced the strongest activation of antigen-presenting cells. El-Sissi et al. developed chitosan NPs loaded with Rift Valley Fever Virus (RVFV) inactivated antigen and studied the effect of this formulation in the vaccination of Swiss albino mice [150]. The results indicated that antigen-loaded chitosan NPs produced enhanced phagocytic activity of peritoneal macrophages and neutralised antibody levels against RVFV and IgG levels against RVFV nucleoprotein, in comparison with adjuvant-free RVFV inactivated antigen. These are a few examples among many in which NPs based on natural polymers have been used for the delivery of viral antigens, which have been extensively reviewed by other researchers [151][152][153][154]. NPs have also been developed for the delivery of antigenic viral components using synthetic polymers, the most investigated of which include poly (lactic-co-glycolic acid) (PLGA), poly(glycolic acid) (PGA), poly(lactide-co-glycolide) (PLG), poly(lactic acid) (PLA), poly(alkyl cyanoacrylate) (PACA) and polyanhydrides. The properties of polymers vary depending on their composition. For example, PLA produces dense, flexible structure that degrades slowly, while PGA is stiff but degrades rapidly. On the other hand, PLGA has properties in-between those shown by PLA and PGA. Thus, altering the composition or ratio of copolymers used during the NP synthesis process, can enable vaccine release and uptake control [132,155]. For instance, Thomas and coworkers studied mixed systems of synthetic polymers of PLA and PLGA, with various ratios of the two, as a delivery system for HBV surface antigen (HBsAg) through the pulmonary route [142]. The results showed that a higher presence of PLA produced NPs with larger size, which were taken up increasingly by rat alveolar macrophages, leading to an increase in cytokine secretion. In another study, Ross et al. showed that a recombinant H5 hemagglutinin trimer (H53) encapsulated into polyanhydride NPs induced high neutralising antibody titres and enhanced CD4 + T cell recall response in mice, inducing protective immunity against H5N1 influenza [143]. Rietscher and coworkers evaluated the potential of a vaccine delivery system made of hydrophilic polymer polyethylene glycol (PEG)-poly(allyl glycidyl ether) (PAGE)-b-PLGA (PPP) loaded with model antigen OVA [156]. In their in vitro studies, researchers observed a significant enhancement in T cell activation by APCs when antigen was delivered via PPP NPs in comparison with PLGA NPs or free OVA. Further, results showed that the subcutaneous application of PPP-OVA-NPs even without adjuvants induced potent CD8 T cell-mediated OVA-specific cytotoxicity in vivo, as compared to that caused by PLGA-OVA-NPs or OVA alone. Knight et al. demonstrated that a pH-responsive NPs vaccine loaded with OVA and CpG DNA adjuvant increased the magnitude and longevity of pulmonary CD8 + tissue-resident memory T cell response in mice [157]. The pH-responsiveness was given by a diblock copolymer made of hydrophilic copolymer of dimethylaminoethyl methacrylate (DMAEMA) and pyridyl disulphide ethyl methacrylate (PDSMA), and hydrophobic copolymer of propylacrylic acid (PAA), butyl methacrylate (BMA) and DMAEMA. Antigen-loaded NPs enhanced the activation of pulmonary APCs and assisted antigen persistence in the lungs. 
A single IN dose of the NPs vaccine provided protection against respiratory virus in both sublethal (vaccinia) and lethal (influenza) infection models for up to 9 weeks after immunisation. Polyplexes Polyplexes are complexes formed by polymers and nucleic acids [158]. Neutral, cationic and amphiphilic polymers have been used to produce polyplexes for gene delivery applications ( Figure 3A) [159]. Cationic polymers provide better delivery systems due to their easy electrostatic interactions with negatively charged oligonucleotides and cellular membranes. Synthetic cationic polymers such as PLA, poly-L-lysine (PLL), polyetherimide (PEI), poly(amidoamine) (PAMAM), poly(2-dimethylaminoethylmethacrylate) (PDMAEMA), as well as carbohydrate-based natural polymers such as chitosan, have been used to synthesise polyplexes. Moreover, strategies such as PEGylation and functionalisation with targeting ligands are employed to improve their transfection efficiency and circulation times [160]. In one study, Demoulins and coworkers used a polyplex made of PEI or histidylated PEI and a self-amplifying mRNA encoding influenza virus hemagglutinin and nucleocapsid [161]. The polyplex system successfully delivered the mRNA to DCs eliciting both humoral and cellular immune responses and improved the efficacy of mRNA vaccine. However, toxicity remains a challenge with the high molecular weight of PEI and thus alternative systems have been researched [162]. Li et al. studied the ability of two types of cyclodextrin (CD)-PEI polymers, prepared using different ratios of cyclodextrin to PEI, as intranasal mRNA vaccine carriers [163]. The conjugate CD-PEI nanocomplex delivery system was shown to traffic to lymph nodes with higher efficiency and to stimulate potent humoral and cellular immune responses. Polyplexes of polymers such as PDMAEMA have also been demonstrated to improve transfection efficiency of mRNA vaccines [164]. Recently, polyplexes formulations were also explored for challenging viral infections such as HIV-1 infection. Moyo and coworkers used the cationic polymer polyethyleneimine (PEI)-based self-amplifying mRNA vaccine encoding HIV-1 proteins to enhance cellular uptake of mRNA and induce potent T cell responses in BALB/c mice [165]. A single injection induced polyfunctional CD4 + and CD8 + T cell responses that lasted for at least 4 months postadministration and controlled HIV-1. Polymeric Dendrimers Dendrimers are highly branched, three-dimensional, star-shaped polymeric macromolecular structures ( Figure 3A). These are synthesised from a polyfunctional core, e.g., ammonia or ethylenediamine, which dictates the three-dimensional shape of the molecule. Repeat units, such as PAMAM, polyamino acids, polyphenyls, polyporphyrins and polyethers, are added to the core and react with its functional groups. Each layer of the repeat units thus produces branching and increases the number of surface functional units. In the final step, the dendrimer is capped with a layer that provides the desired surface chemical properties of the system. The interior layers are suitable for encapsulation of therapeutic or biomolecule while the exterior layer is made of functional groups which are useful for conjugation of these biomolecules and targeting moieties. Thus, by altering the nature of the core and repeating units, the number of layers, and the composition of the surface layer, it is possible to synthesise dendrimers with desired chemical and biological properties. 
Due to these unique characteristics, this class of polymeric nanomaterial has found applications in drug, gene and vaccine delivery [166]. Dendrimers exhibit efficient immune-stimulating properties, and thus can act as adjuvants and can enhance the efficiency of vaccines [167,168]. In a study, Asgary et al. synthesised a nonlinear globular G2 dendrimer comprising citric acid and polyethylene glycol 600 (PEG-600) and evaluated the adjuvanticity of NPs containing the rabies virus inactivated antigen in a mice model [169]. They observed that dendrimer-based formulations enhanced immune responses, induced high neutralising antibodies against rabies virus, and led to higher survival rate of mice. Chahal and coworkers found that a single dose of dendrimer encapsulated with multiple antigens was able to produce strong antibody and T-cell responses against Ebola virus, H1N1 influenza, and Toxoplasma gondii [170]. Polymeric Nanocapsules (NCs) Nanocapsules (NCs) are composed of an inner lipid core usually stabilised by phospholipids and an external polymeric shell ( Figure 3A). The main advantage of NCs is the opportunity to load hydrophobic adjuvant molecules in the core while antigens are displayed on the surface, associated with the external polymers through different types of interactions [171]. NCs coated with different polymers such as chitosan, inulin, protamine, polyarginine and beta-glucans, have been explored by Alonso and coworkers, with promising results [172][173][174]. In these studies, the authors demonstrated the potential to engineer these NCs for efficient lymphatic targeting, particularly through optimisation of particle size, surface charge and selection of different coating polymers. The results showed that small NCs (below 100 nm) with neutral or positive surface were able to drain efficiently to the closest lymph node following subcutaneous injection to the mice footpad. Additionally, protamine-coated NCs were able to efficiently deliver recombinant hepatitis B antigen to immune cells in mice, leading to a protective humoral response. Lipid-Based Systems A variety of lipid-based systems have been developed as antigen carriers, with particular focus on emulsions of micro-and nano-metric size. In fact, the first adjuvant approved for human vaccines after alum was MF59 ® , an emulsion of squalene oil, Tween ® 80 and Span ® 85 included in Fluad ® , a flu vaccine developed by Novartis [175]. Further research led to the development of other adjuvant emulsions such as AS04, approved for a human papilloma virus (HPV) vaccine, AS03, approved for use in Pandemrix ® during the 2009 H1N1 influenza pandemic until 2015 [176], as well as AS01 and AS02, used in a malaria vaccine that reached clinical development and recommended by WHO for children [177][178][179]. Owing to their excellent surface activity, biocompatibility and biodegradability characteristics, amphiphilic lipids were widely used to develop lipid-based systems such as liposomes, lipid nanoparticles and lipoplexes, which attracted researchers for their application in biomedicine including in vaccine delivery. Liposomes and Lipoplexes Liposomes were the first lipid-based nanocarrier platform to be developed for drug delivery, and one of the most explored vehicles in drug and antigen delivery [180][181][182][183]. Liposomes are self-assembled nanostructures, consisting of unilamellar or multilamellar vesicles composed of amphiphilic lipids and water ( Figure 3B) [184,185]. 
Like polymeric NPs, liposomes are also biocompatible and biodegradable. Moreover, they can incorporate hydrophobic agents within their lamellae and hydrophilic agents in their aqueous core, thanks to their amphiphilic nature. These features provide advantages for these systems as delivery vehicles for drugs, antigenic proteins and peptides. Additionally, particle size and surface charge of the liposome bilayer can be tuned and functionalised with ligands for targeted delivery applications [186,187]. Based on their surface charge, liposomes are divided into cationic, anionic, and neutral. Cationic liposomes are much more efficient than the other types, especially for sustained antigen release, since the positive charge enhances the interaction with the negatively charged cellular membranes [188]. There are several liposomal products that have gained marketing authorisation globally for the treatment of various diseases, including infections and cancer. In addition to delivering antigen, liposomes can act as adjuvants. Recently, a liposomal formulation containing monophosphoryl lipid A (MPLA) and the saponin QS-21 was approved as an adjuvant for a recombinant zoster vaccine [189]. Tokatlian et al. developed a delivery system consisting of synthetic liposomes with a gp140 trimer, BG505 MD39, covalently coupled on its surface, to study the effect of trimer density and vesicle stability on vaccine-induced humoral responses in mice [190]. They observed that immunisation with covalent MD39coupled liposomes, as compared to those with soluble MD39 trimers, led to increased antigen-specific T follicular helper cell responses and significantly higher MD39-specific IgG responses. Two vaccines for the prevention of herpes zoster are currently available, namely, Zostavax (ZVL) and Shingrix (herpes zoster subunit vaccine (HZ/su)). Herpes zoster, also known as shingles, is caused by the reactivation of the varicella-zoster virus (VZV), the same virus that causes varicella (chickenpox). Zostavax (ZVL) is a live, attenuated vaccine, whereas Shingrix ® (herpes zoster subunit vaccine (HZ/su) is an adjuvanted recombinant subunit vaccine [191,192]. ZVL was approved by the Food and Drug Administration (FDA) in May 2006 while HZ/su was approved in October 2017 for the prevention of herpes zoster in individuals 50 years of age and older. Shingrix ® is superior to Zostavax in both safety and efficacy, and is based on a liposome delivery system consisting of 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC)/cholesterol/monophosphoryl Lipid A (MPLA) alongside saponin Quillaja saponaria Molina fraction 21 (QS-21) as an adjuvant and varicella zoster virus (VZV) glycoprotein E (gE) as the antigen. Immunogenicity, efficacy, and safety data indicated HZ/su significantly reduced the risk of developing herpes zoster by more than 90% and thus use of the vaccine is recommended to all immunocompetent patients older than 50 years to prevent herpes zoster. In addition, as a subunit vaccine, it also showed good safety and efficacy in people with immunocompromising diseases, including HIV carriers [193]. Lipoplexes are also lipid-based carrier systems, which involve complexes formed by lipids and nucleic acids ( Figure 3B). Cationic lipids, such as 1,2-di-O-octadecenyl-3trimethylammonium-propane (DOTMA) and 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP), and zwitterionic lipids, such as 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE), have been used for mRNA vaccine delivery. 
Previous studies demonstrate that the physicochemical characteristics and biological activity of lipoplexes can be tuned by changing the lipid components, ratio of cationic lipid to mRNA, and ionic conditions [194,195]. Hattori and coworkers evaluated the efficiency of a lipoplex system consisting of mannosylated liposome/model antigen OVA-encoding pDNA (pCMV-OVA) for gene delivery to DCs [196]. Using in vitro study, they showed that the lipoplex could transfer pCMV-OVA more efficiently than cationic liposomes. Further in vivo study by the authors indicated that the mannosylated lipoplex systems provided enhanced OVA-specific cytotoxic T lymphocyte (CTL) activity than the conventional lipoplex or naked pCMV-OVA. Rhee et al. identified a B cell epitope peptide, from the HA protein of the H5N1 A/Vietnam/1203/2004 strain, which can potently induce production of epitope-specific antibodies. They reported that the immunization with a complex of B cell epitope of HA protein and Lipoplex(O), which is MB-ODN 4531(O), a natural phosphodiester bond CpG-DNA co-encapsulated in a phosphatidyl-b-oleoyl-c-palmitoyl ethanolamine (DOPE):cholesterol hemisuccinate (CHEMS) complex (1:1 ratio), completely protected the mice from the challenge by a lethal dose of recombinant H5N1 virus (rH5N1 virus) [197]. Lipoplexes are still at early stages of research and although promising, more work is required to understand the effect of lipid components and charge on the cellular delivery of nucleic acid-based antigens, and impact of this on immunisation effectiveness. Lipid Nanoparticles (LNPs) LNPs are generally composed of different types of lipids with different functions. Cationic lipids are usually added for mRNA complexation, while ionisable lipids can facilitate in vivo delivery and endosomal escape. Other components such as phospholipids, cholesterol and PEGylated lipids can also be added to contribute to improve NP properties such as stability, tolerability and biodistribution ( Figure 3B) [198][199][200]. Therefore, LNPs have gained interest for delivery of modern vaccines in recent years, particularly considering their potential for improved intracellular delivery [201]. Moderna's mRNA-1273 vaccine and Pfizer-BioNTech BNT162b2 vaccine, which have received Emergency Use Authorisation (EUA) by the MHRA in the UK, the EMA in the EU and the FDA in the US for use in adults to prevent coronavirus disease caused by SARS-CoV-2, are based on this type of NP [202]. In these products, LNPs are composed of an ionisable lipid for mRNA complexation and NPs assembly, a PEGylated lipid to increase NPs circulation time, cholesterol for increased stability and other phospholipids for structural support. In terms of the antigen, in both vaccines the LNPs encapsulate nucleoside-modified mRNA encoding for the spike (S) glycoprotein of SARS-CoV-2 virus [183,203]. This protein is a key component which mediates cell attachment and receptor recognition, allowing viruses to penetrate host cells and cause infection [204]. Phase 3 and 4 clinical trials for both mRNA vaccines have shown high safety, without any significant local or systemic toxicity. A twodose regime demonstrated that both vaccines are more than 94% effective in preventing serious illness [30,31,205]. It is worth noting that although the PEGylated lipid component is important for improving circulation time, it could be implicated in the allergic reactions observed in some people, and hence, similar mRNA vaccines developed in the future should replace PEG [206,207]. 
Inorganic Nanoparticles Inorganic NPs such as gold, iron oxide and silica have been widely explored as nanocarriers for vaccine delivery because of their low toxicity, biocompatibility and chemical stability ( Figure 3C) [132,212,213]. For instance, Chen et al. investigated a vaccine carrier system consisting of gold NPs (AuNPs) conjugated to a synthetic peptide resembling foot-and-mouth disease virus (FMDV) protein [214]. The developed NPs (pFMDV-AuNPs) were evaluated in BALB/c mice, with results showing a three-fold increase in antibody response compared to that of a control system of pFMDV-keyhole limpet hemocyanin (pFMDV-KLH) conjugate. Xu and coworkers prepared surface-engineered Au nanorods as a DNA vaccine adjuvant for HIV treatment, modifying the nanorods with three different molecules: cetyltrimethylammonium bromide (CTAB), poly(diallydimethylammonium chloride) (PDDAC), and polyethyleneimine (PEI) [215]. The results showed that PDDACand PEI-modified Au nanorods significantly enhanced cellular and humoral immunity as well as APC activation and T cell proliferation in vivo, in comparison with naked HIV-1 Env plasmid treatment. Niikura et al. examined the effect on immune response of spherical, rod, and cubic shaped Au NPs coated with West Nile virus envelope protein [216]. Researchers observed that rod-shaped NPs were more efficient in macrophage and DCs uptake than spherical or cubic-shaped NPs. Moreover, both spherical and cubic Au NPs induced higher level of inflammatory cytokines, like TNF-α, IL-6, IL-12, and GM-CSF, while rod-shaped Au NPs induced higher secretion of inflammasome-related cytokines, like IL-1β and IL-18. Tao and co-authors reported a system consisting of the extracellular domain of M2 membrane protein (M2e) immobilised on Au NPs and soluble CpG as an adjuvant. This formulation induced high levels of antibody response and provided complete protection against lethal H1N1 influenza virus in a mouse model [217]. In another study, Wang et al. conjugated recombinant trimetric influenza hemagglutinin on Au NPs, coupled with Toll-like receptor 5 (TLR5) agonist flagellin (FliC) as a particulate adjuvant system [218]. IN immunisation in mice with this formulation enhanced influenza-specific IgA and IgG levels and led to antigen-specific IFN-γ secreting CD4 + T cell proliferation as well as activated CD8 + T cells. Iron oxide nanoparticles (IONPs) are approved by the FDA for theranostic applications and have been investigated in detail in drug delivery, hyperthermia and magnetic resonance imaging (MRI) as contrast agents for imaging [219][220][221]. IONPs have also shown great potential as a vaccine platform against infectious diseases. Using intravenous route of administration into BALB/c mice, Shen et al. showed that systemic exposure to a single dose of iron oxide nanoparticles loaded with OVA led to subsequent antigen-specific immune reactions. Their studies reported serum production of antigen-specific antibodies lessened as demonstrated by reduction in the serum levels of OVA-specific immunoglobins, IgG1 and IgG2a [222]. A mannosylated nano-vaccine composed of IONPs loaded with HBsAg was more potent than commercial alum-based vaccines in the induction of cellular and humoral immune responses as indicated by studies by Rezaei et al. [223]. In another study, Rybka et al. used superparamagnetic IONPs (SPIONs) as the core of HBV capsid protein self-assembled VLPs, which could facilitate vaccine purification in manufacturing and enhance physicochemical stability [224]. 
Using two different SPION cores, functionalised with either dihexadecyl phosphate (DHP) or PEG, the authors observed a high efficiency of VLP assembly, particularly with SPION-DHP. However, this strategy also led to a noticeable decrease in antigenicity in comparison with the original antigen, particularly at higher DHP and PEG concentrations, which requires further research. Silica nanoparticles hold great promise in drug and protein delivery because of their chemical stability, biocompatibility and low toxicity. Moreover, silica NPs can be synthesised in various sizes, shapes and pore diameters. Besides their physicochemical characteristics, these NPs can induce both humoral and cell-mediated immune responses, prompting researchers to investigate their potential as antigen carriers and adjuvants in vaccine delivery [213,225]. Guo et al. investigated the potential of hollow mesoporous silica nanoparticles (HMSNPs) as a vaccine delivery vehicle for the Porcine Circovirus Type 2 ORF2 protein [226]. The researchers studied the in vitro uptake and release profiles of the protein by HMSNPs, as well as the immune response elicited following IM administration of protein-loaded HMSNPs in female BALB/c mice. The results showed that protein-loaded HMSNPs stimulated humoral and cellular immune responses and induced persistent immune responses owing to the release kinetics of the HMSNPs. Braun et al. studied the in vitro loading and release of VIR-576, a derivative of the natural HIV-1 entry inhibitor targeting the viral gp41 fusion peptide, into and from mesoporous silica nanoparticles (MSNs) [227]. They demonstrated high peptide loading in the NPs, which suggested promise of the formulation for local release applications. However, they recommended that further work be carried out to understand the release kinetics under biological conditions for better translation of in vitro results to in vivo settings. N4 Pharma has developed Nuvec ® Si NPs coupled with polyethyleneimine for the delivery of DNA/RNA antigens into cells. In addition to their antigen delivery role, Si NPs have been reported to show an adjuvant effect, generating a T helper 1 (Th1) response and high cellular uptake [228]. Theobald suggested Nuvec ® as a nonviral vaccine delivery vehicle and a safe and effective alternative to lipid NP systems; it has also been explored as a delivery system for a SARS-CoV-2 vaccine [228].

Virosomes and Virus-Like Particles (VLPs)

Virosomes and virus-like particles (VLPs) have attracted the attention of researchers because of their structural and morphological similarity to infectious viruses, as well as their ability to bind and penetrate cells and to stimulate both humoral and cellular immunity. Virosomes are a special type of liposome consisting of unilamellar mono- and bilayered vesicles, to which virus-derived proteins may be attached or inserted [229]. VLPs are composed of self-assembled viral structural proteins [230]; they are empty, multiprotein, nonreplicating and noninfectious structures. Because of the presence of a noninfectious subset of viral components in their structures, VLPs can be considered a type of subunit vaccine. Both virosomes and VLPs are also safe and stable compared to viral vaccines and soluble antigens [231]. However, virosomes are often preferred over VLPs in vaccine delivery: the protein-based structure restricts the mobility of VLPs, while the fluid phospholipid substrate of virosomes can enhance interactions with host cell receptors [232].
Due to the clinical success of these delivery platforms, several VLP and virosome vaccine products have received market authorisation, e.g., for hepatitis A virus (HAV), marketed as Epaxal ® [233,234], hepatitis B virus (HBV), human papilloma virus (HPV) [232] and influenza (Inflexal ® ) [235]. Both Epaxal ® and Inflexal ® were discontinued by Johnson & Johnson in 2011 [236]. Approved HBV vaccines include: (i) Heptavax-B, a plasma-derived vaccine composed of hepatitis B surface antigen VLPs; (ii) Recombivax HB, the first licensed VLP vaccine against HBV, developed by Merck; (iii) Engerix-B, developed by GlaxoSmithKline, which is composed of the viral small envelope protein HBsAg produced in Saccharomyces cerevisiae and presented as particles of around 20 nm in size; and (iv) Sci-B-Vac, which contains three HBV surface antigens (S, pre-S1 and pre-S2). Furthermore, there are currently four approved prophylactic HPV vaccines on the market, namely Gardasil ® , Gardasil-9 ® (a nonavalent HPV VLP vaccine), Cervarix ® and Cecolin ® , all based on the L1 major capsid protein self-assembled into VLPs, leading to strong and specific anti-L1 immune responses [134,237]. Inflexal ® V is an example of an approved virosome-based vaccine, in this case against trivalent influenza virus. This vaccine is formulated with an inactivated form of two A virus strains and one B virus strain with the specific antigens hemagglutinin (HA) and neuraminidase (NA) [235]. The virosome consists of viral lipids, namely phosphatidylcholine, together with the HA and NA glycoproteins [238]. Inflexal ® V has demonstrated an excellent humoral immune response against influenza in both adults and children. Epaxal ® is another clinically available virosomal vaccine, in this case against HAV [234]. Its virosome consists of phosphatidylcholine and phosphatidylethanolamine with viral envelope antigens, including the HA and NA influenza proteins. The virosome surface is decorated with formalin-inactivated HAV, which imparts adjuvant properties to the structure. The inactivated HAV triggers B cell proliferation, while the glycoproteins facilitate virosomal uptake by immunocompetent cells, eliciting both humoral and cell-mediated immunity. Bomsel et al. reported the preparation of virosomes containing HA and NA from inactivated H1N1 influenza virus, to which HIV-1 virulence antigens such as recombinant gp41 and P1 peptides, together with the 3M-052 adjuvant, were added. The gp41 antigen has been shown to aid host cell infection and evoke an immune response, and this formulation led to full protection of immunised monkeys against vaginal challenge with simian HIV [239]. Virosomes have also been explored for SARS-CoV-2 antigen delivery. SARS-CoV-2 is an enveloped spherical particle with a club-shaped spike glycoprotein expressed on the surface [240]. Specific surface antigens and phospholipids of SARS-CoV-2 can thus be used in virosome vaccine production.

Hydrogels

Supramolecular hydrogels represent an important class of soft biomaterials with great potential in a wide variety of pharmaceutical and biomedical applications, including vaccine delivery. Hydrogels are three-dimensional networks of polymeric chains that can retain a large volume of water (>90%) and are composed of either high molecular weight natural biopolymers, such as polysaccharides and proteins, or synthetic polymers and peptides [241,242].
The development of hydrogels with defined material properties can be achieved via molecular assembly of carefully designed individual monomer molecules. The molecular building units undergo spontaneous molecular recognition and organisation into networks of ordered supramolecular structures with well-defined structural properties, which entangle through either physical or chemical cross-linking to form hydrogels (Figure 4A-C). Physical gels are stabilised by a combination of noncovalent intra- and intermolecular interactions [243]. These interactions include hydrogen bonding, electrostatic, hydrophobic and aromatic interactions. On the other hand, chemical gels are stabilised by the formation of covalent bonds such as disulphide bonds (through oxidation of thiol groups), photo- or thermally induced polymerisation, enzyme-catalysed cross-linking, or the reaction between thiols and acrylates or sulphones [244][245][246][247]. Where natural or bioinspired building blocks are used to create the hydrogel network, the material becomes biocompatible and biodegradable, implying low toxicity. Hydrogels are also mucoadhesive, and so can create a localised viral antigen depot after injection or spraying (Figure 4E), providing slow and controlled release of the antigenic cargo; this leads to enhanced activation of APCs and improves both humoral and cellular immune responses over a prolonged period. Hydrogels have been used as vehicles for various viral antigens (Figure 4F) and can be functionalised with immune adjuvants as stimulatory agents to potentiate the immune response towards the delivered viral antigen (Figure 4G).

Peptide Hydrogels

Recently, bioinspired peptide hydrogels have been studied for potential use as vehicles for viral vaccines, thanks to their inherent biocompatibility, biodegradability and mucoadhesion, as well as their viscoelastic and thixotropic properties, implying ease of administration either by injection or by spraying, which can be used for parenteral and mucosal immunisation, respectively (Figure 4A,D) [243,[248][249][250]. Besides acting as a vehicle, self-assembling peptide nanofibres can be functionalised with immune adjuvants, such as immunogenic peptide sequences and protein antigens, to modulate immune responses against the corresponding infectious agent (Figure 4F,G) [251,252]. The vaccination of animal models with peptide-based antigen-bearing β-sheet nanofibres resulted in strong immunogenic responses, with activation of both humoral and cellular immunity without the need for any other adjuvants [253]. Grenfell et al. used the synthetic β-sheet-forming ionic self-complementary peptide (RADA)4, which underwent self-assembly into a nanofibrous hydrogel matrix, as an in vivo depot for the sustained delivery of recombinant HBsAg (rHBsAg) [253].
Slow-release kinetics of the antigen from the hydrogel depot resulted in enhanced activation of APCs with improved humoral and cellular immune responses, leading to prolonged immunogenicity compared to antigens adjuvanted with alum and complete Freund's adjuvant [253]. Likewise, Friedrich et al. used the β-sheet-forming ionic self-complementary peptide (FKFEFKFE), or KFE8, which self-assembles into a nanofibrous, self-supporting hydrogel, as an adjuvant in the development of a vaccine against West Nile Virus (WNV) [254]. KFE8 peptide hydrogels emulsified with EIII, the putative receptor-binding domain of the viral envelope protein, were used for subcutaneous vaccination of mice. This system elicited enhanced antibody responses and significant protection against lethal WNV infection in vivo when compared to EIII delivered with alum as an adjuvant [254]. Peptide hydrogels have also been used as delivery vehicles for viral DNA vaccines. For instance, Tian et al. used the short aromatic peptide Nap-GFFY-NMe, which undergoes alkaline phosphatase-triggered self-assembly into nanofibrous hydrogels, for the encapsulation and delivery of an HIV DNA sequence encoding the gp145 envelope glycoprotein [255]. Enhanced humoral and cellular immune responses were achieved thanks to condensation of the DNA by the left-handed structure of the nanofibres, providing significant protection against degradation, proper transfection and effective gene expression [255]. In a different study, Huang et al. rationally designed the self-assembling peptide sequence FLIVIGSIIGPGGDGPGGD, or H9e, bio-inspired by both an elastic domain of spider silk and a transmembrane domain of the L-type calcium channel in human muscle [256]. This peptide formed hydrogels in the presence of Ca2+ salts and was used as an adjuvant for the killed H1N1 influenza vaccine. The results showed improved immunogenicity compared to other commercial adjuvants, including oil-in-water emulsions [256]. The H9e peptide was further evaluated by Li et al. for use as an adjuvant for modified live vaccines (MLV) of porcine reproductive and respiratory syndrome virus (PRRSV) [257]. Pigs vaccinated with H9e-adjuvanted MLV showed enhanced humoral and cellular immunity, governed by a higher number of T helper/memory cells and increased secretion of IFN-γ, in comparison with H9e-free MLV [257].

Polymeric Hydrogels

Along with peptide hydrogels, polymeric hydrogels have also been introduced as delivery systems for vaccine components (Figure 4B). Roth et al. reported the use of polymer-nanoparticle (PNP) hydrogels as sustained-release delivery systems for both viral antigens and adjuvants [258]. Aqueous solutions of dodecyl-modified hydroxypropylmethylcellulose (HPMC-C12) and poly(ethylene glycol)-b-poly(lactic acid) (PEG-PLA) NPs were used in combination to form PNP hydrogels rapidly upon mixing. This delivery system was evaluated for immunomodulation using OVA protein as a model antigen and Poly(I:C), a Toll-like receptor 3 agonist, as an adjuvant. Compared to bolus administration of the same vaccine in standard phosphate-buffered saline, the prolonged release of vaccine components from the hydrogel matrix showed enhanced humoral immunity, with antigen-specific antibody affinity increased 1000-fold [258]. More recently, Gale et al. reported the use of the aforementioned injectable (HPMC-C12)-(PEG-PLA) PNP hydrogel for the sustained delivery of a vaccine cargo against SARS-CoV-2.
The studied cargo was composed of the receptor-binding domain (RBD) of the SARS-CoV-2 spike protein as the viral antigen, with alum and CpG as adjuvants (Figure 4F). Although RBD is poorly immunogenic, even when used in combination with most common adjuvants, the sustained release of the subunit vaccine from the hydrogel matrix increased the RBD-specific antibody (IgG) titres in comparison with bolus administration [259]. With recent developments in the design of biohybrid materials responsive to clinically approved small-molecule drugs, Gübeli et al. developed a biohybrid hydrogel as a depot system for controlled, drug-induced release of HBsAg vaccines, which has emerged as a potential replacement for the conventional repetitive vaccination strategy [260]. This system was based on eight-arm branched PEG polymer molecules functionalised with the protein Gyrase B (GyrB), where the addition of the antibiotic coumermycin induces dimerisation of GyrB and hence cross-linking of the polymer, leading to hydrogel formation and encapsulation of the vaccine cargo within the gel matrix [260]. Orally administered novobiocin acted as a molecular switch for the hydrogel matrix by competitively replacing coumermycin, unlocking the GyrB dimers, dissolving the hydrogel and releasing the vaccine. This novel depot system elicited an immune response comparable to that of the repetitive dosing regimen [260]. Thermo-responsive polymers that form hydrogels at body temperature have also proved useful for the formulation of vaccine depot gels. An example is the thermosensitive triblock copolymer hydrogel composed of PLGA-PEG-PLGA, which was utilised by Gao et al. to develop a DNA vaccine delivery system for the encapsulation of the recombinant hemagglutinin-neuraminidase plasmid of the avian Newcastle disease virus (NDV) [261]. This triblock copolymer undergoes postinjection hydrogelation triggered by host body temperature, leading to sustained release of the plasmid from the hydrogel matrix. The vaccine triggered strong immune responses, high efficacy, and complete protection against highly virulent strains of NDV [261]. Furthermore, thermo-responsive hydrogels generated from polysaccharide-based polymers have also been used for viral vaccine delivery. Wu et al. utilised the quaternised chitosan derivative N-[(2-hydroxy-3-trimethylammonium) propyl] chitosan chloride (HTCC) in combination with α,β-glycerophosphate (α,β-GP), HTCC/GP, for the development of a thermo-sensitive hydrogel intended for IN delivery of the avian influenza H5N1 virus split antigen [262]. At body temperature, the intranasally administered system showed a sol-gel transition, leading to prolongation of the antigen residence time in the nasal cavity. The enhanced humoral and cellular immune responses and the increased antigen-specific mucosal IgA titres elicited by the adjuvant-free H5N1 hydrogel vaccine, when compared to the naked H5N1 split antigen and an MF59 adjuvant/antigen complex, were attributed to the prolonged release of the antigen and to disorganisation of the ZO-1 protein of the nasal epithelium, resulting in enhanced transepithelial transport of the antigen [262,263]. Moreover, this HTCC/GP thermosensitive hydrogel vaccine delivery system had previously been used for IN vaccination with the adenovirus-based Zaire Ebola virus glycoprotein antigen (Ad-GPZ) [264]. Serum IgG, IgG1 and IgG2a and mucosal IgA antibodies had the highest titres in response to the hydrogel-based vaccine, owing to the prolonged residence time of the antigen in the nasal cavity [264].
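The depot behaviour invoked throughout this subsection can be illustrated with a deliberately simplified, first-order pharmacokinetic sketch: antigen released slowly from a gel persists far longer than the same dose given as a bolus. The model and the rate constants below are illustrative assumptions only, not parameters fitted to any of the cited studies.

```python
import numpy as np

t = np.linspace(0, 21, 500)   # days after administration
k_elim = 1.5                  # 1/day, clearance of free antigen (assumed)
k_rel = 0.2                   # 1/day, first-order release from the hydrogel depot (assumed)

# bolus: the full dose is free immediately and is cleared first-order
bolus = np.exp(-k_elim * t)

# depot: antigen must first be released from the gel, then is cleared (Bateman-type profile)
depot = k_rel / (k_elim - k_rel) * (np.exp(-k_rel * t) - np.exp(-k_elim * t))

# crude measure of exposure duration: last time the free-antigen level exceeds 1% of the dose
print(f"bolus stays above 1% for ~{t[bolus > 0.01][-1]:.1f} days, "
      f"depot for ~{t[depot > 0.01][-1]:.1f} days")
```

Under these assumptions the depot keeps free antigen above the (arbitrary) 1% threshold roughly four times longer than the bolus, which is the qualitative behaviour invoked when prolonged APC activation is attributed to hydrogel depots.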
Microneedles

In the pursuit of innovative administration routes for vaccines, the skin has emerged as an interesting alternative to conventional parenteral routes. This is mainly due to the large surface area of this organ and the easy access to immune cells, which abundantly populate the dermis. For this reason, achieving antigen and adjuvant delivery to this region has increased the possibilities of generating efficient local and systemic immune responses [265]. However, the external surface of the skin (the stratum corneum) is a very strong and impermeable barrier, making it extremely difficult for conventional drug and vaccine formulations to cross it and reach the dermis. This has led to the development of various strategies to disrupt this barrier and access the dermal space, including chemical and physical methods such as the use of penetration-enhancing molecules, iontophoresis, electroporation and microneedle (MN) arrays [266]. This latter strategy has shown particular promise in vaccine delivery, with some prototypes reaching early stages of clinical development. Microneedle arrays are composed of tens to hundreds of needle-shaped projections, usually shorter than 1 mm, in various shapes and geometric arrangements. Over the years, these devices have been manufactured from a variety of materials, including metals, glass, ceramics and polymers, through different methods including injection moulding, solvent casting, laser micromachining, drawing lithography and, more recently, three-dimensional (3D) printing [266,267]. The different types of MN arrays have traditionally been classified as solid, coated, hollow, dissolving and hydrogel-forming. Solid MNs were the first to be developed, aiming at a "poke and patch" approach in which drug permeation from a patch is improved by the transient pores created by the MNs in the skin [268]. From this concept, researchers developed coated and hollow MN arrays, aiming to improve the delivery efficacy of these devices. In the case of coated arrays, the drug or vaccine is coated directly onto a solid MN array and released within the skin upon insertion. On the other hand, hollow MN arrays mimic hypodermic needles by incorporating a channel within the needle shafts for the delivery of liquids to the dermal space [269]. More recently, dissolving and hydrogel-forming polymer-based MN arrays have been developed from biodegradable polymers, yielding self-disposable systems. This presents several advantages, particularly in terms of waste management and reduced risk of needle-stick injuries [270]. Moreover, this strategy also allows the delivery of increased doses of drugs and antigens, either incorporated in the needle shafts or as part of external reservoirs dissolved by the interstitial fluid absorbed upon MN array insertion [271]. Figure 5 summarises the three types of microneedle arrays developed and evaluated in recent years for viral vaccination purposes, described in the following subsections.

Coated MN Arrays

Considering the low antigen doses commonly administered in vaccines, coated MN arrays were initially chosen for this application, with promising results obtained by the Prausnitz group in the scope of influenza immunisation. Using stainless steel MN arrays, the group successfully coated various inactivated influenza virus strains onto these devices, using carboxymethyl cellulose (CMC), Pluronic F-68 and trehalose as additional excipients [272][273][274][275][276].
In general, these studies showed that TD immunisation of mice led to strong humoral and cellular immune responses, providing protection against challenge at least as effectively as IM immunisation with the same antigens. Effective viral clearance from the lungs of mice, as well as induction of memory responses, was also achieved with the developed coated MN array systems. The same coating strategy was later applied by the same group for the TD delivery of a plasmid DNA encoding the hepatitis C virus nonstructural 3/4A protein [277]. In this study, MN-based immunisation effectively elicited specific cytotoxic T cell responses in mice, at levels similar to those generated following gene gun-based cutaneous administration.

Figure 5. Schematic representation of the three main types of microneedle arrays developed for vaccine delivery. Coated microneedle arrays (A) are prepared using a solid array base, usually metallic, which is coated with a dissolving formulation containing the antigen(s) and adjuvant(s). Alternatively, dissolving formulations (B) can be used to manufacture the entire array, leading to vaccine delivery upon skin insertion of these self-disposable devices. Finally, sustained-release formulations (C) have been used to produce implantable microneedle tips that are left in the skin after insertion, upon removal of a separate baseplate.

Stainless steel MN arrays were more recently used by Seok et al., who used a polyvinylpyrrolidone (PVP) coating solution containing trehalose to deliver polyplexes containing PLGA nanoparticles, polyethyleneimine and a DNA H1N1 influenza vaccine [278]. Despite achieving enhanced IgG-based immune responses with the polyplex-coated MN arrays in comparison with pDNA-coated ones, the expression level of exogenous genes was low, resulting in low immunogenicity of the vaccine prototype. On the other hand, the Kendall group developed a different coated MN device for vaccination purposes, obtaining similarly promising results with various viral vaccines.
The device, named Nanopatch™, consists of a densely packed array of very short silicon MNs (100 µm in length) and was successfully coated by the researchers with a commercial seasonal trivalent influenza vaccine (Fluvax ® 2008) [279]. Applying two devices to each mouse, the authors observed a 100-fold dose reduction in comparison with IM immunisation with the same vaccine, leading to high and long-lasting antibody responses. This approach was further expanded to other viral vaccines in different presentations, from HPV virus-like particles [280] to antigen-encoding DNA targeting West Nile virus, Chikungunya virus [281] and herpes simplex virus [282,283]. In further studies, the authors optimised the formulation to achieve higher antigen delivery [284], to include an adjuvant and achieve synergistic improvements in the immune response [285], and to assess vaccine kinetics to peak serum antibody levels in comparison with IM injection [286]. Another type of coated MN array for immunisation is that made of polylactic acid (PLA). Nguyen et al. described the coating of these arrays with HBsAg in a CMC gel solution, with or without trehalose as a stabiliser [287]. Mice immunised with two doses of the MN prototype developed higher antibody responses than those receiving the same antigen through the IM route, with a Th2-biased response. Moreover, the inclusion of trehalose in the formulation increased antigen stability at 40 °C for up to 7 days and after 10 freeze-thaw cycles. A similar strategy was recently described by Choi et al., i.e., coating a live smallpox vaccine in a PVA and trehalose solution onto PLA MN arrays [288]. This approach not only provided improved vaccine stability during storage at −20 °C, but also led to increasing antibody titres up to 12 weeks postimmunisation. On the other hand, Uppu et al. described the application of coated PLA MN arrays in an immunisation strategy against dengue virus, through layer-by-layer coating with different polymers and adjuvants [289]. The results showed vaccine uptake by immune cells in both mouse and human skin, with antigen release kinetically controlled by the degradation of the polymers used in the layer-by-layer coating. Finally, Jeong et al. proposed an innovative device with two semicircles of PLA MNs independently coated with two different influenza vaccines in a CMC and trehalose solution [290]. With this approach, the authors achieved immunisation efficacy equivalent to that of separately administered coated MN vaccines and to a mixture of both vaccines coated onto a single MN array. Additionally, mouse survival after viral challenge was equivalent or higher in the group immunised with the compartmental MN array in comparison with mice receiving the vaccine mix coated onto a single MN array.

Dissolving MN Arrays

Despite the success of coated MN-based approaches in immunisation, other strategies have been developed to overcome the limitations of potentially reusable devices, including the risk of needle-stick injuries and the need for appropriate disposal of the solid MN arrays after use. Polymeric MN arrays are particularly well suited for this purpose, as they can be manufactured using biodegradable polymers, rendering the devices self-disposable [270]. Dissolving MN arrays have shown particular promise in the vaccine delivery field, allowing the incorporation of the vaccine antigen and adjuvant within the MN matrix, fabricated from fast-dissolving polymers.
Upon skin insertion, these MN arrays come into contact with interstitial fluid and quickly dissolve, releasing the vaccine in the epidermis and dermis, where it can access the abundant resident immune cell populations. In the development of dissolving MN vaccine delivery systems, it is important to consider not only manufacturing and scale-up aspects, but also the various factors that affect the immunogenicity and efficacy of these approaches, including polymer selection, formulation pH, array geometry and needle density [291]. In 2010, the Prausnitz group reported for the first time the use of dissolving MN arrays for immunisation against influenza [292]. Here, the authors reported the fabrication of PVP MN arrays encapsulating a freeze-dried form of an inactivated influenza virus vaccine. These MNs dissolved quickly in mouse skin, delivering more than 80% of the antigen within 15 min. Moreover, single-dose immunisation of mice with dissolving MN arrays induced strong humoral and cellular immune responses, at levels at least comparable to those achieved with IM immunisation, leading to effective protection against lethal viral challenge. In the same year, the Kendall group also published their first report of dissolving MNs for influenza vaccine delivery, using the previously described Nanopatch™ technology [293]. These multilayered MNs, composed of CMC and loaded with the commercial influenza vaccine Fluvax ® 2008, were able to elicit potent antibody responses which persisted over time, a sign of efficient memory induction. After these initial proof-of-concept studies, numerous other publications reported the development, manufacturing and evaluation of dissolving MNs for vaccination [270]. In terms of viral vaccines, influenza has been the main focus of attention. In 2012, Matsuo et al. reported the development of hyaluronan-based dissolving MN arrays for the delivery of various antigens, including hemagglutinin specific to three influenza strains [294]. The results showed that transdermal immunisation of mice generated strong and long-lasting antibody responses, comparable to those achieved with IM injection and higher than those obtained by ID or IN immunisation, regardless of the presence of adjuvants in the formulations given through these other administration routes. Moreover, the MN-based immunisation strategy also provided protection against challenge, similar to that achieved through IM and IN vaccination. Similar results were obtained by Kommareddy et al., who used CMC-based MN arrays and monovalent H1N1 or trivalent influenza antigens [295]. The authors further demonstrated that a prime-boost TD immunisation regimen could generate antibody responses stronger than those obtained with IM injection. Research in this field continued, with various authors demonstrating the efficacy of polymeric MN arrays for influenza immunisation, particularly using dextran [296], gelatine [297], polyvinyl alcohol (PVA) [298], hydroxyethyl starch [299] and CMC [300,301]. More recently, Vassilieva et al. additionally demonstrated the potential of dissolving MN arrays to codeliver influenza antigens and adjuvants such as Quil-A saponin or cGAMP, with promising results particularly for older populations [302]. Similarly, Wang et al. reported the efficacy of MN-delivered vaccine nanoparticles containing the influenza matrix protein 2 (M2) ectodomain (M2e), neuraminidase and the adjuvant flagellin [303].
The results evidenced strong and protective humoral and cellular immune responses against homologous and heterosubtypic influenza viruses with this approach, paving the way to a universal influenza vaccine. Some of these approaches reached clinical development with promising results. Hirobe et al. developed MN arrays composed of hyaluronan, dextran and povidone to deliver trivalent hemagglutinin antigens transdermally [304]. The MN arrays were administered twice to healthy men (20 to 49 years old) and elicited an effective immune response at half the dose required for SC administration, without generating any noticeable systemic adverse effects. More recently, Rouphael et al. reported the results of a Phase 1 trial on the safety, immunogenicity and acceptability of gelatine MN arrays loaded with hemagglutinin antigens against three influenza strains (H1N1, H3N2 and B) [305]. In this work, TD immunisation with dissolving MN arrays led to similar antibody titres as those observed with IM immunisation, regardless of whether the MNs were applied by a healthcare professional or self-applied by the participants. Importantly, lower pain scores were reported by the participants in comparison with IM injection, and a general preference for TD vaccination was registered in this study. The authors also recently published an additional analysis of the results obtained in this study, particularly in terms of the mechanisms behind the immune response observed in the different study groups [306]. The results showed that hemagglutinin inhibition titres and antibody avidity were similar between TD and IM immunisation, despite the lower antigen dose in the MN array group. MNs also induced higher neuraminidase inhibition titres and T follicular helper cell levels, confirming an overall response that was at least equal to IM vaccination. Despite the main focus on influenza, other efforts have looked at dissolving MN arrays for vaccines against polio, measles, HIV, and other viruses. For example, the Prausnitz group expanded the evaluation of PVA-based MN arrays to the delivery of a rabies DNA vaccine for dogs [307] and of a Zika virus inactivated particle [308]. In both studies, immunisation with dissolving MN arrays elicited strong humoral and cellular immune responses, at least comparable to those obtained with IM injection, with low antigen doses. Moreover, in the case of the Zika virus vaccination, the MN-based approach led to cross-protection against different Zika virus strains and also dengue virus serotypes, efficiently controlling viral titres and inflammatory reactions. PVA-based MN arrays were also evaluated by Donadei et al. for the delivery of an inactivated polio vaccine, achieving high specific IgG responses with lower vaccine doses than those administered intramuscularly [309]. Similarly, Edens et al. reported the use of dissolving MN arrays for the delivery of inactivated polio vaccine [310] and a measles vaccine [311] to rhesus macaques. In these studies, results showed the induction of neutralising antibody titres against both viruses, comparable to conventional immunisation routes such as SC and IM. A combined approach against measles and rubella was also described by Joyce et al., who used CMC-based MN arrays [312]. In this case, TD immunisation elicited protective antibody titres against both viruses at higher levels than SC injection and protected the animals from viral challenge with wild-type measles. It is worth mentioning as well the results obtained by Zhu et al. 
in MN-based immunisation against enterovirus 71 (EV71), the causative agent of hand-foot-and-mouth disease [313]. Here, the authors used hyaluronan MN arrays to deliver EV71 virus-like particles through the skin, achieving robust and protective immune responses at a 10-fold lower antigen dose in comparison with conventional IM vaccination. Hepatitis B and HIV have also been the focus of dissolving MN vaccine development. In 2012, Pattani et al. described the development of Gantrez ® MNs loaded with a recombinant HIV antigen (gp140) and monophosphoryl lipid A as an adjuvant for TD immunisation [314]. Mice received a total of four vaccine doses (days 0, 14, 28 and 42) in different combinations of administration routes: MN prime and intravaginal boosts, MN prime and IN boosts, all SC injections, or all MN administrations. The developed MN arrays were able to prime antigen-specific IgG responses, which increased particularly with IN boosts. This immunisation regimen led to increased serum and mucosal antibody levels, at least similar to those elicited by SC injection, and higher in the case of IgA. Other studies reported the use of dissolving MN arrays to deliver a recombinant human adenovirus type 5 vector encoding the HIV-1 gag protein [315,316]. In this case, MN-based immunisation led to potent cytolytic multifunctional CD8 + T cell responses in mice, promoted by a specific subset of DCs present in the skin. The authors also demonstrated that this cellular response was long-lived and retained recall capacity for memory responses up to two years after immunisation. In the case of hepatitis B virus, Qiu et al. described the use of PVP MN arrays for the TD delivery of a plasmid DNA vaccine with or without additional adjuvants such as CpG ODN, cationic liposomes or both [317]. High antibody responses were observed with this immunisation approach, particularly when the antigen was encapsulated in the cationic liposomes and administered with CpG ODN. On the other hand, Perez Cuevas et al. reported the use of CMC MN arrays for HBsAg delivery to mice and rhesus macaques [318]. These MN arrays elicited antibody responses comparable to those obtained with IM immunisation, without any additional adjuvants. More recently, Kim et al. presented a combinatorial approach consisting of a PLA/CMC MN tip loaded with HBsAg for slow release and an antigen-loaded CMC coating for bolus release [319]. The results showed effective immune priming by the bolus antigen release from the CMC coating, followed by a boost effect generated by the slow antigen release from the PLA MN tips. Finally, it is worth noting the role played by this type of MN array in the endeavours to vaccinate against SARS-CoV-2. In early 2020, Kim et al. reported the use of CMC MN arrays for TD immunisation with recombinant viral proteins from MERS and SARS-CoV-2 [320]. In this study, results showed substantial increases in specific antibody levels at two weeks postimmunisation with MN arrays, in comparison with earlier time points. Similarly, Kuwentrai et al. described the delivery of the RBD of the SARS-CoV-2 spike protein using hyaluronan MN arrays, including alum as an additional adjuvant [321]. This approach elicited high and long-lasting antibody responses, as well as significant T-cell responses, measured by interferon-gamma (IFN-γ) expression.
Interestingly, these results were not achieved when the same system was used to deliver mRNA, in an attempt to simulate current SARS-CoV-2 vaccines based on this technology.

Implantable MN Arrays

In recent years, a new type of biodegradable MN array has been studied for long-acting drug delivery and, in a few studies, for vaccination purposes. Implantable MN arrays usually consist of slowly degradable needle tips loaded with the antigen or drug of interest and supported by a fast-dissolving backing, which allows implantation of the needle tips inside the skin upon application. In vaccine delivery, this approach is particularly interesting for controlling vaccine kinetics and antigen delivery to the lymphatics, which can greatly influence the immune response elicited [41]. Chen et al. described the development of an implantable array with chitosan needle tips containing an inactivated influenza vaccine and a fast-dissolving PVA/PVP backing layer [322]. The results showed higher antibody levels in the MN group than in the IM immunisation group, a fact the authors attributed to the adjuvant activity and depot effect of the chitosan needle tips. Moreover, MN vaccination led to efficient protection of mice against viral challenge, in comparison with conventional IM injection. In another study, Boopathy et al. reported the enhancement of the humoral immune response against an HIV trimer antigen by vaccination with an implantable MN array [323]. The authors used silk fibroin protein to form the antigen-loaded needle tips, which provided sustained release in the skin over two weeks. This allowed not only increased retention of the vaccine at the administration site, but also higher colocalisation of the antigen with follicular DCs in the draining lymph nodes and increased priming of germinal centre B cells, essential for the development of long-lasting antibody responses. One month after vaccine administration, the MN-immunised group showed 1300-fold higher antibody levels than the group receiving a single-dose intradermal injection of the same vaccine, demonstrating the potential of this approach for HIV vaccination. Despite these promising results and the demonstrated potential for MN-based vaccination, it is worth considering some of the challenges concerning the development of these products at the clinical level and up to market approval. In terms of product development and manufacturing, researchers should consider minimising the number of process steps to facilitate up-scaling and high-quality GMP manufacturing, as well as other possible requirements such as aseptic fabrication, terminal sterilisation and costs of production [53]. Additionally, common vaccine-related issues such as stability in storage and cold-chain requirements must also be taken into consideration at this stage [34]. Achieving stable vaccine formulations in MN array format, which can be stored at room temperature and withstand the high temperatures characteristic of certain climates, could be a game-changer in terms of worldwide vaccine coverage and distribution [324].

Clinically Approved Nanovaccines against Viruses

Generally, the development of vaccines must go through different stages of preclinical and clinical testing in order to gain approval for production and marketing. The first stage of the vaccine development journey is the preclinical stage, in which the infectious agent is extensively investigated for immunogenic antigens that can trigger immune responses in the host.
The outcome of preclinical studies is assessed by the regulator, which will authorise the developer to start clinical trials only if the benefit of the developed vaccine outweighs the risk of undesired side effects or toxicity. Clinical trials involve studying the effects of vaccines under development in human subjects over three sequential phases. Phase 1 involves vaccine testing in a small group of healthy adult volunteers to ensure that the developed product is free from major safety concerns and to evaluate dose ranging and the elicited immune response. Phase 2 trials involve a pilot efficacy study in a larger group of volunteers, with the aim of confirming safety. If the vaccine under investigation demonstrates efficacy and a low risk of general toxicity, it will enter the third phase of clinical trials. Phase 3 trials involve a much larger group drawn from a wider population, often tens of thousands of people, including volunteers from areas with high viral transmission rates, elderly people and those with underlying health conditions, in order to confirm safety in these groups, efficacy, and the effective dosing level of the vaccine. Successful vaccines from Phase 3 can seek marketing authorisation from regulatory authorities for mass production and marketing. After marketing authorisation, vaccine products enter Phase 4 pharmacovigilance, in which they are continuously and carefully monitored for safety and efficacy [325,326]. In emergencies, such as the current COVID-19 pandemic, a vaccine that has not yet been fully approved can be made available under "emergency use authorisation" to facilitate its availability and use for mass immunisation, even if it is still under clinical trials [326,327]. There are currently about 320 vaccine products against SARS-CoV-2 under development, with approximately 126 vaccines in clinical trials and 194 in preclinical assessment [328]. However, only eight candidates are in Phase 4 clinical trials after being developed and marketed, to assess their performance in real-life scenarios and to detect long-term effects in the general population [326,328]. Of these eight, only BNT162b2 and mRNA-1273, of Pfizer-BioNTech and Moderna respectively, are based on LNPs as nonviral-vector nanocarriers (Table 1). Given the limitations associated with traditional vaccines using live-attenuated or inactivated viruses, such as the time-consuming manufacturing process, toxicity and high infectivity, it is notable that there are, at present, only a limited number of fully approved vaccines against viruses utilising an advanced biomaterial or nonviral nanocarrier for their development and delivery [328,329]. Historically, virosomes and VLP systems showed clinical success for both HAV and HBV vaccines [233,234], as well as influenza [235,238], and more recently with the prophylactic HPV vaccines [134,237] (Table 1). In addition, the herpes zoster subunit vaccine (HZ/su), Shingrix ® , developed by GlaxoSmithKline, is one of the few nanovaccines approved for clinical administration which uses liposomes for the delivery of a viral antigen cargo (Table 1) [190][191][192]. Therefore, there is still a lot of work to be done for the development of clinically approved nanovaccines which can satisfy the stringent quality, safety and efficacy requirements of regulators. The new generation of LNP-based SARS-CoV-2 vaccines, which are under Phase 4 clinical evaluation, has opened the door to advanced nanotechnological approaches for the development of other nanovaccines (Table 1) [328,330,331].
The LNPs of both BNT162b2 and mRNA-1273 were utilised to encapsulate and deliver the nucleoside-modified mRNA encoding the full-length spike (S) glycoprotein of the SARS-CoV-2 virus (Table 1) [330,331]. These vaccines elicited strong humoral and cell-mediated immune responses to the S antigen, protecting the host against SARS-CoV-2. The main disadvantage of these vaccines is stability, which requires storage at ultralow temperatures, from −80 to −60 °C and from −50 to −15 °C for the Pfizer-BioNTech and Moderna vaccines, respectively [330,331]. The Pfizer-BioNTech vaccine can be stored frozen at −25 to −15 °C for two weeks only, requiring special transport and storage equipment [330]. On the other hand, the Moderna vaccine can be stored refrigerated at 2-8 °C for 30 days [331]. Other SARS-CoV-2 LNP-based vaccines include the CoV2 SAM (LNP) vaccine (GlaxoSmithKline, Phase 1 clinical trial), LNP-nCoV saRNA (Imperial College London, Phase 1), the LNP-nCOV saRNA-02 vaccine (MRC/UVRI and LSHTM Uganda Research Unit, Phase 1) and the HDT-301 vaccine (SENAI CIMATEC, Phase 1), which comprise self-amplifying RNA (saRNA) encapsulated within LNPs and are all still in early development stages [328]. In addition to the use of nanotechnological approaches in vaccine delivery, other advanced techniques such as electroporation have been used for intracellular delivery of the intradermally injected INO-4800 vaccine (Inovio Pharmaceuticals (San Diego, CA, USA)/International Vaccine Institute (Seoul, Korea)/Advaccine Biopharmaceuticals Suzhou Co., Ltd. (Suzhou, China), Phase 2/3 clinical trials) [328,332,333]. INO-4800 is a DNA vaccine which contains the plasmid pGX9501 encoding the SARS-CoV-2 full-length spike glycoprotein. This vaccine utilises a small electric pulse generated from a hand-held smart device to reversibly create small pores in the cell membrane, promoting cellular transfection of the plasmid and activating the immune response. The vaccine was reported to be immunogenic, generating robust humoral and cellular immune responses, and was shown to be safe and well tolerated [332,333]. Compared to other SARS-CoV-2 vaccines, INO-4800 was reported to be stable at room temperature for more than a year, not requiring special freezing conditions during shipment and storage [333]. Other examples of electroporation-based vaccines under clinical investigation include the VGX-3100 synthetic DNA vaccine targeting HPV16/18 (Phase 2b trial) [334], a CMV DNA vaccine (Phase 2 trial) [335] and a few others in Phase 1 trials, reviewed by Xu et al. [336]. Microneedle-based technologies for TD vaccination are also under clinical development at the moment. For instance, a Phase 1/2 double-blind randomised trial sponsored by Micron Biomedical, Inc. (Atlanta, GA, USA) is currently recruiting participants (NCT04394689); in this trial, the safety and immunogenicity of a measles-rubella dissolving MN array vaccine will be evaluated in adults, toddlers and infants in comparison with SC administration of a WHO-prequalified vaccine [337]. Previous studies have also demonstrated the efficacy of MN-based vaccines at the clinical level, particularly with an influenza vaccine-loaded dissolving MN array developed at the Georgia Institute of Technology by the Prausnitz group [305]. The use of nanomaterials and other advanced technologies is indeed a promising approach for the effective and safe delivery of vaccines based on viral genetic materials, which is currently being rigorously tested for SARS-CoV-2 but could be applied more generally to other vaccines in the near future.
However, there are emerging challenges for the regulatory, clinical and marketing authorisation of nanomaterial-based vaccines that need to be considered from the early stages of development; these are discussed in the following section.

Manufacturing Considerations and Regulatory Requirements for Pharmaceutical Development of Nanovaccines

A vaccine product is often engineered with four main components: an immunogen derived from the pathogen (proteins, peptides, lipids, mRNA) that can induce an immune response; adjuvants, which are stimulatory agents that potentiate the immune response towards the delivered immunogen (independent or as a conjugate to the immunogen); a delivery strategy, which utilises nanocarriers to encapsulate or present the immunogen to APCs in a stable and targeted manner (for instance, viral vectors, nanocarriers or hydrogels); and finally, a device designed to physically administer the vaccine (syringes, implants, microneedle patches) [29,329,338]. There are various quality, safety and efficacy regulatory requirements for the development and production of these complex biopharmaceutical products, from the early stages of development, through the various stages of manufacturing, to storage and distribution. Here, we shed light on these regulatory requirements, mainly for upstream and downstream processing, highlighting special considerations for nanovaccine formulation development.

Upstream Processing

The manufacturing process of nanovaccines can be considered identical to that of conventional vaccines for upstream processing. Once the fabrication method of the immunogen has been selected (live-attenuated, inactivated/killed, subunit, conjugation, recombinant, recombinant vector or VLP), the Master Viral Seed (MVS) can be produced and extensively characterised to ensure purity and safety [339]. Depending on the starting material, primary cell lines, continuous cell lines or chicken eggs can be used as the substrate to grow the virus. These production platforms each have their own advantages and drawbacks. Thus, during development and selection, the following points should be considered: yields, effects of post-translational modifications, cost, contamination risks, scalability, complexity, production timescale, glycosylation profiles and frameworks for regulatory approval [340][341][342][343][344]. From this initial bank, Working Viral Seeds (WVS) are propagated for production lots to ensure consistency and confidence in the quality of the final product. WVS can be considered intermediate products, as multiple strains or different subunits can be blended in the final bulk during formulation. The use of master and working seed lots provides a method to limit the replication of the seed and to minimise the possibility of genetic variation. This is followed by the isolation of the immunogen from its environment, generally via centrifugation and homogenisation [345]. Further manipulation of the material, including encapsulation into a nanocarrier, can be considered as downstream processing, as the immunogen is unaltered after this point.

Downstream Processing

Once the immunogen is in its free form, the material can be purified using one of the following methods: sterile filtration [346], solvent extraction [347], alcohol precipitation [348], ultrafiltration [349], gel permeation chromatography [350], zonal centrifugation [351], formaldehyde inactivation [352], diafiltration [353], detergent precipitation [354] or various other methods of chromatography.
Depending on the selected nanocarrier delivery strategy, an appropriate encapsulation method is utilised. For liposomal manufacturing, the three basic techniques include: (1) mechanical methods such as film hydration and microfluidisation, (2) solvent displacement methods such as ethanol injection and reverse-phase evaporation, and (3) detergent depletion methods [355]. Several other methods can be used for the production and incorporation of the immunogen; however, the following points should be considered during the selection process:

• Scalability: small-scale laboratory research should have the capability to be easily scaled up to meet market requirements, taking into account technological limitations;
• Use of organic solvents: most recognised methods use organic solvents; however, due to their detrimental effect on health, they need to be limited to minor amounts of class II solvents such as chloroform and methanol to meet European and US pharmacopoeial requirements;
• Consistency: because the use of nanocarriers increases the surface area, the biodistribution profile is affected, which can lead to unpredictable reactivity. To limit the chance of any unwanted reactivity, it is important to characterise and control the physicochemical properties (size distribution, charge, lamellarity, entrapment efficiency, phase transition temperature, antigen release profile) between batches;
• Temperature: most immunogens are only stable at lower temperatures; hence, any methods that require higher temperatures cannot be utilised.

Once the immunogen has been loaded into its nanocarrier, size manipulation is often carried out using homogenisation, which applies shear forces to achieve a uniform size distribution [356]. These factors are often controlled using the pore size of the filter and the number of recirculation cycles. At this point, intermediate products can be stored at low temperatures, depending on the results of their stability testing. Hence, a point to consider during the validation and qualification of the vaccine and the method of manufacture is the stability of the nanocarriers at different temperatures over various time points. For both the intermediates and the final bulk, stability tests are required for physicochemical analysis and biological assays. The implementation of a stability protocol should be based on detailed information about the types of testing, including specifications, testing intervals and data analysis.

Formulation Considerations

Suitably controlled quantities of all ingredients are blended to uniformity to produce the final bulk; this may include buffers, bulking agents, stabilising excipients, adjuvants and preservatives. As with the development of any pharmaceutical product, the addition and concentration of each excipient needs to be meticulously justified to the regulators, and sufficient relevant data should be provided. If the intermediate production included a sterile filtration step during downstream processing, the bulk would need to be prepared aseptically; otherwise, a sterilisation step is required at the end of processing. If the vaccine product is to be commercialised as a multistrain product, multiple intermediates containing different WVS can be blended. However, before the blending process can begin, the endotoxin levels and potency of the intermediates should be tested.
The most commonly recognised tests in industry are the Limulus Amebocyte Lysate (LAL) assay for endotoxin evaluation [357] and plaque formation assays, endpoint dilution assays (tissue culture infective dose, TCID 50 ), virus neutralisation assays or quantitative polymerase chain reaction (PCR) assays for potency [358]. The purpose of this evaluation before the blending process is that intermediates with high potency or endotoxin levels can be blended with intermediates with low potency or endotoxin levels to meet the specification set out in the licence. The effect of the nanocarriers should be considered when setting these specifications during product development, as they may mask the accuracy of the result. Most nanocarriers exhibit the following three-phase release profile: (1) burst release due to the desorption of molecules attached to their surface; (2) an intermediate phase, in which material is released as the matrix of the carrier degrades; and (3) the final release of the encapsulated material [359]. Hence, the release profile should be accurately characterised in order to design precise potency and endotoxin assays. During formulation development, the interactions of the components should also be meticulously studied to design nanovaccines with optimum activity. These interactions include, but are not limited to, immunogen-nanocarrier interactions and adjuvant-nanocarrier interactions. Immunogen-nanocarrier interactions could include altered antigen presentation to APCs, which can be either enhanced or inhibited. For example, poly(vinyl alcohol)-coated iron oxide nanoparticles have been shown to inhibit the processing of OVA by DCs to stimulate CD4 + T cells [360], whereas poly(D,L-lactide-co-glycolide)-based polymeric nanoparticles significantly improve antigen presentation and T cell activation by OVA in DCs [361]. Adjuvant-nanocarrier interactions should also be extensively studied to avoid any hyperactivation of the adjuvant in the presence of the nanocarrier. Several vaccines are known to have been recalled from the market due to adjuvant toxicity resulting in the stimulation of CD28 on T cells and triggering hypercytokinaemia [362]. Furthermore, as nanocarriers could have unpredictable biological activities in a host per se, consideration should be given to studying their effects in clinical studies compared to the bulk material. The major nanotoxicity pathways in cells could involve oxidative stress [363][364][365], inflammation [366][367][368] and genotoxicity [369][370][371], which therefore requires extensive preclinical and clinical toxicity assessment of the nanocarriers used for vaccine formulations.

Quality Control and Release Testing

As with any other biopharmaceutical product, manufacturers of nanovaccines are bound to perform appropriate tests against the licensed specification, according to 21 CFR 610 for the US market [372]. For nanotechnology-enabled products, these specifications are assessed on a case-by-case basis due to the lack of uniformity between regulators. In general, the release lots of the final product must meet the standards of safety, purity and potency established for the particular vaccine product, as highlighted in ICH Q5C [373]. Examples of these tests include, but are not limited to, potency assays, general safety (detection of adventitious agents), sterility, bacterial endotoxin, purity, residual moisture, pyrogens, identity and constituted materials. Samples should be taken throughout the manufacturing process to maintain and document quality control of the processed batch.
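As a concrete illustration of the endpoint-dilution potency readout mentioned above, the TCID50 of a virus stock is commonly estimated with the Spearman-Kärber method from the fraction of infected wells at each dilution. The snippet below is a minimal, generic sketch of that calculation; the well counts, dilution series and inoculum volume are hypothetical, and a real release assay follows the validated protocol set out in the product licence.

```python
import numpy as np

def tcid50_spearman_karber(log10_dilutions, wells_positive, wells_total, inoculum_ml=0.1):
    """Estimate log10(TCID50/mL) from an endpoint-dilution assay (Spearman-Karber).

    Assumes an equally spaced log10 dilution series that spans ~100% infection
    at the most concentrated dilution and ~0% at the most dilute one.
    """
    x = np.asarray(log10_dilutions, dtype=float)            # e.g. -1 means a 10^-1 dilution
    p = np.asarray(wells_positive, dtype=float) / np.asarray(wells_total, dtype=float)
    order = np.argsort(x)                                   # sort from most dilute to least dilute
    x, p = x[order], p[order]
    d = x[1] - x[0]                                         # log10 step between dilutions
    log10_endpoint = x[-1] + d / 2.0 - d * p.sum()          # 50% endpoint as a log10 dilution
    return -log10_endpoint + np.log10(1.0 / inoculum_ml)    # convert to log10(TCID50) per mL

# hypothetical titration: 8 wells per ten-fold dilution, scored for cytopathic effect
dilutions = [-1, -2, -3, -4, -5, -6, -7, -8]
positives = [ 8,  8,  8,  6,  3,  1,  0,  0]
print(f"titre ~ 10^{tcid50_spearman_karber(dilutions, positives, [8]*8):.2f} TCID50/mL")
```

Blending decisions of the kind described above would then compare such titres, together with the corresponding endotoxin results, for each intermediate against the licensed specification.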
However, for general safety and bioburden testing, the samples should be taken at the "dirtiest" step of the manufacturing process to demonstrate the absence of contaminants in the product.
Regulatory Requirements and Challenges
Although it has been over 15 years since the first protein-based nanoparticle drug, Abraxane® (albumin-bound paclitaxel), was approved for use in 2005 by the FDA [374], nanomedicines continue to challenge the regulatory authorities. The main role of the regulator is to ensure the quality, safety and efficacy of all medical products and devices through existing, well-defined regulatory frameworks; however, as both scientific innovations and market expectations evolve, it is becoming increasingly difficult to set specific guidelines. A lack of guidelines and harmonisation from these regulatory bodies is causing a high degree of uncertainty for product developers, hindering the development and marketing of novel nanotechnology-enabled products. To ensure a smooth approval process, the identification of, and a general agreement on, the regulatory requirements applicable to the evaluated product/device are therefore prerequisites. Thus, nanotechnology-enabled products are often developed and scaled up with involvement from the authorities. Under current regulatory frameworks, the approval process involves four principal elements, outlined in Table 2. Generally, regulators require release and stability data from pilot batches, which are prepared using the same process as that of the intended product for the market. In addition, they also require excipient information, a detailed rationale for the manufacturing process (e.g., purification and sterilisation) and validation methods for the release tests used, including a justification for their use. A justification should also be provided for the nanocarrier utilised, in conjunction with preclinical data established in an animal model characterising the immune response for each of the component parts (adjuvants/nanocarriers) as well as the final vaccine composition. In addition, the toxicology results for the clinical trials would also be essential to establish safety. These results should include data related to local tolerance and repeat-dose toxicology performed in preclinical settings.

| Principal Elements | Requirements |
| Preparation of preclinical materials | Proof-of-concept testing in animal models; manufacture of clinical material in accordance with cGMP; toxicology investigations in an applicable animal model |
| Investigational new drug submission | Application for regulatory review |
| Safety and efficacy testing | Clinical and nonclinical studies |
| Biologics license application to regulators for final review and licensure | Submission of clinical, nonclinical and manufacturing data |

Conclusions and Future Perspective
In conclusion, a carefully designed delivery system for vaccines, together with the choice of a pertinent administration route, is crucial both for enhancing immunisation effectiveness and for improving vaccine stability, and can help manage dose frequency, patient convenience and logistics for mass immunisation. In essence, viral vectors provide a method to deliver genetic material encoding the pathogen's antigen directly to the host cell; when the antigen is produced in the host, an immune response is mounted. This capitalises on the vectors' specificity of infection and can lead to the production of high concentrations of the target antigen.
Viral vectors therefore have the advantage of eliciting strong cellular and humoral immune responses. Although pre-existing immunity to the vector may dampen the response, and the safety of replication-competent vectors is questioned, the potential for their use as vaccine delivery mechanisms is promising. Besides viral vectors, there has been significant progress in developing nonviral vector platforms for vaccine delivery in recent years. Nanocarrier systems such as liposomes, virosomes and VLPs have made their way to the market. These systems have been shown to prevent premature antigen release and to prolong antigen presentation for potent immunity against viral diseases. However, the majority of the other nanocarriers discussed here remain at the early development and preclinical testing stage. Their ultrasmall size and large surface area can lead to aggregation and, hence, concern over the toxicity and safety of these carriers for clinical use. Therefore, better understanding and knowledge in this regard are essential for developing delivery vehicles with clinical potential. Advanced delivery systems, like hydrogels and microneedles, have also shown great potential for localised immunisation. The use of hydrogels, both polymeric and peptide-based, has been shown to be a successful strategy for the localised delivery of viral antigens in preclinical studies, thanks to the highly viscous, shear-thinning, thixotropic and mucoadhesive properties of these materials. These vehicle attributes enable the development of injectable and sprayable formulations capable of forming a viral antigen depot at the site of administration, leading to enhanced activation of APCs and thus improving both humoral and cellular immune responses over a prolonged duration of action. Hydrogels can act both as vehicles for viral antigens and as immunogenic materials when functionalised with adjuvants. Therefore, we expect to see hydrogel-based viral vaccine formulations for both mucosal (IN) and parenteral (IM, SC and TD) immunisation approved for clinical use in the near future. Microneedles have now also been widely studied, at both the preclinical and clinical levels, for vaccine delivery purposes. Promising results have been obtained with these prototypes, with MN arrays achieving efficacy, safety and stability comparable to IM vaccine administration. The potential of these systems to facilitate mass vaccination programmes, particularly in low-resource settings, and to promote self-administration of vaccines in contexts where it is not desirable for people to physically visit a healthcare setting, such as a pandemic, is substantial and could significantly impact vaccine distribution and coverage in the next few years. Nevertheless, it is still necessary for scientific and regulatory experts to overcome certain hurdles concerning mass production, standardised characterisation and reproducibility of these devices before they can reach the market and have the expected impact. Despite significant recent advancements in nanomedicine and biotechnology, there are still a limited number of fully approved nanovaccines against viruses at present.
The hybrid systems virosomes and VLPs, which are liposome-like structures decorated with viral proteins, are by far the most clinically successful nanovaccine products; for example, the Epaxal® (HAV), Recombivax HB and Engerix-B (HBV) vaccines, as well as the influenza vaccine Inflexal® V, have all been clinically approved and widely used since the late 1990s. More recently, the prophylactic HPV vaccines Gardasil®, Gardasil-9® and Cervarix®, also based on virosome and VLP nanocarriers, were approved. Additionally, a closely related liposome-based herpes zoster nanovaccine, Shingrix®, was fully approved by the FDA in 2017 for patients older than 50 years. In 2020, the first LNP-based nanovaccines against SARS-CoV-2, BNT162b2 and mRNA-1273, were granted emergency use authorisation to contain the COVID-19 pandemic and are still under Phase 4 trials to assess the long-term effects of these products. Development of nanovaccine products that can meet the stringent quality, safety and efficacy requirements of regulators is indeed a challenging process, which can be attributed to the complex nature of these multicomponent products. Although nanocarriers have been proven to enhance the immunisation efficacy of vaccines, the safety and stability profiles of both nanocarriers and antigenic elements should be carefully scrutinised in light of the relevant regulators' guidelines. However, there is a lack of harmonisation in the regulation of nanotechnology-enabled products and related advanced technologies/devices among these regulatory bodies, causing uncertainty for product developers and hindering the development and marketing of nanovaccine products. Therefore, most clinically approved nanovaccines were developed with direct involvement of regulators from the early development stages to identify and agree on the regulatory requirements for the developed product, on a case-by-case basis due to the very complex and unique nature of individual nanovaccine products. However, agreement on general regulatory guidelines for the quality, safety and efficacy of nanovaccines, including nanospecific testing considerations, would ensure a smooth approval process for safe and effective products.
24,194.8
2021-12-01T00:00:00.000
[ "Medicine", "Engineering", "Materials Science" ]
Versatile Microfluidics for Biofabrication Platforms Enabled by an Agile and Inexpensive Fabrication Pipeline
Microfluidics have transformed diagnosis and screening in regenerative medicine. Recently, they have shown much promise in biofabrication. However, their adoption is inhibited by costly and drawn-out lithographic processes, thus limiting progress. Here, multi-material fibers with complex core-shell geometries, with sizes matching those of human arteries and arterioles, are fabricated employing versatile microfluidic devices produced using an agile and inexpensive manufacturing pipeline. The pipeline consists of material extrusion additive manufacturing with an innovative continuously varied extrusion (CONVEX) approach to produce microfluidics with complex seamless geometries, including novel variable-width zigzag (V-zigzag) mixers with channel widths ranging from 100 to 400 µm and hydrodynamic flow-focusing components. The microfluidic systems facilitated rapid mixing of fluids by decelerating the fluids at specific zones to allow for increased diffusion across the interfaces. Better mixing even at high flow rates (100–1000 µL min−1), whilst avoiding turbulence, led to high cell cytocompatibility (>86%) even when 100 µm nozzles were used. The presented 3D-printed microfluidic system is versatile, simple and efficient, offering great potential to significantly advance microfluidic platforms in regenerative medicine.
Introduction
However, challenges remain, including the capacity for mixing fluids of different viscosity and composition, affordability and versatility. Efficient mixing of different fluids in microfluidics for dilution, homogenization and reactivity is a significant challenge due to the lack of turbulent flow within narrow channels at typical flow rates (10–100 μL min−1). Thus, mixing relies solely on diffusion across the fluid interface, which can be ineffective. [4,11,14] Passive and active mixer configurations overcome these drawbacks at low flow rates (i.e., 1–10 μL min−1) due to diffusion at the boundaries of the fluidic layers; however, achieving complete mixing at higher flow rates (i.e., 100–1000 μL min−1) remains a challenge. [12,15] Therefore, new and effective micromixer designs are crucial to achieve complete mixing over a broad range of flow rates. Photo- or soft-lithographic processes are traditionally used to fabricate microfluidic devices from polydimethylsiloxane (PDMS). [16] Whilst these processes are well established, [17,18] they involve a series of manufacturing steps, impacting their wider adoption since they are difficult to automate, time-consuming, resource-heavy and non-agile. [5,19] All of these factors drive up costs, with individual chips commonly costing over US$ 200, [16,20] even before considering cleanroom fees. As a result, most microfluidics are typically produced in small quantities in laboratories; [21] with increasing emphasis on translation and low-cost microfluidic devices, lithographic fabrication methods present a "manufacturability roadblock". [22] Others have employed AM, laser ablation and capillary drawing to produce glass-based microfluidic devices. Such devices have been utilized in several biomedical applications; [23,24] however, wider adoption has been limited due to the drawn-out steps involved in device fabrication, the higher cost of raw materials, fragility and the need for sealing of parts. [25]
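To see why purely diffusive mixing fails at higher flow rates, a back-of-the-envelope scaling estimate is helpful. The sketch below is our own illustration, not taken from the paper; it assumes a water-like diffusivity, a 400 × 300 μm channel and the classic estimate that two side-by-side streams mix when molecules have diffused across half the channel width.

```python
# Back-of-the-envelope scaling (our illustration, not from the paper) of why
# diffusion-limited mixing fails at high flow rates. Assumptions: water-like
# diffusivity D ~ 2.3e-9 m^2/s, a 400 x 300 um channel, and the estimate
# t_mix ~ (w/2)^2 / (2D) for two side-by-side streams.

D = 2.3e-9                 # m^2/s, self-diffusion coefficient of water
W, H = 400e-6, 300e-6      # m, channel width and height

def mixing_length_mm(q_ul_per_min: float) -> float:
    """Channel length needed for purely diffusive mixing at a given flow rate."""
    q = q_ul_per_min * 1e-9 / 60.0       # uL/min -> m^3/s
    v = q / (W * H)                      # mean velocity, m/s
    t_mix = (W / 2) ** 2 / (2 * D)       # s, diffusion across half the width
    return v * t_mix * 1e3               # required channel length, mm

for q in (1, 10, 100, 1000):
    print(f"{q:>5} uL/min -> ~{mixing_length_mm(q):.0f} mm of channel")
```

Under these assumptions, a straight channel would need on the order of 0.1–1 m of length at 100–1000 μL min−1, which is why geometry-assisted micromixers become essential in this regime.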
Readily accessible additive manufacturing (AM) platforms offer a compelling solution to simplify microfluidic device fabrication to a single-step platform, whilst also reducing the cost of device fabrication. Recent developments in AM technologies and custom toolpaths have generated new opportunities to capitalize on the fabrication of high-value functional parts, including microfluidic devices. [26,27] Material extrusion additive manufacturing (MEAM), also referred to as fused deposition modelling (FDM) and fused filament fabrication (FFF), [2,13,27–29] stereolithography (SLA) [30,31] and inkjet technologies [30,32] are now commonly used to fabricate microfluidic devices. Of these, MEAM is the most affordable (price per device). [33] Moreover, its applicability to a wide selection of materials, with minimal postprocessing steps, makes MEAM an ideal choice for assembling low-cost microfluidic devices. [30,33] Several studies have demonstrated the use of the MEAM technique for fabricating microfluidic devices, either by direct printing [5,29,34,35] or by indirect printing, [2,28] using MEAM parts as a sacrificial template and removing the features embedded in the matrix of choice. [28,36] The current state-of-the-art MEAM microfluidic devices employ very simple designs for passive mixers, e.g., straight Y-channels and serpentine channels, [35,36] or are formed by soldering together extruded filaments, [28] which compromises effective mixing. Although some studies [2,37] have used the inherent ridges or slanted walls created during 3D printing to improve mixing efficiency, these ridges can cause particles and fluids to stagnate inside the channels, becoming permanently stuck, damaging encapsulated cells in the process, or trapping them irrecoverably. [5] Current MEAM microfluidic devices also suffer from low optical transparency, low resolution, difficulties in achieving leak-free and seamless structures, and poor surface finish (Ra ≈ 10.9 μm vs 0.35 μm for laser-based AM). Limited capabilities to create complex passive mixers to obtain homogeneous solutions [27,38] further hamper their widespread application and translation. These limitations arise from the CAD models employed in device fabrication; these typically slice the model into thin layers, generating a toolpath per layer to start the printing process. [26,39] We have previously shown that this slicer software prevents the full potential of MEAM printers from being realized, since it considers each extruded filament to have a constant aspect ratio; the part is effectively filled by positioning filaments side-by-side (or according to the chosen infill pattern). [39] The continuously varied extrusion (CONVEX) method directly addresses these limitations. [39] This design approach enables the production of intricate and seamless structures without the defects and voids that are found in comparable structures printed by slicer software.
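To make the CONVEX idea concrete, the sketch below shows one simple way a variable-width bead can be encoded in ordinary GCode by varying the extruded filament length per move while the layer height stays constant. This is our illustrative reconstruction under volume-conservation assumptions, not the authors' FullControl code; the width profile, feed rate and segment count are hypothetical.

```python
import math

# Illustrative sketch (not the authors' FullControl code): generating a GCode
# toolpath for a variable-width zigzag channel in the spirit of CONVEX.
# Assumptions: 1.75 mm filament, 0.3 mm layer height, sinusoidal path
# y = A*sin(2*pi*x/wl) with A = 1.5 mm and wl = 3.3 mm as in the paper;
# the width profile and feed rate are hypothetical.

FILAMENT_AREA = math.pi * (1.75 / 2) ** 2  # mm^2, feedstock cross-section
LAYER_HEIGHT = 0.3                          # mm

def extrusion_length(seg_len_mm: float, width_mm: float) -> float:
    """Filament length to feed so the deposited bead has the target width
    and layer height over one segment (volume conservation)."""
    return seg_len_mm * width_mm * LAYER_HEIGHT / FILAMENT_AREA

def variable_width_zigzag(n_pts=200, length=20.0, amp=1.5, wl=3.3):
    """Yield GCode moves along y = amp*sin(2*pi*x/wl); the bead is widened
    up to 1.5x near the sine extrema (a hypothetical profile)."""
    gcode, e_total, prev = ["G90", "G92 E0"], 0.0, (0.0, 0.0)
    for i in range(1, n_pts + 1):
        x = length * i / n_pts
        y = amp * math.sin(2 * math.pi * x / wl)
        width = 0.4 * (1.0 + 0.5 * abs(math.sin(2 * math.pi * x / wl)))
        seg = math.hypot(x - prev[0], y - prev[1])
        e_total += extrusion_length(seg, width)  # cumulative extrusion
        gcode.append(f"G1 X{x:.3f} Y{y:.3f} E{e_total:.5f} F200")
        prev = (x, y)
    return gcode

if __name__ == "__main__":
    print("\n".join(variable_width_zigzag()[:10]))
```

In practice the same effect can be achieved by co-varying print speed and extruder rate, as the paper describes; the point is that bead width becomes a free, continuously controllable design variable rather than a slicer constant.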
Here, we report an innovative manufacturing pipeline developed for the rapid fabrication of versatile and inexpensive microfluidic devices with channel widths ranging from 100 μm to several millimetres, integrating four seamless passive mixer geometries and a flow-focusing component, for use in the biofabrication of complex fibrous architectures mimicking blood vessels. The mixing properties of the MEAM microfluidic devices were examined experimentally from 1 to 1000 μL min−1 and correlated with computational fluid dynamics (CFD) and real-time imaging using light sheet fluorescence microscopy (LSFM) to select the optimum design. The potential of the resulting microfluidic devices for 3D bioprinting with high cell survival was demonstrated by extruding cell-laden bioinks containing SaOS2 cells. This manufacturing pipeline opens up the development of a new generation of microfluidic devices with a wide range of applications in delivery and regenerative medicine.
Multi-Purpose Microfluidic Device-Manufacturing Platform
Herein, versatile microfluidic devices, capable of producing multi-material fibers and droplets with complex architectures, have been developed employing an inexpensive MEAM 3D printer coupled with an innovative and easily adoptable fabrication approach (Figure 1). The process involves precise control of the 3D printing process employing the CONVEX design approach (Figure 1a[i]). Notably, CONVEX allows the fabrication of complex variable-width channels and seamless structures which cannot be accessed with conventional "slicer-based" approaches (Figure 2). Thus, the devices developed give rise to reliable and predictable flow patterns and profiles. The fabrication approach involved the use of a MEAM 3D printer (cost US$ 300) to extrude a continuous single layer of acrylonitrile butadiene styrene (ABS) channels with an extruded filament width and layer height of 400 and 300 μm, respectively (Figure 1a[ii]). The resulting single-layer printing strategy improved the surface finish, which was comparable to that of an injection-moulded polymer (see Section 2.2). The resulting channels were embedded into a PDMS matrix (Figure 1a[iii]), and after curing, the PDMS-ABS device was flushed with acetone to dissolve the ABS and reveal the PDMS microfluidic devices (Figure 1a[iv]).
Figure 1. An overview of the new manufacturing pipeline to produce MEAM-enabled microfluidic devices. a) Direct control of the 3D printer enables (i) implementing complex passive mixer designs using the CONVEX design approach to (ii) fabricate a single layer of ABS filament with a defined cross-sectional area. (iii) The MEAM microchannels were embedded into a PDMS matrix to be cured. (iv) The cured PDMS microfluidic devices were flushed with acetone to dissolve the ABS. b[i–iii]) The CONVEX design approach achieves highly reproducible microchannels with widths ranging from 100 to 400 μm (<10% deviation from design widths) and (iv) low surface roughness. c[i–iv]) All four passive mixer designs can serve as modular units to provide flexibility and scalability with high mixing performance, based on (v) experimental, (vi) computational fluid dynamics (CFD) and (vii) real-time imaging data. d) The potential applications of the MEAM-enabled microfluidic device in regenerative medicine: (i) in situ mixing of two fluids to extrude solid fibers, (ii) formation of uniform droplets, (iii) multi-material extrusion with dynamic control of channel widths and (iv) fabrication of anisotropic multi-layer structures with defined diameters
and shapes. 30 wt.% Pluronic was extruded through a 400 μm channel to fabricate a 2D grid-like structure, demonstrating the potential of the MEAM-enabled microfluidic device in 3D bioprinting.
Key advantages of this novel MEAM-enabled microfluidic chip include its adaptability and ease of use, enabling complex structures to be produced with minimal effort. A key breakthrough enabled by the CONVEX design approach is the ability to create complex channel profiles with constant and/or variable widths, ranging from 100 μm to 1 mm (Figure 1b,c), with high reproducibility and, more importantly, <10% deviation from the design widths. In addition, a surface roughness two orders of magnitude lower than the typical values [14] reported for material extrusion additive manufacturing platforms has been achieved here (Figure 1b[iv]; Figures S1,S2, Supporting Information). This achievement is primarily due to the total control over print-path planning within the CONVEX approach. The five complex 2D and 3D passive mixer geometries (Figure 2c; Figure S4, Supporting Information) are modular and can therefore be repeated numerous times over arbitrary lengths in various orientations and orders, making them versatile and attractive for numerous applications. This feature lends itself to the rapid development of microfluidic devices at a fraction of the cost of traditional lithographic methods. Previous studies [2,30] highlighted that the high surface roughness induced by the MEAM process leads to irregularities that cause turbulent flow, thus directly influencing mixing performance. To this end, the low surface roughness delivered by the single-layer printing strategy had minimal influence on the mixing index calculations, allowing an investigation of the effect of passive mixer design on mixing efficiency. The mixing efficiency of four different passive mixers (i.e., zigzag, V-zigzag, hex and diamond) was evaluated by flowing two fluids (water dyed blue and yellow) and calculating the mixing index values (Figures 1c[vi], 3; Figure S5, Supporting Information). The V-zigzag passive mixer afforded complete mixing at the shortest distance that has been reported with such devices (Figure 3; Figure S5, Supporting Information). The mixing efficiencies of the V-zigzag and straight-channel devices were further evaluated using CFD (Figure 1c[vii]) and light-sheet fluorescence imaging (Figure 1c[viii]). The variable-width feature of the V-zigzag micromixer achieves better mixing of fluids at higher flow rates (100–1000 μL min−1) than channels with constant width by decelerating the fluids at the wide sections of the channel, allowing fractionally longer time (1% higher) for diffusive mixing. The high mixing efficiency and versatility offered by the fluidic chips allowed the fabrication of a range of complex structures for application in regenerative medicine (Figure 1d), including: 1) Solid fibers: in situ mixing of sodium alginate with calcium chloride to extrude fibers (Figure 1d[i]). 2) Droplets: uniform droplets were formed as water-in-oil microspheres; these chips could be readily adapted for single-cell analysis, materials synthesis and chemical reactions, depending on the specifications and requirements.
[5,46] 3) Hybrid multi-material fibers: different fluids could be flowed via flow-focusing components to produce hybrid fibers with varying widths, which can be modified dynamically (Figure 1d[iii]). 4) Hollow and core-shell fibers: fibers with complex core-shell and hollow architectures, with defined diameter, length and geometry, to replicate the complexity of blood vasculature (Figure 1d[iv]). To further demonstrate the widespread applicability of this newly developed MEAM-enabled microfluidic device in 3D bioprinting, a grid-like structure was 3D-printed (printing time of 1 min 35 s) by extruding 30 wt.% Pluronic through the microfluidic device with a 400 μm channel width. In the following sections, the structural and fluidic properties of the devices and their cytocompatibility are discussed.
Design Freedom Offered by CONVEX
Slicer software is typically employed in the MEAM 3D printing pipeline to generate the print-head travel pattern that fills a CAD-model volume with stacked extruded filaments of constant diameter. The variable-width zigzag mixer (V-zigzag, Section 2.3) cannot be fabricated with the travel movement set by the slicer software (Figure 2a[i], red lines). To further demonstrate the limitations of the slicer approach, the diamond passive mixer was produced using both the slicer (Figure 2a[iii]) and CONVEX (Figure 2b[iii]) approaches. The difference is a result of CONVEX's ability to explicitly design the toolpath travel at sub-filament scale and precisely vary the geometry of each filament over its entire length (Figure 2b). The resulting 3D-printed part (Figure 2b[i,ii]) demonstrates the capacity of continuous extrusion to create specific regions on the zigzag channels that are 1.5× wider than the rest of the channel. This change in width along the channel was readily achieved by simultaneously controlling the extrusion rate and printing speed, which allows the polymer to spread to the desired width while the height of the filament is kept constant. The CONVEX-enabled 3D-printed parts demonstrate high fidelity (Figure 2c), with a linear relationship between design and actual widths. For the 200 and 400 μm channels, the variation between design and actual widths was <8%. This difference increased to 9% when 100 μm channels were printed. In comparison, a 50% difference was observed with print outcomes generated with the slicer-based approach. [2,5,14] The large variation in channel width obtained with slicer-software-assisted prints is due to under- and/or over-extrusion. [43,44] Furthermore, print success rates (based on printing accuracy and quality) for the microfluidic devices were 95% with the CONVEX approach and <70% with slicer software. Thus, microfluidic devices derived from CONVEX will yield better outcomes and more reproducible mixing, leading to reliable scaling and wider adoption. The surface roughness (Ra) of 3D-printed channels for microfluidics is often reported to be high (10.91–11.41 μm) due to the layer-wise production of MEAM microfluidics based on slicer software. [2,45]
In contrast, our CONVEX-deployed approach resulted in low Ra values (V-zigzag Ra = 0.16 ± 0.02 μm; Figure 2d; Figures S1,S2, Supporting Information), close to those of lithographic approaches (0.065–0.10 μm). These findings are significant since one of the major current limitations of slicer-based microfluidic devices is the seams resulting from the layer-wise manufacturing approach, which can limit the optical performance of structures [14] and, more importantly, lead to material accumulation and particle/cell sedimentation within the channels. [43] To demonstrate the feasibility of further reducing the surface roughness, exposure of the MEAM channels to acetone was explored (see Figures S2,S3, Supporting Information), with the treatment producing surface roughnesses of 0.16 ± 0.07, 0.17 ± 0.05, 0.13 ± 0.04 and 0.14 ± 0.02 μm for the hex, diamond, zigzag and V-zigzag designs, respectively. This section demonstrated the capacity of CONVEX to produce microfluidic devices with high accuracy and complexity in comparison to the slicer-software-based approach (Figure 2). Thus, characterization of the microfluidic functional properties was performed only on the devices fabricated with the CONVEX approach.
Novel Variable-Width Zigzag Mixer Gives Efficient Mixing: Experimental and Simulation Studies
Increasing efforts are being directed towards developing new ways in which mixing efficiency can be enhanced. We examined the effect of precisely varying the channel width along the fluid flow (V-zigzag) on mixing efficiency compared to conventional designs, with a straight channel as a control. To this end, water dyed blue and yellow was flowed through the channels to calculate the mixing index as a function of distance for the four mixer designs and the straight channel, with flow rates ranging from 1 to 1000 μL min−1 (Figure 3; Figure S5, Supporting Information). The V-zigzag design outperformed the others, achieving complete mixing in the shortest distance (15 mm, compared to 40 mm for the other designs) over a broad range of flow rates (Figure 3d; Figure S5, Supporting Information). Irrespective of Reynolds number, and across flow rates spanning three orders of magnitude, the V-zigzag effected complete mixing (Figure 3d), highlighting the influence of the designed wide segments on mixing efficacy, even though the lower surface roughness of MEAM-enabled microfluidic devices was previously [2,4,13,30] reported to negatively affect mixing performance. In comparison, the straight-channel device (Figure 3a) had its highest mixing index of 91% at the lowest flow rate of 1 μL min−1, due to the increased time for diffusion, as previously reported by Karthikeyan et al. [44] Increasing the flow rate progressively reduced the time for diffusion. For the other designs (e.g., diamond, hex and zigzag), three scenarios were observed. First, in the region with 0.047 < Re < 1.19, complete mixing was achieved with no significant variation between the designs. These results suggest mixing is heavily influenced by diffusion and less by geometry. At higher Re numbers, mixing performance progressively reduced due to the shorter diffusion time for the zigzag (by 42%), hex (by 18%) and diamond (by 36%) designs, with a transition point observed consistently around Re = 3. When the flow rate was increased beyond this transition point, the mixing performance of the diamond design continued to worsen (reduced by a further 18.1%). These observations are consistent with earlier studies [14,45,46] that highlight the adverse effect of high flow rates on the mixing time of two fluids.
By contrast, the other passive mixer designs show a recovery in mixing efficiency, suggesting these channel geometries can be used to promote mixing. A recovery of mixing performance was previously reported by Tsai et al., [12] who also identified a transition point (around Re = 15) for zigzag channels fabricated by lithography with a channel width of 100 μm. These results confirm that the V-zigzag passive mixer gives complete mixing, possibly by decelerating the fluid at the wide sections of the channel to allow time for diffusion. This was also the case when dissimilar fluids were flowed together, leading to mixing of sodium alginate and CaCl2 solutions (Figure 3d) or complete mixing of fluids of different viscosity, such as water and glycerol (0.001 vs 1.41 Pa·s; Figure 3e). The V-zigzag design outperformed existing state-of-the-art MEAM microfluidic devices even at high flow rates (100–1000 μL min−1), where achieving complete mixing has proven challenging for other MEAM systems (15 mm in the present study versus 24–197 mm in other studies with comparable channel widths). [2,4,13] Computational fluid dynamics (CFD) simulations were performed to understand the underlying mechanism for the rapid mixing properties of the V-zigzag compared to the zigzag design (Figure 4). The experimentally determined mixing index values (Figure S5, Supporting Information) were validated by CFD simulations (Figure S6, Supporting Information). The velocity streamlines for the V-zigzag and zigzag designs at a flow rate of 50 μL min−1 are shown in Figure 4a,b; Figure 4c shows a parabolic velocity profile across the width of both channels, with the fluid travelling 2× slower within the center of the V-zigzag design than in the constant-width zigzag design. The slowing of the fluid is to be expected, with the velocity decreasing to maintain the same volumetric flow rate across the widening channel. We hypothesize that fluid moving more slowly within this wider region leads to a longer total travel time, allowing greater diffusion and, therefore, enhanced fluid mixing. Light-sheet fluorescence microscopy (LSFM) was employed to image the flow profile within the channels. In LSFM, the illumination and detection paths are arranged in a 90° configuration, as shown in Figure 4d. This method enables optical sectioning at high speed, providing high spatial and temporal resolution at the same time. [49–51] For our purpose, a custom-made LSFM was used (Figure S7, Supporting Information) to image the V-zigzag channels at 100 Hz while flowing water and 7 μm fluorescent beads (Figure 4f–h). The beads were detected and tracked with the Fiji plugin TrackMate, [52,53] and their tracks are shown in Figure 4f–h, where the color-coding represents the bead speed. Within the wide sections of the V-zigzag mixer, beads appear as streaks (Figure 4f) because the speed of flow exceeds the ratio of the field of view to the exposure time. Near the channel walls the beads are more clearly resolved, highlighting that the beads travel at a slower velocity in these regions. The reconstructed bead trajectories, color-coded according to the mean fluid velocity, are shown in Figure 4h. Beads at the center of the capillary exhibit the highest velocities and a parabolic velocity profile matching that of the CFD data for the V-zigzag. The velocities across the width calculated from CFD (2.1 to 10.5 mm s−1) are also similar to those measured from LSFM imaging (Figure 4h). This, once more, supports the hypothesis that the better mixing performance of the V-zigzag design is caused by the slowing
down of fluid in the wide sections. Although a larger channel width would also slow the velocity and increase the time for diffusion, complete mixing would then require diffusion across a larger radial distance, which takes much longer, so long channels might be required. Consequently, larger channels with constant width are likely to have the opposite effect on rapid mixing. Furthermore, LSFM imaging shows that the bead trajectories change significantly on entering and exiting the wide sections of the V-zigzag (Figure 4h), suggesting that fluid mixing may also occur via mechanisms other than diffusion alone. A summary of the mixing index, the geometrical and physical properties and the average cost of production (cost per device and cost of printer) of the V-zigzag microfluidic device and existing AM microfluidic devices in the literature [2,4,13,30] is presented in Table 1. Our study outperforms in almost all categories (i.e., mixing efficiency, surface roughness, complexity and affordability) whilst also being transparent. Most studies that have demonstrated complete mixing only measured the mixing properties at low flow rates (<1000 μL min−1) for large channel widths (600–900 μm). As a result, one of the motivations of this study was to produce microfluidic devices with dimensions as close as possible to those of conventional devices (i.e., 100–400 μm). More importantly, it is critical to ensure that the 3D-printed microfluidic device can operate at both high and low flow rates for various applications, including microfluidic filtration and cell culture, respectively. [57] Thus, a single microfluidic device with the capacity to operate over a wide range of flow rates is advantageous.
Fabrication of Biomimetic Helical-Core Hydrogel Microfibers
There is growing interest in developing bioinspired helical structures to mimic tissues such as the spiral arteries of the placenta and kidney. However, their fabrication at the micro-scale with precise control of size remains a challenge. Here, the CONVEX approach enabled the fabrication of microfluidic devices containing a flow-focusing element (Figure 5a,b) to produce hollow helical fibers (Figure 5). The flow-focusing component enabled the core fluid (2 wt.% CaCl2 in 30 wt.% Pluronic) to be concentrated in the middle of the microfluidic channel between the shell fluids (2 wt.% sodium alginate) introduced through either side of the core. At the interface of the two fluids, contact-gelation of sodium alginate occurred via divalent cationic crosslinking, whilst the majority of the sodium alginate within the microfluidic device remained in solution and was only crosslinked once extruded into a 10 wt.% CaCl2 bath. After extrusion, the core
fluid is removed to give hollow fibers. By systematically varying the core and shell flow rates, a range of core architectures could be produced (Figure 5d–g) with varying fiber widths (Figure 5i). In particular, the switch in core architecture from straight to wavy to helical was found to relate to the fiber width ratio (W_lumen/W_fibre) and the flow rate ratio (Q_core/Q_shell) (Figure 5j). The high-resolution 3D image of a core-shell fiber (Figure 5h) confirmed that the helical core is in fact connected and hollow (see Supporting Video S1). In addition, both the width ratio (Figure 5j) and the helical pitch distance (Figure 5k) decreased as the flow rate ratio increased, resulting in the formation of hollow fibers with smaller lumens. These results are consistent with those from a recent study by Jia et al., [10] who used a set of glass capillaries as microfluidic chips to produce 500–600 μm helical fibers with core widths of 130–480 μm. Our study delivered a higher degree of control over the width of the core fibers, generating core diameters as small as ≈50 μm, which is on the scale of blood arterioles. [55]
Process-Induced Cell Damage in the Microfluidic Device
The success of 3D bioprinting relies on preventing, or at least minimizing, mechanical damage to cells as they are extruded through narrow channels (Figure 6a). Such damage may be exacerbated within V-zigzag microfluidic devices due to the directional changes the cells undergo as they enter and exit the wide sections of the channels. To this end, cell viability was assessed after extruding cell-laden hydrogels through the various microfluidic devices at speeds employed in 3D bioprinting. Three V-zigzag microfluidic devices with channel widths of 100, 200 and 400 μm, each with a length of 20 mm, were prepared (Figure 6b). Cell (SaOS2) viability was assessed using a LIVE/DEAD assay (Figure 6c,d). [51] Viability comparable to the control (i.e., cast, non-extruded GelMA) was observed when cells were flowed through the various microfluidic devices at a flow rate of 10 mL h−1 (Figure 6e). The mean cell viability values after 3 h for the control and the 100, 200 and 400 μm microfluidic devices were 94.9 ± 2.2%, 91.7 ± 4.7%, 91.6 ± 3.2% and 89.2 ± 2.2%, respectively. Statistical analysis showed no significant (p > 0.05) difference between the groups. After 24 h, cell viability reduced to 90.2 ± 1.1%, 88.0 ± 3.4%, 86.7 ± 4.3% and 86.9 ± 1.4% for the control and the channel widths of 100, 200 and 400 μm, respectively. The high cell viability observed 3 and 24 h after printing demonstrates the suitability of the V-zigzag design as a microfluidic printhead for 3D bioprinting. These results are consistent with those from a recent study by Han et al., [56] where higher cell viability values were reported for narrower channel widths. Han et al. [59] proposed that shear stress is linearly proportional to nozzle diameter. Additional experimental studies are necessary to establish a fundamental understanding of the effect of nozzle diameter on cell survival.
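To give a feel for how channel width and flow rate trade off against shear-induced cell damage, the sketch below estimates the wall shear stress in a rectangular channel. It is a rough illustration, not from the paper: it assumes a Newtonian fluid, the simple slot approximation τ_wall ≈ 6μQ/(wh²), and a hypothetical effective viscosity for the bioink (GelMA is in fact shear-thinning, so its effective viscosity varies with shear rate).

```python
# Rough, illustrative estimate (not from the paper) of wall shear stress in
# a rectangular channel, to show how nozzle width and flow rate trade off.
# Assumptions: Newtonian fluid; slot approximation tau_wall ~ 6*mu*Q/(w*h^2);
# the bioink viscosity value below is hypothetical.

def wall_shear_stress(q_ml_per_h: float, width_um: float, height_um: float,
                      mu_pa_s: float = 1.0) -> float:
    """Return approximate wall shear stress in Pa."""
    q = q_ml_per_h * 1e-6 / 3600.0     # mL/h -> m^3/s
    w = width_um * 1e-6                # channel width, m
    h = height_um * 1e-6               # channel height, m
    return 6.0 * mu_pa_s * q / (w * h * h)

# Shear at the 10 mL/h printing flow rate for the three channel widths,
# assuming a 300 um channel height and mu = 1 Pa.s:
for width in (100, 200, 400):
    tau = wall_shear_stress(10, width, 300)
    print(f"width {width} um, 10 mL/h: tau ~ {tau:.0f} Pa")
```

Under these assumptions the stress scales inversely with width at fixed height, consistent with the intuition that narrower nozzles expose cells to higher shear; the actual relationship for a shear-thinning bioink would need the rheology data the authors call for.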
Next, the effect of flow rate (1 to 60 mL h−1) on cell viability was assessed using the 400 μm channel microfluidic device (Figure 6f). Cell viability after 3 h reduced from 93.5 ± 1.0% at 1 mL h−1 to 75.1 ± 3.4% at 60 mL h−1 (a 19.6% reduction), which is nevertheless still above the ISO 10993 limit (70%). However, after 24 h, the viability of cells pumped through the microfluidic device at 60 mL h−1 had decreased further and significantly (p < 0.05) to 62.8 ± 8.1%, which is below the threshold set by ISO 10993. From these data, a threshold value of 40 mL h−1 was identified for the given nozzle width of 400 μm, beyond which cell viability values are unacceptable. The trend observed for varying flow rates is consistent with previous studies, [54,59] highlighting the importance of printing parameters for cell viability.
Conclusions
This study demonstrates versatile microfluidic devices capable of delivering fibers with complex architectures and compositions by overcoming current challenges in resolution and in the efficient mixing of two fluids. Architectures such as solid fibers and core-shell hollow fibers with helical and wavy cores were fabricated. The devices are produced using an inexpensive manufacturing pipeline employing the continuously varied extrusion (CONVEX) design approach to deliver novel variable-width zigzag (V-zigzag) passive mixer designs and flow-focusing components not currently possible by conventional approaches. The CONVEX approach overcomes the limitations of conventional slicer-based approaches in terms of design freedom, resolution and surface finish; the microfluidic devices produced are thus comparable to lithographically produced devices while offering additional benefits in cost, widespread usability and adaptability. Experimental and computational fluid dynamics assessment of fluid flow within the V-zigzag design showed rapid mixing of two fluids over a wide range of flow rates, in contrast to conventional mixer designs (hexagonal, diamond and zigzag), which exhibited diminished performance at higher flow rates. Complete mixing within the V-zigzag design was found to be due to the deceleration of fluids within the wider regions of the zigzag, allowing more time for the fluids to diffuse without turbulence. The potential of the V-zigzag microfluidic device as a 3D bioprinting printhead was demonstrated by the high viability (>86%) of cells 3 and 24 h after extrusion, opening up new opportunities for the application of MEAM-enabled microfluidic devices in regenerative medicine. The bottleneck for producing channels below 100 μm is the size of the nozzle in MEAM. Future studies should therefore focus on fabricating structures with linewidths below 100 μm using the CONVEX design approach to determine the limits of this method and push the state of the art towards capillary-level printing. Employing CONVEX to produce microfluidic devices with 3D configurations is also expected to enable new opportunities and should be explored.
Experimental Section
Materials: White ABS filament (Raise3D Premium ABS) with a 1.75 mm diameter was used to manufacture the MEAM channels. Sylgard 184 and its curing agent (Dow Corning) were used as the matrix to embed the ABS channels. Acetone (analytical purity 99.6%) from Fisher Scientific was used for chemical treatment. A Tygon tubing kit for microfluidics was supplied by Darwin Microfluidics.
Additive Manufacturing Process: A Creality Ender 3 V2 MEAM system with a 400 μm nozzle diameter was used to extrude a continuous single layer of ABS filament. Custom GCode commands (a series of commands controlling the MEAM printer) were generated using the open-source FullControl GCode designer software [26] with the set printing parameters (Table 2). Y-channels (60 mm long) were printed with the dimensions schematically illustrated in Figure S9 (Supporting Information). Four passive mixer designs were manufactured and named as follows: zigzag (constant-width zigzag, Figure S9a, Supporting Information), V-zigzag (variable-width zigzag, Figure S9b, Supporting Information), hex (hexagonal mixer, Figure S9c, Supporting Information) and diamond (diamond mixer, Figure S9d, Supporting Information). To ensure seamless structures for the hex and diamond mixers, the toolpath was defined by movement of the nozzle in continuous loops to fill in the structures. The toolpath for both the zigzag and V-zigzag was designed according to y = A sin(2πx/λ), where A (amplitude) = 1.5 mm and λ (wavelength) = 3.3 mm. These values for A and λ showed the best mixing index in a previous study by Khosravi Parsa et al. [57] Both the zigzag and V-zigzag had the same toolpath, with the exception of the printing speed, which was intentionally varied for the latter to enable microscale width changes along the channel at the designed areas, as shown in Figure S9b (Supporting Information). The sides of the Y-channels were cut using a razor blade to create the inlets for the solutions. To investigate the transferability of the technology to smaller cross-sections for microfluidic devices, a 100 μm nozzle diameter was used to manufacture the same designs but with channel widths of 100 and 200 μm. The printing speed was reduced by half (100 mm min−1) to ensure the nozzle did not block; the remaining printing parameters were kept the same as those tabulated in Table 2.
Fabrication of MEAM-Enabled Microfluidics: The passive mixer regions of the fabricated channels (Figure S10a, Supporting Information) were either used directly or exposed to a droplet of acetone (analytical purity 99.6%) using a micropipette at room temperature (RT: 20 °C) for 10 s to reduce the surface roughness caused by nozzle movements [Figure S10b(i), Supporting Information]. Preliminary studies (Figure S9, Supporting Information) carried out to select the optimum exposure time (from 5 to 60 s) identified 10 s as the best time to achieve a smooth and stable structure without losing structural integrity. [58]
After chemical treatment, the channels (n = 3 per group) were exposed to compressed air at a pressure of 10 psi for 15 min, at a vertical distance of 10 mm from the channels, to remove residual acetone [Figure S10b(ii), Supporting Information] under laboratory conditions (20 °C and 50% relative humidity). Sylgard 184 and its curing agent were prepared at a 10:1 ratio according to the manufacturer's recommendations. The PDMS mixture was de-gassed in a vacuum chamber for 30 min to remove bubbles from the solution and then poured into a custom-made mould and cured for 10 min at 70 °C [Figure S10b(iii), Supporting Information]. When the PDMS was semi-cured (i.e., able to support the ABS channel on its surface), the treated MEAM channel was placed on top of the semi-cured PDMS layer and then covered by a new layer of PDMS. The resulting assembly was cured for 2 h at 70 °C until fully set [Figure S10b(iv), Supporting Information]. The cured PDMS was flushed with acetone to dissolve the ABS channels [Figure S10c(i), Supporting Information]. The final MEAM microfluidic device had a thickness of 2 mm. To confirm print reliability, the average extruded filament width for 10 channels before and after acetone treatment was measured using a digital calliper. Finally, the mixing index of the different passive mixers was measured.
Scanning Electron Microscopy: A Zeiss EVO MA10 scanning electron microscope (SEM) was used to obtain micrographs of the manufactured channels (n = 3) before and after chemical treatment.
Surface Roughness of Channels: An Alicona G5 focus variation microscope (Bruker, Germany) was employed to capture 3D scans of the Y-channels (n = 3) before and after acetone treatment to quantify surface roughness. In this technique, topographical information is provided by a combination of vertical scanning and focusing of the optical system at different depths (focus-variation technique). [59] Scans were acquired at 20× magnification at specific locations where nozzle movements caused a poor surface finish. Scans were post-processed using Mountains Premium 7.4 software (Digital Surf, France) to create color-height maps of the surface. The average surface roughness (Ra) from three replicates was calculated along and perpendicular to the direction of fluid flow.
Mixing Index Measurement: The effect of the geometry of the passive mixer on the mixing index of the microfluidic devices was investigated by pumping blue and yellow dyes into the two inlets of the microfluidic device using a dual syringe pump (IPS-14RS, Inovenso) from 5 mL disposable plastic syringes (Figure S11a, Supporting Information). The blue and yellow solutions were prepared by diluting food-grade dye in deionized water (DI) at a ratio of 1:25 according to Mahmud et al. [45] The viscosity and density of the solutions were assumed to be similar to those of DI water. The typical flow rates for MEAM microfluidic devices are between 5 and 100 μL min−1, [5,14,30] so a broad range from 1 to 1000 μL min−1 was used. The Reynolds number (Re) was quantified according to the method reported by Tsai et al. [12] and varied from 0.047 to 47.
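The quoted Reynolds number range can be sanity-checked with a hydraulic-diameter estimate. The sketch below is our own check, using the standard definition Re = ρvD_h/μ for a 400 × 300 μm rectangular channel filled with water; the formula choice is an assumption, since the text only cites the method of Tsai et al. [12].

```python
# Illustrative check (assumed formula, not the authors' script) of the
# Reynolds number range quoted above, using Re = rho*v*D_h/mu for a
# 400 x 300 um rectangular channel and water.

RHO, MU = 998.0, 1e-3                  # kg/m^3, Pa.s (water)
W, H = 400e-6, 300e-6                  # channel width and height, m
AREA = W * H                           # cross-section, m^2
D_H = 2 * W * H / (W + H)              # hydraulic diameter, m

def reynolds(q_ul_per_min: float) -> float:
    q = q_ul_per_min * 1e-9 / 60.0     # uL/min -> m^3/s
    v = q / AREA                       # mean velocity, m/s
    return RHO * v * D_H / MU

print(reynolds(1), reynolds(1000))     # ~0.047 and ~47, matching the text
```

With these assumptions, 1 and 1000 μL min−1 give Re ≈ 0.047 and ≈47, reproducing the range reported in the paper and confirming that the flows span the deep-laminar regime.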
Stainless steel needles (23 gauge) were used to connect Tygon tubes (1/16″ outer diameter × 0.51 mm inner diameter) to the MEAM microfluidic device. A Zeiss Primotech microscope at 4× magnification was used to capture a series of images at distances of 0, 10, 15, 20, 25 and 40 mm from the junction along the Y-axis (see Figure S11b). To ensure direct comparison across all data, the microscope settings and ambient light were kept constant throughout the experiment. The mixing index evaluation was based on the RGB values of the pixels in the region of interest (ROI) in the captured micrographs. The ROI was set as a 380 μm × 380 μm square inside the channel. A square smaller than the full 400 μm channel width was chosen to avoid introducing experimental error from the shadow of the channel walls in the images. For each analysis, three ROIs were used to calculate the mean mixing index. The images were post-processed (Figure S11b, Supporting Information) to quantify the mixing index using a Python script (see Supporting Information) adapted from the MATLAB code by Mahmud et al. [45] The mixing index was quantified (n = 3) by classifying each pixel as mixed or unmixed from its RGB values and computing

Mixing index = N_mixed / (N_mixed + N_unmixed)    (1)

where N_mixed and N_unmixed are the numbers of pixels classified as mixed and unmixed, respectively (the RGB classification rules correspond to Equations (2) and (3) of the original work). The mixing index ranged from 0.0 to 1.0, representing the worst and the best mixing performance, respectively.
Computational Fluid Dynamics Analysis: ANSYS software (version 2021) was used to calculate the laminar mixing of fluids along the zigzag and V-zigzag microfluidic devices in 2D, with the same dimensions as outlined in Figure 6a,b and a no-slip condition at the wall. The laminar flow was simulated based on the reduced conservation of mass equation

∇·v = 0    (4)

and the reduced Navier-Stokes equation as the conservation of momentum (transient and inertial terms are negligible) [33]

∇p = ∇·τ + ρg    (5)

where g is gravity and τ is the stress tensor, described as

τ = μ(∇v + ∇vᵀ) − (2/3)μ(∇·v)I    (6)

where v, p, ρ, μ and I are the velocity vector, pressure, density of the fluid, viscosity and unit tensor, respectively. Both liquids were assumed to be the same as water and incompressible, with a viscosity and density of 1 mPa·s and 998 kg m−3, respectively. The concentrations of the top and bottom inlets were set as mass fractions of 1 and 0, respectively. Gravity was ignored in all cases except for the experimental comparison test (LSFM). ANSYS calculates the diffusive flux of each species following Fick's law of diffusion

J = −ρ D_m ∇Y    (7)

where J is the diffusive flux, ρ is the density, D_m is the mass diffusion coefficient for each species and Y is the mass fraction of each species. The temperature gradient was ignored, as both species were at the same temperature (300 K). The two species were both treated as water and could interact with each other, and the mass diffusion constant used was 2.3 × 10−9 m² s−1, [60] the self-diffusion coefficient of water. For the meshing of the simulated geometries, an iterative increase in the number of nodes showed that 360,422 and 372,225 nodes were sufficient for the zigzag and V-zigzag, respectively. The velocity streamlines for both microfluidic devices were examined for a range of flow rates (see Mixing Index Measurement). The mixing index (MI) was quantified using

MI = 1 − σ/σ_max    (8)

where σ is the standard deviation of the concentration of one selected species within a cross-section and σ_max is the standard deviation at the entrance of the mixing channel.
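A minimal sketch of the image-based mixing index is given below. The paper's exact RGB classification rules (Equations (2) and (3)) are not reproduced in this text, so the tolerance-based rule and the reference "mixed" colour here are assumptions chosen for illustration; only the counting step, Equation (1), follows directly from the description above.

```python
import numpy as np

# Minimal sketch of the image-based mixing index described above. The exact
# RGB classification rules (Equations (2)-(3)) are not reproduced in this
# text, so the tolerance rule below is an assumption: a pixel counts as
# "mixed" if its colour is close to the expected fully mixed colour.

def mixing_index(roi_rgb: np.ndarray, mixed_rgb=(90, 160, 80), tol=40.0):
    """roi_rgb: (H, W, 3) array for a region of interest inside the channel.
    mixed_rgb: expected colour of the fully mixed (blue + yellow) fluid,
    a hypothetical reference value; tol: Euclidean RGB distance threshold."""
    dist = np.linalg.norm(roi_rgb.astype(float) - np.array(mixed_rgb), axis=-1)
    n_mixed = int((dist <= tol).sum())
    n_total = dist.size
    return n_mixed / n_total  # Equation (1): N_mixed / (N_mixed + N_unmixed)

# Example with a synthetic half-mixed ROI:
roi = np.zeros((10, 10, 3), dtype=np.uint8)
roi[:5] = (90, 160, 80)    # mixed (greenish) half
roi[5:] = (0, 0, 255)      # unmixed blue half
print(mixing_index(roi))   # -> 0.5
```

In practice the reference colour and tolerance would be calibrated from micrographs of fully mixed and fully unmixed fluid, and the index averaged over the three ROIs as described above.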
Light Sheet Fluorescence Microscopy (LSFM) Imaging: A custom-made LSFM with single-sided illumination and detection was used to visualize the flow in the fluidic channel (Figure S7, Supporting Information). A single-mode, fiber-coupled laser emitting at 442 nm (MDL-III-442, CNI) was first collimated and then shaped into a thin sheet of light via a cylindrical lens (fCL = 50 mm). The focal plane of the cylindrical lens was conjugated to the back focal plane of a 10× water-immersion objective (Olympus UMPLFLN 10XW/0.3) through a 1× telescope (f1 = f2 = 50 mm). The resulting vertical light sheet, with a 2.6 μm waist, was then matched to the focal plane of the detection objective (Olympus UMPLFLN 10XW/0.3), which was held orthogonal to the illumination axis by the imaging chamber. The collected image was then sent through an emission filter and a tube lens and was recorded by an sCMOS camera (Neo 5.5 sCMOS, Andor). The chip was held vertically in the imaging chamber, which was filled with water, and could be moved with a motorized translation stage (PI M-405.CG). The chip was coupled to a syringe pump (KDS-410-CE, KD Scientific) through a tube, pushing water mixed with a low concentration of 7 μm-diameter fluorescent beads (dilution factor of 1:1000, FP-7052-2, Spherotech) at 50 μL min−1. To observe the beads flowing along a single plane, the exposure time of the camera was set to 5 ms while the camera recorded at 100 Hz. The Fiji plugin TrackMate [7,8] was used for image analysis as described in [6].
Gelatin Methacryloyl and Sodium Alginate Hydrogel Preparation: Gelatin methacryloyl (GelMA) hydrogel was prepared at 10% (w/v) by dissolving lyophilized GelMA (Claro, PB Leiner, Belgium) in McCoy's medium containing 0.5% (w/v) of the photoinitiator lithium phenyl-2,4,6-trimethylbenzoylphosphinate (LAP). Photo-curing of GelMA was achieved using a 405 nm blue lamp for 2 min held at a vertical distance of 15 cm. To demonstrate the on-the-fly mixing and co-axial extrusion capabilities of the novel microfluidic device, 2 wt.% sodium alginate and 0.5 wt.% calcium chloride solutions were prepared and pumped through the Y-channels at a flow rate of 800 μL min−1.
Helical Microfiber Formation: To demonstrate the applicability of the newly developed microfluidic system, single-layer helical microfibers (n = 3) were formed by assembling two cylindrical channels using the CONVEX design approach to enable co-axial extrusion. The inner channel (core layer) width was 0.4 mm, while the outer channel (shell layer) width was 1 mm. The core fluid was 2 wt.% calcium chloride in a 30 wt.% Pluronic solution, and the shell fluid was a 2 wt.% sodium alginate solution. All fluids were pumped by the syringe pumps into a 10 wt.% calcium chloride bath. The collected fibers were assessed microscopically using a Zeiss microscope.
Cell Viability: The human osteosarcoma cell line SaOS2 (ATCC, USA) was used to assess cell viability using a LIVE/DEAD™ kit for mammalian cells (ThermoFisher). SaOS2 cells were cultured in T75 flasks according to the protocol described in [51].
10% (w/v) GelMA hydrogel was mixed with SaOS2 cells and gently agitated to achieve a final concentration of 2 × 10⁶ cells mL−1. The bioink was extruded through the V-zigzag at a constant flow rate of 10 mL h−1 with channel widths varying from 100 to 400 μm, and the extruded material was collected in a 12-well plate. GelMA bioinks were photocured as described in Gelatin Methacryloyl and Sodium Alginate Hydrogel Preparation. The GelMA bioink was also extruded through the 400 μm channel width at a range of flow rates: 1, 5, 10, 20, 40 and 60 mL h−1. Cast and then cured GelMA (not extruded through a fluidic channel) was used as the control group. Cell viability of all groups was measured after 3 and 24 h following the manufacturer's recommendations. [51] A Zeiss LSM 700 confocal microscope was used to take images. Cell viability was calculated (n = 3) as the ratio of live cells (stained green) to total cells using Fiji. [51]
Statistical Analysis: The data obtained are expressed as means ± standard deviation. Statistical analyses were performed using the Analysis ToolPak in Excel (2016), including one-way analysis of variance (ANOVA) and subsequent t-tests at a significance level of p < 0.05.
Figure 2. Manufacturing workflow for fabrication of the V-zigzag design using a) conventional slicing software and b) the CONVEX design approach. Parts manufactured with the conventional slicer (a) showed defects and no variable-width deposition [ii], and voids [iii], whereas the CONVEX approach (b[ii,iii]) showed defect-free and variable-width extrusion. c) Actual mean extruded filament width of all designs, with channels ranging from 100 to 400 μm, against design widths showed <10% deviation (mean values calculated from 10 replicates), compared to large variation (50%) for slicer-based parts. d) Mean surface roughness (Ra) values measured for all four designs. The zigzag designs had significantly (* p < 0.05) lower surface roughness than the hex and diamond designs. Error bars indicate standard deviation.
Figure 3. Schematic of straight channel a), V-zigzag b) and zigzag c) designs comparing the fluid mixing along the channel at 0 mm [i], 10 mm [ii] and 15 mm [iii] from the junction when blue and yellow fluids are mixed together at a flow rate of 50 μL min−1. d) Evolution of the mean mixing index for the various passive mixer designs and the straight channel versus flow rate and Reynolds number at a distance of 20 mm from the junction. The mixing index values for the zigzag, diamond and hex designs were dependent on the Reynolds number, with the lowest mixing performance measured at Re = 3. The mixing performance of the straight channel progressively reduced with increasing Re. By contrast, the V-zigzag achieved complete mixing at all flow rates. e) Optical micrographs of the V-zigzag indicate complete mixing after 15 mm for mixing 2 wt.% sodium alginate with 100 mM calcium chloride. f) Same as (e) but with glycerol and water.
Figure 4.
Velocity streamlines predicted by computational fluid dynamics at a flow rate of 50 μl min−1 for a) V-zigzag and b) zigzag, and c) their corresponding velocity profiles along the normalized distance. The V-zigzag design shows an approximately two-fold reduction in velocity within the wider region compared with the constant-width channels. d) Light sheet fluorescence microscopy (LSFM) setup used to analyze fluid flow through the V-zigzag at 50 μl min−1. e) Optical micrograph showing the wider region used to image the fluid flow. f) Maximum intensity projections with temporal color-coding of the V-zigzag while flowing beads with water. The direction of the flow is highlighted by the white arrows. g) Image obtained while extruding water containing beads. h) The Laplacian of Gaussian (LoG) spot detection algorithm was applied to detect the beads. Beads were then tracked using a Linear Assignment Problem (LAP) mathematical formulation, and each track was color-coded based on the velocity of the bead. Figure 5. 3D-printed microfluidic chip concept a) with b) micromixers and c) hydrodynamic flow-focusing components. (a, inset) Photo of hydrogel extruded using the fluidic chip system, exhibiting helical fibers within a fiber. The 3D-printed microfluidic allows in situ hydrogel mixing and flow focusing, resulting in high-resolution core-shell fibers that mimic blood vessels h). Evolution of core and shell widths i) and their ratio j) with varying core flow rate, and corresponding images indicating formation of straight, wavy and helical core layers d-g). k) Helical pitch distance was dependent on the shell-to-core ratio. Figure 6. a) Schematic diagram indicating the impact of shear force and velocity during 3D bioprinting, which directly influence cell viability. b) V-zigzag microfluidic devices with channel widths of 100, 200, and 400 μm were prepared to deliver 10 wt.% GelMA containing SaOS2 cells. c) Confocal images from the LIVE/DEAD cell viability assay for GelMA at various channel widths after 3 and 24 h of extrusion. d) Same as (c) but for varying flow rates. Live and dead cells are labelled green and red, respectively; the scale bar is 50 μm for all confocal images. Average viability, quantified as the percentage of live cells over all cells, is plotted for the hydrogels extruded at various e) channel widths and f) flow rates, respectively, and compared against the threshold value of ISO 10993. Error bars are standard deviations (* p < 0.05). Table 2. Printing parameters used to produce ABS specimens with the Creality Ender system.
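Complementing the Cell Viability and Statistical Analysis paragraphs above, the following minimal Python sketch shows how per-image viability percentages and a one-way ANOVA across groups could be computed. The live/dead counts and group names are hypothetical placeholders, and scipy is used here in place of the Excel Analysis ToolPak mentioned in the text.

```python
# Minimal sketch (hypothetical counts): viability = live / (live + dead) * 100,
# followed by one-way ANOVA across extrusion groups and a pairwise t-test
# (scipy stands in for the Excel Analysis ToolPak named in the Methods).
import numpy as np
from scipy import stats

def viability_percent(live_counts, dead_counts):
    """Percent viability per replicate image: live / total * 100."""
    live = np.asarray(live_counts, dtype=float)
    dead = np.asarray(dead_counts, dtype=float)
    return 100.0 * live / (live + dead)

# Hypothetical n = 3 replicate counts per group (cast control vs. extruded channel widths).
groups = {
    "cast_control": viability_percent([820, 790, 805], [30, 35, 28]),
    "width_100um":  viability_percent([700, 680, 715], [160, 150, 170]),
    "width_400um":  viability_percent([790, 770, 800], [60, 55, 62]),
}

for name, v in groups.items():
    print(f"{name}: {v.mean():.1f} ± {v.std(ddof=1):.1f} % viable")

# One-way ANOVA across all groups, then a t-test between two of them at p < 0.05.
f_stat, p_anova = stats.f_oneway(*groups.values())
t_stat, p_t = stats.ttest_ind(groups["cast_control"], groups["width_100um"])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}; control vs 100 um t-test p = {p_t:.4f}")
```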
10,585.4
2023-04-25T00:00:00.000
[ "Engineering" ]
Comprehensive Characterization of 10,571 Mouse Large Intergenic Noncoding RNAs from Whole Transcriptome Sequencing Large intergenic noncoding RNAs (lincRNAs) have been recognized in recent years to constitute a significant portion of the mammalian transcriptome, yet their biological functions remain largely elusive. This is partly due to an incomplete annotation of tissue-specific lincRNAs in essential model organisms, particularly in mice, which has hindered the genetic annotation and functional characterization of these novel transcripts. In this report, we performed ab initio assembly of 1.9 billion tissue-specific RNA-sequencing reads across six tissue types, and identified 3,965 novel expressed lincRNAs in mice. Combining these with 6,606 documented lincRNAs, we established a comprehensive catalog of 10,571 transcribed lincRNAs. We then systematically analyzed all mouse lincRNAs to reveal that some of them are evolutionarily conserved and that they exhibit striking tissue-specific expression patterns. We also discovered that mouse lincRNAs carry unique genomic signatures, and that their expression level is correlated with that of neighboring protein-coding transcripts. Finally, we predicted that a large portion of tissue-specific lincRNAs are functionally associated with essential biological processes including the cell cycle and cell development, and that they could play a key role in regulating tissue development and functionality. Our analyses provide a framework for continued discovery and annotation of tissue-specific lincRNAs in model organisms, and our transcribed mouse lincRNA catalog will serve as a roadmap for functional analyses of lincRNAs in genetic mouse models. Introduction Noncoding RNAs (ncRNAs) are transcripts that do not encode proteins or peptides, yet which play a variety of structural or regulatory roles in biological processes. Several major classes of ncRNAs, including ribosomal RNAs, small nucleolar RNAs and microRNAs (miRNAs), have been extensively characterized and their functions have been well established [1]. For example, miRNAs have been recognized as key regulators through which cells fine-tune their proteomes and they have been implicated in nearly every important signaling and metabolic pathway. Altered miRNA profiles are linked to a number of pathological conditions, while multiple miRNAs are currently being evaluated as potential therapeutic agents for disease [1,2]. In recent years, significant advances in sequencing technology have expanded the RNA world even further [3]. A group of noncoding RNAs, large intergenic noncoding RNAs (lincRNAs), have emerged as a major uncharacterized territory of the mammalian transcriptome [4,5]. These transcripts are larger than 200 bases and they are transcribed from intergenic regions. A few ubiquitous features of lincRNAs have been uncovered in efforts devoted to cataloguing and annotating lincRNAs in the human genome, and a limited number of lincRNAs have been studied in depth in order to identify their functions [6,7,8,9]. However, since these novel transcripts comprise over half of the transcriptional units (TUs) in mammalian genomes [10] and their expression is often dynamically regulated, the current annotations of lincRNAs are far from complete, thus limiting the extent of bioinformatics analyses that can be performed, and hindering the establishment of a unified model of their regulation and mechanisms of action.
For example, several computational methods have been developed to reconstruct the lincRNA transcriptome [11,12], yet most of them have only been applied to a limited number of species and often only to humans. In light of the reported lower evolutionary conservation of lincRNAs, the efficiency of these methods must be validated in other species to create a universal approach for lincRNA assembly, which could significantly accelerate lincRNA discovery while at the same time allowing in-depth comparative analyses of noncoding transcripts between species. For example, though mice represent the most widely utilized model organism for genetic elucidation of genes implicated in human pathologies, a comprehensive catalog of tissue-specific lincRNAs in mice is still lacking, and an efficient lincRNA assembly pipeline has yet to be established. In this study, we carried out ab initio assembly of the mouse lincRNA transcriptome across multiple tissues and we identified 3,965 novel lincRNA genes that have no overlap with currently known coding and noncoding transcripts. In combination with all known lincRNAs, we established an inclusive catalog of mouse lincRNAs. We also systematically analyzed all lincRNAs in our collection to map their key global features and to analyze their evolutionary conservation. Finally, we used a 'two-color' co-expression network method to assign functionalities to lincRNA groups and to determine how their expression correlates with that of neighboring coding genes. Since nearly one third of disease-associated SNPs are located in noncoding regions, our work not only establishes a roadmap for genetic analysis of lincRNAs in mice but it also provides a unique tool for scientists who perform disease modeling in this important model organism. Transcriptome reconstruction of the mouse tissues The RNA-seq data used in this study were downloaded from the Wellcome Trust Sanger Institute. To prepare the sequencing data, RNAs were extracted from six biological replicates of six different mouse tissues including heart, hippocampus, liver, lung, spleen, and thymus and they were sequenced on an Illumina Solexa platform [13]. These reads were paired and both lengths were 76 nt. Starting from a total of 1.9 billion reads, we performed short-read gapped alignment using TopHat [14] and recovered 1.4 billion (75%) mapped reads (see more details in Table S1). We then used the ab initio assembly software Cufflinks [12] and Scripture [11] to reconstruct the transcriptome for each tissue based on the read-mapping results. Transcripts reconstructed by these two assemblers were separately merged into combined sets of transcripts using the Cuffcompare utility provided by Cufflinks. After filtering for exon number, transcript length and coverage, we obtained nearly 2,400,000 reliably expressed multi-exon transcripts longer than 200 nt for each sample. We compared these transcripts to major genomic databases (Table S2) and classified the combined transcripts into several different subsets; the majority of the transcripts (97.8%) correspond to annotated protein-coding genes, and a small portion of the transcripts are known noncoding genes (0.6%) and pseudogenes (0.3%). We also found that 1.3% of the transcripts had no overlap with annotated transcripts and were designated as unannotated (Figure 1). To assess the robustness of these ab initio assemblers, we analyzed their performance on protein-coding and well-characterized noncoding genes.
The annotated transcripts we reconstructed using Cufflinks cover 71% of RefSeq coding genes [15] fully or partially, and Scripture could assemble 68% of all RefSeq coding transcripts. In combination, the two assemblers fully or partially reconstructed 72% of RefSeq coding genes, which is similar to previously reported efficacies of these tools [16]. To evaluate the assemblers' performance on noncoding RNAs, we compared the ~14,000 known noncoding transcripts to a comprehensive lincRNA database. Because none of the currently available databases has a collection of all known noncoding RNAs, we built an inclusive database called NONCODE [17] by combining annotated mouse noncoding transcripts from RefSeq [15], UCSC [18] and Ensembl [19] as well as mouse lincRNAs reconstructed by Guttman et al. [11]. In all, there were 1,197 annotated ncRNAs in the RefSeq, UCSC and Ensembl databases that could be fully or partially reconstructed, corresponding to 9,630 of our mouse tissue transcripts, and 1,577 transcripts in our datasets matched 251 mouse lincRNAs in Guttman's novel lincRNA dataset. Furthermore, we evaluated the performance of these ab initio assemblers on FANTOM noncoding genes. Considering that there is a high percentage (~30%) of single-exon transcripts in the FANTOM v3 database [10], and that our combined sets of multi-exon transcripts had been filtered by exon number and transcript length, we only used the original unfiltered transcripts reconstructed by Cufflinks and Scripture to perform the assessment. Comparing FANTOM noncoding genes with our unfiltered transcripts revealed that 10,674 of the FANTOM transcripts could be fully or partially reconstructed. These results strongly suggest that these assembly approaches can robustly and reliably reconstruct both coding and noncoding transcriptomes at a global level. Identification of novel mouse lincRNAs Based on the robust transcript reconstruction and broad availability of deep sequencing datasets, we developed a novel lincRNA detection pipeline to identify novel lincRNAs that exhibit tissue-specific expression in mice (Materials and Methods, Figure 2A). We first analyzed the coding potential of unannotated transcripts using CPC [20] and our in-house CNCI software, filtering out 30% of all transcripts. Next, we focused only on intergenic transcripts, which yielded 3,965 novel mouse lincRNA loci (6,764 transcripts) (Table S3). These transcripts had an average mature spliced size of 1.5 kb. Each transcript contained on average 2.5 exons with a mean length of 620 nt. In the novel lincRNA dataset, about 48% were reconstructed by Cufflinks, 31% by Scripture, and 21% by both. These ratios are clearly lower than those of protein-coding genes for both programs, with which about 61% of genes can be reconstructed. This discrepancy might be caused by the different algorithms implemented by each assembler to reconstruct low-abundance transcripts, and similar observations have been reported in previous attempts to assemble low-expression transcripts using these programs [16,21]. Since there were six biological replicates for each mouse tissue, we checked the recurrence of each individual novel lincRNA in our reconstruction to enhance our analyses. If a lincRNA transcript could be fully or partially reconstructed by Cufflinks or Scripture in one biological replicate of any tissue, we counted this as a recurrence.
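To make the recurrence rule above concrete, here is a minimal Python sketch of how recurrence could be tallied from per-replicate assembly results; the data structure (a mapping from tissue and replicate to the set of reconstructed transcript IDs) and the transcript names are hypothetical illustrations, not the pipeline's actual data.

```python
# Minimal sketch (hypothetical data): a transcript "recurs" in a replicate if it was
# fully or partially reconstructed by Cufflinks or Scripture in that replicate.
# We report, per transcript, the maximum number of recurrent replicates in any tissue.
from collections import defaultdict

# assemblies[tissue][replicate] = set of reconstructed transcript IDs (both assemblers pooled)
assemblies = {
    "heart": {1: {"linc_0001", "linc_0002"}, 2: {"linc_0001"}, 3: {"linc_0001"},
              4: {"linc_0001"}, 5: {"linc_0001", "linc_0002"}, 6: {"linc_0001"}},
    "liver": {r: {"linc_0002"} for r in range(1, 7)},
}

def recurrence(assemblies):
    """For each transcript, the max number of replicates (within one tissue) that recovered it."""
    best = defaultdict(int)
    for tissue, reps in assemblies.items():
        counts = defaultdict(int)
        for rep, transcripts in reps.items():
            for t in transcripts:
                counts[t] += 1
        for t, c in counts.items():
            best[t] = max(best[t], c)
    return dict(best)

print(recurrence(assemblies))  # e.g. {'linc_0001': 6, 'linc_0002': 6}
```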
This recurrence test showed that about 40% of the 6,764 novel mouse transcripts could be reconstructed in all six biological replicates from at least one tissue, 20% of them in five biological replicates, and only ~7% of transcripts recurred just once (Figure 2B). These results demonstrated that the data generated by the two ab initio assemblers are highly reproducible, suggesting that reducing the number of replicates would have little effect on the reliability of the results. Characterization of tissue specific lincRNAs In combination with all known lincRNAs, we established a comprehensive catalog of 10,571 transcribed lincRNA genes (15,061 transcripts shown in Dataset S1). Based on the FPKM (Fragments Per Kilobase of transcript per Million mapped reads) of each transcript calculated by Cufflinks' "abundance estimation mode" across the six tissues, we compared the expression differences between lincRNAs and RefSeq protein-coding genes. The average expression level of lincRNAs was lower than that of protein-coding genes, but lincRNAs also showed a wider range of abundance, with a subset of them equally abundant to mRNAs (Figure 3A). This pattern is consistent with previous studies [11,22]. To assess the tissue specificity of mouse lincRNA expression, we calculated a Jensen-Shannon tissue specificity score (JS score) for each transcript using an entropy-based metric that relies on JS divergence [16]. Our analysis showed that the distributions of JS scores for lincRNAs and protein-coding genes are significantly different (P value of the Kolmogorov-Smirnov test < 10−10, Figure S1). Using a JS score of 0.5 as a cutoff, we found that nearly half of lincRNAs (49%) are tissue-specific, compared with only 23% of protein-coding genes (Figure 3B-D and Datasets S2 and S3). Thus, mouse lincRNA expression is clearly subject to tissue-dependent regulation either at the level of transcription or degradation. Genes actively transcribed by RNA polymerase II often display trimethylation (H3K4me3) or monomethylation (H3K4me1) of lysine 4 on histone H3 surrounding their promoter regions, and these active histone marks have been utilized to uncover lincRNAs from genomic regions that harbor no protein-coding genes. We investigated the chromatin states of lincRNAs in heart, liver, thymus and spleen to reveal that tissue-specific lincRNAs have highly enriched active histone marks surrounding their transcriptional start sites (TSS) compared to the rest of the lincRNA pool (Figure 4). Therefore at least some of the tissue specificity of lincRNAs can be explained by enhanced transcription, and tissue-dependent histone modifications in the promoter may be used to predict the expression profiles of lincRNAs across different tissues.
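The tissue-specificity metric referred to above is based on Jensen-Shannon divergence between a transcript's normalized expression profile and an idealized profile expressed in a single tissue. The following Python sketch is one plausible implementation of such a JS-based specificity score; the expression values and the exact normalization are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (illustrative values): JS specificity score of a transcript, computed as
# 1 - sqrt(JS divergence) between its normalized expression profile and the "perfectly
# specific" profile of each tissue; the score is the maximum over tissues (close to 1
# means highly tissue-specific).
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (base-2 entropy)."""
    def entropy(x):
        x = x[x > 0]
        return -np.sum(x * np.log2(x))
    m = 0.5 * (p + q)
    return entropy(m) - 0.5 * (entropy(p) + entropy(q))

def js_specificity(fpkm):
    """Max over tissues of 1 - sqrt(JS(profile, delta_tissue))."""
    p = np.asarray(fpkm, dtype=float)
    p = p / p.sum()
    scores = []
    for i in range(len(p)):
        q = np.zeros_like(p)
        q[i] = 1.0
        scores.append(1.0 - np.sqrt(js_divergence(p, q)))
    return max(scores)

# Hypothetical FPKMs across six tissues (heart, hippocampus, liver, lung, spleen, thymus)
print(round(js_specificity([0.1, 0.1, 12.0, 0.2, 0.1, 0.1]), 2))  # liver-specific -> high score
print(round(js_specificity([2.0, 2.1, 1.9, 2.0, 2.2, 1.8]), 2))   # ubiquitous -> low score
```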
This analysis identified 1,477 lincRNAs syntenically paired with an orthologous transcript from TransMap (Dataset S4, Figure S2), accounting for 10% of all mouse lincRNAs. Trans-mapped lincRNAs also exhibit stronger tissue specificity and lower expression levels relative to other lincRNAs. This moderate homology suggests that lincRNAs might be less conserved than their protein-coding counterparts, although a quantitative assessment will require thorough analyses of datasets with higher sequencing depth across multiple species. Functional prediction and neighborhood correlation of mouse lincRNAs based on the co-expression network The comprehensive lincRNA catalog we constructed allows us to perform in-depth bioinformatics characterization of these novel transcripts. Here, we built a 'two-color' co-expression network to infer putative lincRNA functions, using a method based on one we previously reported [23,24]. In brief, FPKMs of lincRNAs and protein-coding genes were calculated across the six tissues by the Cufflinks quantification module at the individual gene level. To determine functional characteristics of lincRNAs, all FPKMs were further analyzed using a co-expression module sub-network method (Markov cluster algorithm, MCL) [23]. MCL is an efficient and powerful algorithm which identifies modules based on the simulation of random walks in a network. With default parameters (inflation value = 1.8), the MCL algorithm found 51 functionally enriched modules with six or more genes, 32 of which consisted of both coding and lincRNA genes. Since each of these modules was significantly enriched for at least one GO BP term or KEGG pathway, we were able to functionally annotate 878 mouse lincRNAs based on the enriched GO terms associated with their modules (Datasets S5, S6). Our results indicated that a large portion of tissue-specific lincRNAs are potentially associated with critical developmental and metabolic processes including the cell cycle and cell development, and that they might be essential in maintaining each tissue's identity and functionality. Furthermore, recent studies suggest that some lincRNAs may act in cis and regulate gene expression within their chromosomal neighborhood [6,25], although trans actions of lincRNAs in embryonic stem cells have also been clearly documented [26]. Our comprehensive catalog of mouse lincRNAs presents a unique opportunity to further explore this possibility. One expectation of the cis hypothesis is that the expression of lincRNAs and their neighboring genes would be correlated across our samples. Consistent with previous studies [6,27,28], lincRNAs exhibit stronger positive expression correlations with their protein-coding gene neighbors than coding genes do with their coding gene neighbors (Figure S3). To further determine whether lincRNA and protein-coding gene neighbors are co-regulated in the same functional context as strictly coding gene neighbors, we focused on the 878 functionally annotated lincRNAs and their coding neighbors as described above, calculating their expression correlation coefficients and comparing the GO terms associated with each. These results showed that 44% (388/878) of neighboring lincRNA-coding gene pairs have a correlation coefficient of at least 0.8, and 42% (164/388) of these are involved in the same biological process, significantly more than expected by chance (P < 10−10; among 10,000 randomly chosen gene pairs only 21% shared the same GO annotations) (Figure S4, Dataset S7).
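As a small illustration of the neighborhood-correlation analysis described above, the sketch below pairs a lincRNA with its nearest protein-coding neighbor and computes the Pearson correlation of their expression across tissues; the coordinates and FPKM values are invented placeholders, and the nearest-TSS pairing rule is a simplified stand-in for the authors' procedure.

```python
# Minimal sketch (invented data): correlate each lincRNA's expression across tissues
# with that of its nearest protein-coding neighbor on the same chromosome.
import numpy as np

# (name, chromosome, TSS position, FPKM across six tissues)
lincRNAs = [("linc_A", "chr1", 1_200_000, [0.2, 0.1, 5.0, 0.3, 0.2, 0.1])]
coding   = [("GeneX",  "chr1", 1_150_000, [1.0, 0.8, 20.0, 1.2, 0.9, 0.7]),
            ("GeneY",  "chr1", 9_000_000, [4.0, 4.2, 3.9, 4.1, 4.0, 4.3])]

def nearest_neighbor(linc, coding_genes):
    """Nearest coding gene on the same chromosome by TSS distance."""
    same_chrom = [g for g in coding_genes if g[1] == linc[1]]
    return min(same_chrom, key=lambda g: abs(g[2] - linc[2]))

for linc in lincRNAs:
    neighbor = nearest_neighbor(linc, coding)
    r = np.corrcoef(linc[3], neighbor[3])[0, 1]
    print(f"{linc[0]} vs {neighbor[0]}: Pearson r = {r:.2f}")
```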
These results suggest that a portion of lincRNAs might act locally to regulate their neighboring genes in cis. Rigorous bioinformatics analyses of large datasets as well as experimental testing will be required before this mechanism can be generalized to the majority of lincRNAs. Discussion In this report, we presented the first comprehensive annotation of mouse lincRNAs based on whole transcriptome sequencing of multiple tissues and we provided in-depth analyses of these novel transcripts that lay the groundwork for further characterization of their pathophysiological consequences. We first reconstructed tissue-specific mouse transcriptomes from deep sequencing data to reveal a significant number of novel lincRNAs. The effectiveness of this approach is supported by the successful assembly of known protein-coding genes and lincRNAs and by the confirmed recurrence of novel lincRNAs in the majority of the six biological replicates for each tissue. We then calculated a tissue specificity score based on the FPKM of each transcript and demonstrated that mouse lincRNAs are expressed in a much more tissue-specific manner than protein-coding genes. The tissue specificity of lincRNAs is also reflected in the histone marks surrounding their transcriptional start sites (TSS), suggesting that lincRNAs share similar transcriptional signatures with protein-coding genes. Furthermore, we analyzed the conservation of lincRNAs across vertebrate species and revealed that lincRNAs have been under weaker selective constraints than protein-coding genes across mammalian and vertebrate ancestral genomes, which is consistent with previous reports based on other lncRNA catalogs [4,27,29]. Finally, utilizing a module-based algorithm, we were able to predict putative functions for at least 878 lincRNAs, and we presented evidence supporting the hypothesis that lincRNAs might act in cis to affect expression in their chromosomal neighborhood. Our work significantly complements the recent ENCODE publications [30]. The ENCODE project, which is an international effort to identify all regions of transcription, transcription factor association, chromatin structure, and histone modification in the human genome, has recently published 30 papers including a few that extensively characterize lincRNAs [27,31,32,33]. However, the ENCODE papers focus mainly on human samples, which carry high degrees of genetic diversity and which often have very limited "true" biological replicates. On the other hand, the sequencing data used in our study were produced from mice of identical breeding with little genetic variance, as documented by the similarity of the six biological replicates provided. In addition, the strains of the two founder mice used in this study have been widely used to model human metabolic diseases, particularly obesity, diabetes, and cardiovascular disorders. A complete annotation of lincRNAs in this strain allows a unique opportunity for comparative analyses between humans and mice and it also provides an informatics resource to further validate the relevance of genomic variance in disease pathogenesis, which is sought by the ENCODE project. Our work also provides a framework for identifying and characterizing lincRNAs in other model organisms (Figure S5).
Detailed annotations of the genomes and transcriptomes of model organisms have proved to be instrumental in advancing almost all research areas of biology, and the elucidation of lincRNA expression in model organisms will likely generate exciting new insights into how they function. Our lincRNA discovery pipeline can be easily adapted to study other model organisms and could help to annotate lincRNAs in these essential research subjects. Most importantly, our work establishes a roadmap for scientists to study the physiological function of lincRNAs and to eventually pinpoint their pathological role in human disease. A number of human SNPs associated with disease have been mapped to lincRNA loci [34,35], yet their causal relations with these pathological conditions have not been established. For mutations in coding genes, generating and characterizing a genetic mouse model is often the first step in establishing causality, but no mouse models with targeted knockouts of disease-associated lincRNAs have been widely adopted for study, partly due to the incomplete annotation of mouse lincRNAs. Our work could fill this critical gap and is, in fact, already in practice in our current collaboration that aims to dissect the function of lincRNAs in physiology and disease in experimental mice. RNA-seq data set All RNA-seq data of mouse tissues used in this study were obtained from the Mouse Genomes Project at the Wellcome Trust Sanger Institute and can be directly downloaded from their website (accession number: ERP000591). The polyadenylated RNA-seq data utilized in this study were generated from six biological replicates of six mouse tissues including heart, hippocampus, liver, lung, spleen and thymus (the mouse strain used is a cross of C57BL/6J and DBA/2J). Each tissue yielded 54 million reads per sample on average, and the reads were paired and both lengths were 76 bp. RNA-seq reads mapping We used the spliced read aligner TopHat version V1.31 to map all sequencing reads to the mouse genome (mm9) [14]. Two rounds of TopHat mapping were used to maximize the splice junction information derived from all tissues. In the first round, all reads were mapped with TopHat using the following parameters: min-anchor = 5, min-isoform-fraction = 0, and the rest set as default; in the second round of TopHat mapping, all splice junctions produced by the initial mapping were collected and fed into TopHat to re-map each sample with the following parameters: raw-juncs, no-novel-juncs, min-anchor = 5 and min-isoform-fraction = 0. Biological replicates of mapped reads from the same tissue were merged into a single BAM file to facilitate transcript assembly and quantification. Transcriptome assembly Aligned reads from TopHat were assembled into a transcriptome for each tissue separately by both Scripture [11] and Cufflinks [12]. Both assemblers use spliced read information to determine exon connectivity, but with different approaches. Cufflinks uses a probabilistic model to simultaneously assemble and quantify the expression level of a minimal set of isoforms, providing a maximum likelihood explanation of the expression data in a given locus. Scripture uses a statistical segmentation model to distinguish expressed loci from experimental noise and uses spliced reads to assemble expressed segments. It reports all statistically significantly expressed isoforms in a given locus. The two approaches might generate different results in terms of assembled transcripts and numbers of products.
Cufflinks version V1.0.3 was run with default parameters (and 'min-frags-per-transfrag = 0'), and Scripture version 1.0 was run with default parameters except for the omission of paired-end information, to avoid conflicts that could occur when running the Cufflinks abundance estimation mode in later steps. Novel lincRNAs detection pipeline As expected from a mouse tissue cohort, individual transcript assemblies may contain noise from multiple sources such as artifacts generated by the sequence alignment, unspliced intronic pre-mRNA or genomic DNA contamination. To enhance the reliability of constructing expressed lincRNAs from mouse tissues, we developed an analysis pipeline to minimize noise and maximize recovery of "true hits" by implementing the following five steps: (1) Recalculate the FPKM (fragments per kilobase of exons per million fragments mapped) and read coverage of each transcript across the six tissues, separating reliably expressed transcripts from background noise on the basis of FPKM using a trained decision tree; (2) Compare the combined transcripts against well-established databases of mouse coding genes (RefSeq [15], UCSC [18], Ensembl [19], Vega [36]), noncoding genes (NONCODE [17]) and an independent pseudogene database [37], eliminating transcripts that have at least one exon overlapping with any of them; (3) Calculate the coding potential of each transcript using CPC (coding potential calculator) [20] and the in-house CNCI (Coding Noncoding Index) software to recover the transcripts that can be categorized as noncoding (CNCI is a signature-based tool that profiles adjoining nucleotide triplets to effectively distinguish protein-coding and noncoding sequences independent of known annotations; the CNCI software is available at http://www.bioinfo.org/software/cnci); (4) Select transcripts that have more than one exon and are longer than 200 bases; (5) Select the remaining transcripts that are located in intergenic regions, at least 1 kb from any known protein-coding gene. Tissue specificity score and histone modification data To evaluate the tissue specificity of a transcript, we devised an entropy-based method to quantify the similarity between a transcript's expression pattern and a predefined pattern that represents an extreme case in which a transcript is expressed in only one tissue [38]. All histone modification data were from mouse ENCODE data and were downloaded from the UCSC Browser (http://hgdownload.cse.ucsc.edu/goldenPath/mm9/encodeDCC/wgEncodeLicrHistone/). Conservation analyses of mouse lincRNAs We used the liftOver tool (http://genome.ucsc.edu/cgi-bin/hgLiftOver) to identify the orthologous locations of mouse lincRNAs in the human genome and used the TransMap tools, which implement syntenic BLAST-Z alignments, to map all mouse lincRNAs to known transcripts across the vertebrate lineage. Figure S1. The distribution of JS scores between lincRNAs (black line) and protein-coding genes (red line).
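To summarize the five-step filtering logic of the novel lincRNA detection pipeline described above, here is a compact Python sketch; the thresholds, field names, and toy transcript records are illustrative assumptions, and the real pipeline relies on external tools (Cuffcompare, CPC, CNCI) that are only mimicked here by pre-computed attributes.

```python
# Minimal sketch (illustrative thresholds/fields): the five filtering steps of the
# lincRNA detection pipeline, applied to toy transcript records. Real coding-potential
# calls would come from CPC/CNCI and overlaps from the database comparisons.
candidates = [
    {"id": "t1", "expressed": True, "overlaps_known": False, "coding": False,
     "exons": 3, "length": 1500, "intergenic": True, "dist_to_coding": 25_000},
    {"id": "t2", "expressed": True, "overlaps_known": True, "coding": False,
     "exons": 2, "length": 900, "intergenic": True, "dist_to_coding": 5_000},
    {"id": "t3", "expressed": True, "overlaps_known": False, "coding": False,
     "exons": 1, "length": 400, "intergenic": True, "dist_to_coding": 80_000},
]

def is_novel_lincRNA(t):
    return (t["expressed"]                                    # (1) reliably expressed (FPKM/coverage filter)
            and not t["overlaps_known"]                       # (2) no exon overlap with annotated genes
            and not t["coding"]                               # (3) classified as noncoding by CPC/CNCI
            and t["exons"] > 1 and t["length"] > 200          # (4) multi-exon, longer than 200 nt
            and t["intergenic"] and t["dist_to_coding"] >= 1_000)  # (5) intergenic, >= 1 kb from coding genes

print([t["id"] for t in candidates if is_novel_lincRNA(t)])  # -> ['t1']
```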
5,499.2
2013-08-12T00:00:00.000
[ "Biology" ]
Methylmercury determination in freshwater biota and sediments: Static headspace GC-MS compared to direct mercury analyzer We developed and compared two analytical methods for the determination of MeHg in freshwater biota and sediments: I) simplified static headspace GC-MS using internal standard (IS) isotope dilution quantification, after microwave acid digestion and aqueous phase NaBEt4 ethylation; II) Automated Mercury Analyzer, after double toluene extraction followed by back-extraction with L-cysteine. The performance was evaluated by analysis of certified reference materials. For biota, mean recovery was 100 ± 2% and relative standard deviation (RSD) ≤ 6.8% for method I, and mean recovery was 98 ± 7% and RSD ≤ 13% for method II. For sediments, a recovery of 94.5% and RSD of 8.8% were obtained with method I, and a recovery of 90.3% and RSD of 9.4% with method II. Limits of detection (LOD) were 0.7 µg kg−1 and 6 µg kg−1, respectively. Both techniques were tested for MeHg analysis in freshwater invertebrates, fish and sediments, covering a large range of MeHg values (1.9–670 µg kg−1 d.w.). • Both protocols proved to be suitable for MeHg analysis in complex environmental matrices, even if, for method II, interferences in the extraction phase and limited sensitivity may hinder sediment analysis. • Passing-Bablok regression revealed a slight disproportion between methods, with a line slope of 1.058 (95% CI ranging from 1.001 to 1.090). Specifications table: Subject Area: Environmental science; More specific subject area: Methylmercury analysis; Method name: Methylmercury in freshwater biota and sediments by GC-MS or automatic mercury analyzer; Name and reference of original methods: determination of MeHg in biological tissues with GC-MS using isotope dilution [1]; determination of MeHg in biological tissues and sediments with direct mercury analysis [2,3]; Resource availability: a calculation sheet is provided to support quantification. Background We developed and compared two methods for the analysis of methylmercury (MeHg) in freshwater biota and sediment samples, using two different analytical systems: I) static headspace and gas chromatography with single quadrupole mass spectrometry detection, and II) Automated Mercury Analyzer after double liquid-liquid extraction. The GC-MS method was derived from the original method by Cavalheiro et al. [1] with some procedural modifications. Preliminary tests were carried out using tetramethylammonium hydroxide (TMAH) as an extractant, but it was discarded mainly because of the formation of emulsions. Thus, HCl was preferred for sample digestion, also because the literature considers it more suitable for sediments [4,5]. The derivatization with sodium tetraethylborate (NaBEt4) before static headspace sampling was performed directly in aqueous solution instead of in the solvent (n-hexane) after liquid-liquid extraction, to limit the extraction of matrix interferences and thus to decrease the LOD. To optimize the aqueous ethylation, the pH was restored to 5.5 [6] before the addition of the derivatizing agent. As for quantification, isotope dilution was used because it was shown to improve the repeatability of results compared with other quantification methods [7]. For method II, we used the protocol by Calderón et al. [3]. This method aims at the determination of the sum of all organic forms of mercury, after solvent extraction.
As regards biota, MeHg is largely the dominant form of organomercury, thus the results can be considered as a measure of MeHg, with good approximation [8,9]. As regards sediments, other organic forms of Hg, such as EtHg and PhHg, may be present, generally deriving from anthropogenic sources [8], although at low concentrations in comparison with MeHg [10]. Thus, the application of this technique to sediments may account for the sum of all organomercury compounds. However, previous studies referred to this technique as analysis of MeHg in sediments [2]. This technique has been used for analysis of marine sediments and biota [2,11-13], but to our knowledge, testing on freshwater sediments and organisms has not been reported in the literature. The direct mercury analyzer was used here also for the determination of total mercury (THg) in samples according to US-EPA [14]. Sample collection and pretreatment Both analytical methods were assessed using the following certified reference materials: SRM-2974a Mytilus edulis tissue (National Institute of Standards and Technologies - NIST, USA), SRM-1946 Lake Superior fish tissue (NIST, USA), BCR-CRM414 plankton powder (Institute for Reference Materials and Measurements - IRMM, European Commission, Belgium) and ERM-CC580 estuarine sediment (IRMM, European Commission). The performance of both methods was tested by analyzing samples of biota and sediments collected in the Lake Maggiore basin (Northern Italy). The ecological quality of the lake and its main tributaries is constantly monitored by the International Commission for the Protection of the Italian-Swiss Waters (CIPAIS) (www.cipais.org) [15,16]. Fish and sediments collected in the western part of the lake (Pallanza Basin) and benthic invertebrates (insects, crustaceans) and sediments collected in the Toce River, a tributary of Lake Maggiore flowing into the Pallanza Basin, were analyzed. Both ecosystems are characterized by residual mercury contamination due to the past activity of a mercury cell chlor-alkali plant located along the Toce River [17][18][19]. Moreover, fish collected in another tributary of Lake Maggiore, the Boesio River, which flows through a heavily anthropized watershed [15,16], were also analyzed. For invertebrates, whole bodies were analyzed. For fish, the caudal fillet was separated and analyzed. All samples were freeze-dried before analysis. Biological tissues were homogenized with a ball mill. Sediments were sieved and the fraction < 63 μm mesh-size was analyzed. Method I: GC-MS GC-MS analysis was carried out using a Thermo Fisher GC-MS system (Focus GC and DSQ II single quadrupole, respectively) equipped with a TriPlus RSH autosampler (Thermo Fisher Scientific, Rodano, Milan, Italy) capable of static headspace sampling. Instrumental conditions are given in Table 1. Final areas used for calibration and quantification were corrected for cross contribution as follows: MeHg_corr = MeHg_obs − x · Me201Hg_obs, where MeHg_corr and Me201Hg_corr are the areas corrected for cross contribution (native and internal standard, respectively), MeHg_obs and Me201Hg_obs are the observed areas in the acquired chromatogram, and x is the cross-contribution factor of the isotopically enriched internal standard into the native MeHg signal. An appropriate IS amount was added to each sample considering the predicted natural MeHg concentration in the sample, based on the literature (i.e., percent MeHg of THg according to e.g. [11,20-22]), to avoid a MeHg/IS signal ratio that is too large or too small (Spreadsheet 1). All materials needed for the procedure (vessels, centrifuge tubes, etc.)
were soaked in a clean, dilute 10% (v/v) HNO3 bath for 24 h, then rinsed with ultrapure water and dried in a clean drying oven. A stock standard solution was prepared by dissolving methylmercury chloride salt in a solution of methanol and 18% hydrochloric acid (30/70% v/v). An internal standard solution was prepared by diluting 1 mL of a 201Hg-enriched monomethylmercury solution (5.49 ± 0.04 μg g−1, 96.5%, ISC Science, Oviedo, Asturias, Spain) in 99 mL of ultrapure water with 1% HCl. NaBEt4 1% was obtained by dissolving 100 mg of sodium tetraethylborate 97% in 10 mL of ultrapure water. Calibration standard solutions were prepared in the range 0.03-2 μg MeHg, and a six-point calibration curve was used for quantification. Procedure For analysis, 0.25-0.5 g d.w. of biological samples and 0.5-1 g d.w. of sediment samples were used. Samples were microwave digested at 70 °C for 3 min after adding the IS 201Hg solution and 3 mL of 1 M HCl. Digested samples were then centrifuged (10 min at 5000 rpm) or filtered (CA 0.4 μm), transferred to the autosampler vials, and mixed with 10 mL of buffer solution (sodium acetate/acetic acid 1 M) and 3 mL of 1 M KOH to reach a pH value of 5.5. For ethylation, 1 mL of NaBEt4 1% was added immediately before vial crimping. The samples were then incubated for 12 min at 90 °C, and 1.2 mL of the headspace was injected into the GC-MS. The procedure is summarized in Fig. 1. Spreadsheet 1 can be used for quantification. Method II direct mercury analyzer The method involves a double liquid-liquid extraction, first with toluene and then with a cysteine solution, and is based on the affinity of MeHg for the thiol group of cysteine. Detection was carried out using the automatic Hg analyzer AMA-254 (FKV srl, Bergamo, Italy). The instrument detection limit (LOD) is 0.01 ng Hg, and the working range is 0.05 to 600 ng Hg. Reagents, standards and calibration Sodium acetate anhydrous was obtained from BDH (Darmstadt, Germany), L-cysteine hydrochloride monohydrate, potassium dichromate and hydrochloric acid from Sigma-Aldrich (Darmstadt, Germany), sodium sulfate and toluene from Fluka (Darmstadt, Germany), and hydrobromic acid 48% from Panreac (Castellar del Vallès, Spain). All materials needed for the procedure were previously soaked in a clean, dilute 10% (v/v) HNO3 bath for 24 h, then rinsed with ultrapure water and dried in a clean drying oven. L-cysteine solution (1%) was obtained by mixing 12.5 g of sodium acetate anhydrous, 1 g of L-cysteine hydrochloride monohydrate and 0.8 g of sodium sulfate in 100 mL of ultrapure water. A stock solution of 10 mg Hg L−1 was prepared by dilution of a standard solution of mercury (1000 mg Hg L−1, Scharlab S.L., Barcelona, Spain) with 1% potassium dichromate and hydrochloric acid 1:1 v/v. From this stock solution a 500 μg Hg L−1 solution was prepared by dilution. Both solutions were stored in the dark at 4 °C. The calibration curve of the method was obtained between 1 and 75 μg Hg L−1 using calibration solutions prepared by diluting the 500 μg Hg L−1 stock solution with the L-cysteine solution (1%).
The concentration values obtained after analysis of the sample extract (expressed as μg Hg L−1) were converted into concentrations in the initial solid sample (expressed as μg MeHg kg−1) using the following formula (Eq. (4)): MeHg = (C × 6 × f) / w, where MeHg is the concentration of methylmercury in the sample (μg MeHg kg−1), C is the concentration of mercury in the extract (μg Hg L−1), 6 is the volume of L-cysteine solution (1%) in mL, w is the weight of the sample (g), and f is the ratio between the molecular weights of MeHg and Hg (1.075), used to convert the value expressed as Hg to MeHg (Spreadsheet 1). Procedure MeHg analysis was carried out following the protocol by Calderón et al. [3] (Fig. 2). Ten mL of hydrobromic acid was added to 0.15-0.2 g d.w. of the biological or sediment sample in a vial and manually shaken. Then 20 mL of toluene were added. The solution was mixed with a Vortex for 2 min and centrifuged for 10 min at 5000 rpm. Approximately 15 mL of the toluene phase (i.e., the supernatant) were recovered and transferred into a second vial, where 6 mL of L-cysteine solution were added. An additional volume of 15 mL of toluene was added to the remaining sample in the first vial, and the solution was shaken and centrifuged as described above. The toluene phase was recovered and added into the second vial. Finally, the second vial (i.e., the toluene phase in L-cysteine solution) was centrifuged and 500 μL of supernatant was analyzed using the AMA-254. The procedure is summarized in Fig. 2. Spreadsheet 1 can be used for quantification. Validation of the methods For the GC-MS method, procedural blanks were evaluated by performing the procedure without a sample every six to eight analyses. For method II, for each cycle of analysis the L-cysteine solution was analyzed in triplicate as a blank before the samples, and re-analyzed after 6-8 sample analyses to check for a memory effect (none was observed). For both methods, the method blanks showed concentrations below the estimated LOD. To calculate the limit of detection (LOD) and the limit of quantification (LOQ) of the methods, two different approaches were used. For method I, LOD and LOQ were estimated using signal-to-noise ratios of 3 and 5, respectively. Signal-to-noise ratios were calculated using sample chromatograms, where the baseline signal is uniform for the type of sample considered, confirming that the matrix effect does not affect the detection limits for headspace analysis. For method II, the LOD was calculated as the mean absorbance of the L-cysteine solution (1%) + 3 times the standard deviation, and the LOQ as the mean absorbance of the L-cysteine solution (1%) + 10 times the standard deviation. Using Eq. (4) and considering a sample weight of 0.2 g, values can be calculated as μg kg−1. For method I, the calculated LOD was 0.7 μg kg−1 and LOQ 1.4 μg kg−1 of MeHg (Table 2). For method II, the resulting LOD was 6 μg kg−1 and LOQ 11 μg kg−1 (Table 2). Repeatability was estimated as percent relative standard deviation (RSD), i.e., the percent ratio of the standard deviation to the mean, while accuracy (trueness) was estimated by analysis of certified reference materials and calculated as percent recovery (R), i.e., the percent ratio of the measured value to the certified value. To evaluate these parameters, biota and sediment certified reference materials were analyzed (Table 2). As regards biological materials, method I showed good agreement with certified values, with recovery ranging between 97.8 and 102.5% and a mean RSD of 5.4%.
For method II, recovery ranged between 90.8 and 103.4%, while the average RSD was 8.4%. As regards sediments, analysis of the certified material ERM-CC580 (certified value: 75 ± 4 μg MeHg kg−1) yielded an average of 70.9 ± 6.2 μg MeHg kg−1 for method I, with a mean recovery of 94.5% (RSD = 8.8%), and 67.7 ± 6.4 μg MeHg kg−1 for method II (the GC chromatogram showed no other organomercury forms in the sample), with a recovery of 90.3% (RSD = 9.4%). Table 2. Values of MeHg (μg kg−1) in biological and sediment certified reference materials (mean ± standard deviation) obtained with methods I and II: n = number of analyses, R = mean percent recovery compared with the certified value, RSD = percent relative standard deviation, * = MeHg value referred to wet weight (in the other cases to dry weight). Limits of detection (LOD) and of quantification (LOQ) for both methods are provided (see text for explanations). Testing of the methods on biological and sediment samples Both methods were evaluated for the analysis of biological samples collected in the field (Table 3). In particular, the performance was tested by analyzing organisms belonging to different taxonomic groups (i.e., with different matrices), with different levels of THg and, potentially, different MeHg:THg ratios [20,23,24], i.e., freshwater benthic invertebrates (insects, crustaceans) and fishes. Both methods successfully determined MeHg (the GC chromatogram showed no other organomercury forms in the samples), and repeatability (RSD) was ≤ 6.7% for method I and ≤ 13.2% for method II. For freshwater sediments, however, the latter method does not seem to be consistently effective, as in many cases the results may be < LOD. To overcome this limitation by increasing MeHg amounts in the extract, some extraction tests were carried out using increasing amounts of the sediment sample "Pallanza littoral" (26 μg kg−1 MeHg according to method I analysis, Table 2): 0.4, 0.6 and 0.8 g d.w. However, the values obtained remained below the LOD. Thus, the extraction is likely to be only partial, probably because HBr may not be strong enough to separate organomercury from strong ligands, such as organic matter. Maggi et al. [2] used HCl instead of HBr for the analysis of marine sediments. By using HCl, we observed the development of a strong effervescence reaction, probably due to the release of CO2 from carbonate dissolution, which probably affected the extraction phase. Again, values for this sample remained below the LOD. Method II also showed some other limitations. In particular, for some fish samples, the formation of an emulsion was observed after the addition of toluene, which limited the recovery capacity of the L-cysteine solution. This drawback was also highlighted by Watanabe et al. [27] and Maggi et al. [2].
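As a companion to the quantification and validation calculations above (Eq. (4), recovery R and repeatability RSD), the following Python sketch shows how AMA-254 extract readings could be converted to μg MeHg kg−1 and compared against a certified value; the replicate readings are invented placeholders, not measured data.

```python
# Minimal sketch (invented readings): method II quantification via Eq. (4),
# MeHg [ug/kg] = C [ug Hg/L] * V_cysteine [mL] * f / w [g], with f = 1.075,
# followed by recovery (R%) and repeatability (RSD%) against a certified value.
import statistics as st

F_MEHG_OVER_HG = 1.075   # molecular weight ratio MeHg/Hg
V_CYSTEINE_ML = 6.0      # volume of 1% L-cysteine solution used for back-extraction

def mehg_ug_per_kg(c_extract_ug_per_l, sample_weight_g):
    """Convert extract Hg concentration to MeHg concentration in the solid sample (Eq. 4)."""
    return c_extract_ug_per_l * V_CYSTEINE_ML * F_MEHG_OVER_HG / sample_weight_g

# Hypothetical triplicate extract readings (ug Hg/L) for 0.2 g of a certified sediment
readings = [2.05, 2.20, 1.98]
values = [mehg_ug_per_kg(c, 0.2) for c in readings]

certified = 75.0  # ug MeHg/kg (certified value)
mean_val = st.mean(values)
recovery = 100.0 * mean_val / certified
rsd = 100.0 * st.stdev(values) / mean_val
print(f"MeHg = {mean_val:.1f} ug/kg, R = {recovery:.1f}%, RSD = {rsd:.1f}%")
```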
To demonstrate the absence of a systematic error, the 95% confidence intervals must include 0 for the intercept and 1 for the slope. Then, Bland and Altman graph can be used to calculate the limits of agreement between methods, basing on the differences between methods [29] . Normality of differences was tested with D'Agostino-Pearson test. Analyses were carried out with MedCalc 19.3 (MedCalc Software Ltd, Ostend, Belgium). Spearman's correlation coefficient resulted 0.988 ( p < 0.001). The non-parametric Passing-Bablok regression showed that the confidence intervals of the intercept include the value 0, proving the absence of constant systematic errors, while the slope of the line is slightly higher than 1 (even if the lower limit of the confidence interval is 1.0010), thus indicating a slight disproportion between the two methods ( Fig. 3 ). Bland and Altman dispersion plot ( Fig. 4 ) shows that the values measured with AMA-254 were slightly lower compared to those determined with GC-MS method. Average percent bias is -7.36% (95% CI: -11.6 to -3.0%), and the agreement interval falls between -25.7 and 11.0% ( Fig. 4 b), i.e., from -39.8 to 19.1 μg kg −1 ( Fig. 4 a). The highest disagreement may be bound to the lowest concentrations ( < 200 μg kg −1 ), since percent difference between methods seems to become lower with increasing concentration ( Fig. 4 b). Table 3 Values of MeHg in biological and sediment samples obtained with methods I and II: n = number of analyses, measured values of MeHg as mean ± standard deviation, RSD = percent relative standard deviation, THg = total mercury ( n = 3), N.D. = not determined, i.e., the method failed determining MeHg in the sample (see text for explanations). Sample Sampling This slight disproportion is also confirmed by the recoveries obtained with analysis of the certified reference materials SRM-1946 and ERM-CC580, which were lower for method II in comparison to method I (-8.0 and -4.4%, respectively) ( Table 2 ). We did not deepen the reasons of this disproportion between methods. Possibly, the extraction of the organic complex methylmercury-cysteine from the extraction solution performed in method II may not be complete, despite the double extraction. Furthermore, liquid-liquid extraction efficiency could be limited by the generation of emulsions, especially in organic matrices [ 2 , 27 ]. Poor recovery may also lower LOD in method II. However, both approaches showed good precision according to recovery in certificate reference materials, and good accuracy according to RDS values, proving to be suitable for MeHg analysis in complex environmental matrices. As expected, method I showed higher sensitivity than method II and thus it is suitable for analysis of MeHg also in sediment samples. Method II can be used to extract all organic forms of mercury. However, given that MeHg in organisms generally represent the totality of organomercury compounds, method II can be used as rapid and cost-effective technique for MeHg analysis in aquatic organisms. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
4,652.2
2021-11-18T00:00:00.000
[ "Environmental Science", "Chemistry" ]