\section{Introduction}
Deep learning is a branch of machine learning that automatically extracts features from training data to perform well on many complicated tasks. Deep learning techniques produce better results when combined with other machine learning approaches such as neural networks; the combination of deep learning and neural networks is called a deep neural network (DNN). Lately, DNN algorithms have obtained state-of-the-art results in multiple ML-related domains such as natural language processing, voice recognition, and computer vision.
The most important factor that helps DNNs achieve such outstanding results is the availability of large amounts of labeled training data.
Based on the 2016 data science report \cite{crowd2016}, most data scientists spend 80\% of their time collecting, preparing, and managing data. Since collecting a large amount of labeled data and providing powerful hardware for training a DNN can be very expensive, most scientists prefer to employ a pre-trained model for their problems. On the other hand, organizations consider a trained DNN their Intellectual Property (IP) because of the cost of collecting a considerable amount of labeled data and providing powerful hardware for training.
Pre-trained models also allow users to adapt an existing network to their specific problem, a process called fine-tuning. Fine-tuning takes the best set of weights obtained for a trained DNN on a particular problem and uses them as initialization weights for a new model in the same domain. Fine-tuning is an effective technique to speed up the training phase of DNN models, and it also helps overcome the small-dataset problem, assisting scientists in building accurate, high-performance DNN models. However, an adversary can utilize the same fine-tuning approaches as a means of redistribution and copyright infringement. Thus, a trained DNN model needs to be protected as IP from illegal redistribution, reproduction, and derivation. Digital watermarking is one of the best solutions to protect a trained DNN model from copyright infringement.
Digital watermarking is a technique that embeds data, signals, or other information into digital media such as images, audio, and video \cite{van1994digital}. Digital watermarking can serve several different purposes, from merely hiding a piece of meaningful information without modifying the host of the watermark to embedding specific information that ensures the originality of a digital file by authenticating the embedded content.
Various watermarking algorithms use different approaches to distinguish between original and watermarked media. To avoid revealing the watermark's information to adversaries who have prior knowledge of the watermarking algorithm, these algorithms encrypt the content using encryption techniques such as block ciphers. Recently, many approaches have been published to watermark DNNs and protect them from abuse. In these methods, the owner of a trained DNN model watermarks the model by embedding specific data into the training dataset or modifying some parameters of the model.
To the best of our knowledge, all methods proposed for watermarking DNNs are focused on and evaluated with digital image classification tasks, because adding noise to a digital image within a dataset is very straightforward. In this research, we propose a framework to watermark a DNN model that is trained on textual data. The first stage of the proposed algorithm selects random samples from the training set and adds a certain amount of random noise to them. This set of samples is called the \textit{trigger set} and is considered the watermark. After generating the trigger set, the model is trained on a specific combination of this set and the original training data. At this point, the trained model is watermarked: it returns correct predictions for ordinary data while returning modified responses for the trigger-set data.
The rest of this paper is organized as follows. Section \ref{RelatedWork} summarizes the important work that has been proposed to protect DNN models and discusses related research on watermarking DNN models. In Section \ref{ProposedMethod}, the proposed method for watermarking a textual DNN model is described in detail. The experimental results are presented in Section \ref{ExperimentResults}. Section \ref{Conclusion} provides concluding remarks and discusses future work.
\section{Related Work} \label{RelatedWork}
In the literature, researchers embed a watermark into a DNN model in three phases: training, fine-tuning, and distillation \cite{adi2018turning}. Accordingly, the methods that utilize watermarking to protect a DNN fall into the following three categories:
\begin{itemize}
\item Watermarking the training data.
\item Watermarking neural network's parameters.
\item Watermarking trained model's output.
\end{itemize}
The following sections describe the workflow of the above categories by reviewing the state-of-the-art methods in each section.
\subsection{Watermarking the training data}
The first category of DNN watermarking covers methods that embed a signature into the training data. Zhang et al. \cite{zhang2018protecting} proposed three algorithms for generating digital watermarks as a fingerprint for ownership verification: 1) embedding meaningful content, for example inserting the string ``TEST'' into a car image and labeling it as an airplane; 2) selecting a set of samples from an unrelated task, for example using a handwritten-digit image in a food recognition task; and 3) adding crafted, random, and meaningless noise to some samples. Their method embeds these watermarks into a target DNN during the training process and makes the model memorize the watermarks' patterns. Thus, once a DNN is stolen, the owner can easily verify ownership by sending the watermarks as inputs and checking the service output. Their watermarking method was evaluated on two well-known image datasets, MNIST \cite{lecun1998gradient} and CIFAR10 \cite{krizhevsky2009learning}.
Adi et al. \cite{adi2018turning} proposed a simple yet effective method for watermarking DNNs by employing a backdoor. They stated that watermarking a machine learning model requires three essential steps: generating the secret marking key, embedding the watermark into the model, and verifying whether a watermark is present in a model. They utilized multiple trigger sets and generated a random bit string for each, inserting them into the samples to create a backdoor module. In the verification step, they checked whether the predicted labels match the watermarked labels. The quality of their framework was evaluated in terms of non-trivial ownership, resistance to removal, and functionality preservation.
Guo et al. \cite{guo2018watermarking} proposed a method to watermark DNNs by adding the author's signature into a portion of the training dataset and assigning those samples different labels.
In the watermark detection and verification stage, they ran models on samples both with and without the signature. If the watermarked model classifies original images correctly and classifies images carrying the n-bit string into the mapped classes, the model is unquestionably their own watermarked model. They used different evaluation criteria such as effectiveness, fidelity, payload, and false-positive rate. They also claimed that their technique can be applied to multiple datasets and DNN architectures and that their model is robust against ghost-signature and tampering attacks.
Huili Chen et al. \cite{chen2019blackmarks} introduced an approach to watermark a pre-trained model in the black-box scenario, where only API access is available. Their approach has two main steps: watermark embedding and watermark extraction. The author's signature is a binary string whose bits are independent of each other, and the embedding step consists of two main actions, namely watermark-key generation and fine-tuning. In the extraction step, the owner queries the DNN with the watermark keys and decodes the signature from the output results. The decoded mark and the actual secret key are then compared to determine authorship.
Rouhani et al. \cite{rouhani2018deepsigns} proposed a framework to insert a digital watermark in a trained deep learning model. They also introduced reliability and integrity as new requirements of a watermarking framework. For watermarking a DNN, they introduced two approaches: 1) selecting specific target classes and a subset of their training data, and 2) watermarking the output layer by generating a set of unique random input samples and fine-tuning the target model. Their experimental results showed that their approach satisfied all the watermarking requirements and can resist model pruning, fine-tuning, and watermark-overwriting.
\afterpage{%
\clearpage
\thispagestyle{empty}
\begin{landscape}
\begin{table}[t]
\centering
\caption{Some of the current deep neural network watermarking methods.}
\label{lit_table}
\scriptsize
\begin{tabular}{|p{2.2cm}|p{5cm}p{5.2cm}p{5cm}p{1.1cm}p{1.1cm}p{1cm}|}
\hline
& & & & Access & Access & Access \\
Category & Algorithm & Robustness against & Evaluation metric & Model & Model & Training \\
& & & & Architecture & Parameters & Data \\\hline
Watermarking the training data
& Turning your weakness into a strength: Watermarking deep neural networks by backdooring \cite{adi2018turning} & Model fine-tuning & Model accuracy (0-1) & Not \newline Applicable & Black-box & Applicable \\ \cline{2-7}
& Watermarking deep neural networks for embedded systems \cite{guo2018watermarking} & Model fine-tuning & Effectiveness, Fidelity and payload with regard to embedding watermarks, and false positive rate with regard to decoding watermarks. & Not \newline Applicable & Black-box & Applicable \\ \cline{2-7}
& BlackMarks: Blackbox Multibit Watermarking for Deep Neural Networks \cite{chen2019blackmarks} & Brute-force, parameter pruning & Accuracy, BER, Detection Success, Overwriting attacks & Applicable & White-box & Applicable \\ \cline{2-7}
& Deepsigns: A generic watermarking framework for ip protection of deep learning models \cite{rouhani2018deepsigns} & Model fine-tuning, Parameter pruning, Watermark overwriting, lossy compression, cropping, resizing, & Accuracy of Marked and Baseline Model & Applicable & Black-box, White-box & Applicable \\ \cline{2-7}
& Embedding watermarks into deep neural networks \cite{uchida2017embedding} & Model compression, fine-tuning and distilling & Test error (\%) and embedding loss ER(w) with and without embedding, including under fine-tuning and distilling & Not \newline Applicable & Black-box & Applicable \\ \cline{2-7}
& Black-Box Watermarking for Generative Adversarial Networks\cite{skripniuk2020black} & Deepfakes and responsibility tracking of GAN misuse, Backdoor attacks, perturbation attacks & Thresholding on the bitwise accuracy, FID comparisons & Not \newline Applicable & Black-box & Applicable \\ \cline{2-7}
& Watermarking Deep Neural Networks in Image Processing\cite{quan2020watermarking} & Compression attacks, Model fine-tuning & PSNR (dB)/WPSNR & Not \newline Applicable & Black-box & Applicable \\ \cline{2-7}
& Evolutionary Trigger Set Generation for DNN Black-Box Watermarking \cite{guo2019evolutionary} & Fine-tune attacks & The Key and Logo trigger pattern on different datasets & Not \newline Applicable & Black-box & Applicable \\ \cline{2-7}
& Entangled Watermarks as a Defense against Model Extraction\cite{jia2020entangled} & Retraining-based extraction attacks & validation accuracy and watermark success rates(based on cross-entropy of watermarks with target class) & Applicable & White-box & Applicable \\ \cline{2-7}
& Training DNN Model with Secret Key for Model Protection \cite{aprilpyone2020training} & Brute-force and Fine-tune attacks & Image classification experiments with a batch size of 128 and live augmentation & Not \newline Applicable & Black-box & Not \newline Applicable \\ \cline{2-7}
& Piracy Resistant Watermarks for Deep Neural Networks \cite{li2019piracy} & Piracy, corruption, takeover & Normal classification accuracy and watermark accuracies when an adversary tries to embed a pirate watermark into the owner's model & Not \newline Applicable & Black-box & Not \newline Applicable \\ \hline
Watermarking the training data and
& Protecting intellectual property of deep neural networks with watermarking \cite{zhang2018protecting} & Brute-force attacks,model inversion attack, counter-watermark attacks & Testing and watermarking accuracy based on different pruning rates & Applicable & Black-box, White-box & Applicable \\ \cline{2-7}
NN's parameters & Digital watermarking for deep neural networks \cite{nagai2018digital} & Distillation attack, parameter pruning, model fine-tuning, lossy compression, cropping, resizing & Test error, Embedding loss, Bit error rate & Applicable & Black-box & Applicable \\ \cline{2-7}
& Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks \cite{fan2019rethinking} & Network modifications, ambiguity attacks, fine-tuning, model pruning & Detection/Classification accuracy (in \%) of different passport networks where BN = batch normalization and GN = group normalization & Applicable & Black-box, White-box & Applicable \\ \cline{2-7}
& DeepStego: Protecting the Intellectual Property of Deep Neural Networks by Steganography \cite{zhang2018protecting} & Brute-force attacks, model inversion attack, counter-watermark attacks & Testing and watermarking accuracy based on different pruning rates & Applicable & Black-box, White-box & Applicable \\ \hline
Watermarking NN's parameters
& DeepMarks: A Secure Fingerprinting Framework for Digital Rights Management of Deep Learning Models\cite{chen2019deepmarks} & model fine-tuning, parameter pruning, fingerprint collusion, and fingerprint overwriting attacks & BIBD AND-ACC codebook that accommodates users, Fine-tune without and with fingerprint & Applicable & White-box & Applicable \\ \cline{2-7}
& Robust Watermarking of Neural Network with Exponential Weighting \cite{namba2019robust} & Query modification & Test accuracy of models without watermarks and models watermarked by existing and proposed methods under watermark invalidation on four datasets & Not \newline Applicable & Black-box & Applicable \\ \cline{2-7}
& Adversarial frontier stitching for remote neural network watermarking \cite{le2020adversarial} & Model compression (via both pruning and singular value decomposition) and overwriting via fine-tuning & Accuracy with respect to different pruning rates & Applicable & White-box & Applicable \\ \cline{2-7}
& Robust and Undetectable White-Box Watermarks for Deep Neural Networks \cite{wang2019robust} & Model fine-tuning , parameter pruning, watermarkoverwriting, property inference attack & Accuracy Confidence Intervals and Embedding Loss & Applicable & White-box & Applicable \\ \hline
Watermarking trained model's output and NN's parameters
& Deep Neural Network Fingerprinting by Conferrable Adversarial Examples \cite{lukas2019deep} & Distillation attacks, fine-tuning, ensemble attacks, adversarial training, and stronger adaptive attacks & Fingerprint accuracy and fingerprint retention (verification accuracy) & Not \newline Applicable & Black-box & Applicable \\ \hline
Watermarking trained model's output
& DAWN: Dynamic Adversarial Watermarking of Neural Networks \cite{szyller2019dawn} & Model extraction attacks (IP theft, KnockOff), poisoning & Accuracy with respect to different epochs & Not \newline Applicable & Black-box & Not \newline Applicable \\ \cline{2-7}
& Watermarking the outputs of structured prediction with an application in statistical machine translation \cite{venugopal2011watermarking} & Local editing operations & Baseline method, rank interpolation, cost interpolation, BLEU loss & Applicable & White-box & Not \newline Applicable \\ \hline
\end{tabular}
\end{table}
\end{landscape}
\clearpage
}
\subsection{Watermarking neural network's parameters}
This category of watermarking approaches focuses on the structure of DNNs and modifies the parameters of a specific layer in a neural network; such approaches require white-box access to the network. Nagai et al. \cite{nagai2018digital} introduced a digital watermarking technology for authorship authentication of DNNs. They embedded a watermark into the model in three different situations: training, fine-tuning, and distillation. They formulated watermarking as embedding a T-bit vector as a secret key in one or more layers of a neural network. The secret key is generated in three different ways: direct, difference, and random; the main difference between these is how parameters and layers are chosen for modification. Using the secret key, they could embed and detect a watermark in a DNN.
According to Huili Chen et al. \cite{chen2019deepmarks}, the two main requirements for copyright protection techniques are ownership proof and tracking of individual users. Watermarking and fingerprinting are usually proper solutions for copyright protection; however, watermarking satisfies only the first requirement, whereas fingerprinting can address both simultaneously. They proposed a fingerprinting framework named \textit{DeepMarks} that generates a binary code vector for each user and embeds it in the parameters of one or more layers. Their approach has two steps: 1) generating a unique fingerprint for each user, and 2) inserting the user's signature in selected layers of a pre-trained model using a secret matrix. For fingerprint extraction, they extracted the code vector from the model and compared it to the codebook columns. They claimed that \textit{DeepMarks} is robust against parameter pruning, model fine-tuning, fingerprint collusion attacks, and fingerprint overwriting.
\subsection{Watermarking trained model's output}
This category includes methods that focus on the output of a trained model instead of modifying the training data or the neural network parameters. Sebastian Szyller et al. \cite{szyller2019dawn} stated that existing watermark techniques focus on marking the models, which makes them vulnerable to model extraction attacks. Their method concentrates on watermarking the output instead of the input: it watermarks a subset of API responses rather than the model itself. They introduced DAWN (Dynamic Adversarial Watermarking of Neural Networks) to use watermarking to deter IP theft through model extraction. DAWN dynamically changes the responses for a small subset of queries and incurs a negligible loss of prediction accuracy (for instance, $0.03$--$0.5\%$).
Venugopal et al. \cite{venugopal2011watermarking} proposed a method to watermark the outputs of machine learning models, especially machine translation, so that they can be distinguished from human-generated output. For watermarking, they used a random hashing operation that produces a fixed-length bit sequence. To detect the watermark, they applied the hashing operation to the outputs and looked for the bit sequences. The authors claimed that their method is robust and uniformly distributed and can distinguish machine-generated from human-generated data. Their results show that watermarking the outputs achieves high recall with minimal quality degradation and that the producer can be identified.
\subsection{Attacks on Watermarking approaches}
The primary goal of watermarking a DNN is to protect pre-trained models against intellectual property theft and copyright infringement. However, current algorithms suffer from some potential threats, and many attacks have been designed to reveal the weaknesses of watermarking algorithms. In this section, we review some of the notable works proposed in this area.
Tianhao Wang et al. \cite{wang2019attacks} proposed an attack on watermarked DNNs. Their attack specifically targets watermarking approaches that modify the weights of the watermarked model, such as the method proposed by Uchida et al. \cite{uchida2017embedding}, and is based on the variance of the model's parameter distribution: they observed that the standard deviation of the weights increases significantly during watermark embedding. They proposed two different approaches to remove the watermark: 1) overwriting, which uses a single watermark and matrix over several epochs of training, and 2) multi-embedding, which uses different matrices and watermarks in each step of learning. L2 regularization is then utilized to deflate the variance of the weights so that it matches that of a non-watermarked model.
Shafieinejad et al. \cite{shafieinejad2019robustness} proposed three different attacks on backdoor-based watermarking: a black-box, a white-box, and a property inference attack. In the black-box attack, they queried the watermarked model and used the queries' outputs as labeled data. Their white-box attack is inspired by fine-pruning techniques and has two components: regularization and fine-tuning. In the property inference attack, the authors tried to detect whether a trained model is watermarked with backdoor approaches by extracting a feature vector from the model and parts of the training data. The experimental results show that these three attacks can entirely remove backdoor-based watermarks.
Ryota Namba et al. \cite{namba2019robust} presented a novel attack against query-modification watermarking that changes the training data and finds the trigger set used for watermark validation. Their attack has two steps: 1) key sample (trigger set) detection by measuring the changes before and after applying an autoencoder, and 2) query modification to invalidate the watermark. The authors also proposed a new watermarking method called \textit{exponential weighting}, which is robust against their attack. The proposed method recognizes the model parameters that significantly affect prediction and increases their weight values exponentially. Finally, they demonstrated that their defense algorithm withstands several different attacks, especially query and model modification.
\section{The Proposed Method} \label{ProposedMethod}
Text processing is one of the most common tasks in machine learning, with many applications in language translation, sentiment analysis, and spam filtering. Before DNNs, many text processing methods tended to get stuck in local optima; nowadays, DNNs significantly improve the performance of most text processing tasks. For instance, DNN-based text classification plays a critical role in understanding and analyzing information. Since the trained model in text processing is valuable, protecting it has become a vital task for industry and researchers alike. Hence, this paper proposes a framework for securely watermarking a textual DNN model.
The three main components of the proposed method are watermark generation, watermark embedding, and watermark verification. In the watermark generation step, the content of selected documents is changed and assigned a new label. In the watermark embedding step, the watermarked documents are embedded in a trained model. These textual documents are called the trigger set and are used in the watermark verification step, where the model's ownership is examined using the trigger set generated in the first step. Figure \ref{fig:watermarkgen_EV} and the following steps show the main workflow of the proposed DNN watermarking framework. The following sections describe each step in detail.
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{img/Text_WM_EV.png}
\caption{The workflow of the proposed DNN watermarking framework for textual data.}
\label{fig:watermarkgen_EV}
\end{figure}
\begin{itemize}
\item Step 1: Randomly select $B$ samples per class from the training data, and remove the stop words from them.
\item Step 2: Calculate the TF-IDF score for each word in all documents.
\item Step 3: For each selected document, randomly select one document from another class to exchange words with and produce a watermark record.
\item Step 4: Select the $K$ words with the lowest TF-IDF scores from both documents.
\item Step 5: Exchange the selected words and swap the labels of two documents.
\item Step 6: Insert the modified documents into the trigger set.
\item Step 7: Repeat Steps 3--6 until all selected documents have been processed.
\item Step 8: Combine the existing training set with the generated trigger set to form new training data.
\item Step 9: Train the DNN model with the new training data to achieve the proposed watermarked model.
\end{itemize}
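The core of Steps 3--5 can be sketched in Python (a minimal sketch assuming the TF-IDF scores of Step 2 have already been computed; the helper name and interface are hypothetical, not from the paper's implementation):

```python
def make_trigger_pair(doc_a, label_a, doc_b, label_b, scores, k):
    """Swap the k lowest-TF-IDF words between two documents from
    different classes and exchange their labels (Steps 3-5).

    doc_a, doc_b: token lists; scores: dict mapping token -> TF-IDF
    score; k: number of words to exchange. Hypothetical helper.
    """
    # Indices of each document's tokens, sorted by ascending TF-IDF score.
    a_idx = sorted(range(len(doc_a)), key=lambda i: scores[doc_a[i]])
    b_idx = sorted(range(len(doc_b)), key=lambda i: scores[doc_b[i]])
    a, b = list(doc_a), list(doc_b)
    # Exchange the k least-important words in place.
    for ia, ib in zip(a_idx[:k], b_idx[:k]):
        a[ia], b[ib] = b[ib], a[ia]
    # Swap the labels so the watermarked model must memorize the pair.
    return (a, label_b), (b, label_a)
```

Repeating this over all selected documents (Step 7) yields the trigger set, which is then appended to the training data (Steps 8--9).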
\begin{figure*}[h]
\centering
\includegraphics[width=0.99\textwidth]{img/Text_WM.png}
\caption{Steps of proposed watermark generation framework for textual data.}
\label{fig:watermarkgen}
\end{figure*}
\subsection{Watermark Generation}
The proposed algorithm in this paper generates a unique watermark to represent the owner's signature. It securely watermarks the DNN model by utilizing an effective score called TF-IDF, and the resulting watermark is robust against several important attacks, such as reverse engineering. TF-IDF is a numerical statistic designed to rank essential words in a document based on their frequency. This score is a combination of Term Frequency (TF) \cite{luhn1957statistical} and Inverse Document Frequency (IDF) \cite{sparck1972statistical}. The TF-IDF score of a word $w$ in document $D$ can be calculated as below:
\begin{equation}
S_{w,D} = tf_{w,D} \times \log \frac{N}{df_w},
\end{equation}
where $tf_{w,D}$ is the frequency of $w$ in $D$, $df_{w}$ is the number of documents containing $w$, and $N$ is the total number of documents. Using this score, we can rank all words in a document by their importance.
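For example, the score can be computed directly (a minimal sketch using only the standard library; the function name and toy corpus are illustrative, not from the paper):

```python
import math
from collections import Counter

def tfidf_scores(doc, corpus):
    """Return S_{w,D} = tf_{w,D} * log(N / df_w) for each word w in doc."""
    n = len(corpus)                      # N: total number of documents
    tf = Counter(doc)                    # tf_{w,D}: frequency of w in D
    return {w: f * math.log(n / sum(1 for d in corpus if w in d))
            for w, f in tf.items()}

corpus = [["free", "offer", "prize"],
          ["meeting", "agenda", "free"],
          ["offer", "expires", "today"]]
scores = tfidf_scores(corpus[0], corpus)
# "prize" occurs in only one document, so it outranks the shared
# words "free" and "offer" (df = 2 each), which tie for the lowest score.
```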
Figure \ref{fig:watermarkgen} describes the proposed watermark generation scheme. To generate a watermark, we randomly select $B$ documents for each class $C_i$ from the training set, $S_{C_i} = \{(D_j,Y_j) | Y_j=C_i \}_{j=1}^B \in {D}_{training}$. To create a fair and balanced trigger set, the number of samples selected from each class is equal. Then, we normalize the words by lowercasing them and removing punctuation and stop words. We calculate the TF-IDF score of each word, $w_m$, in those documents and sort all the words by their scores.
Each document can be represented by its sorted words, $D_j = \{w_1, w_2, ..., w_m, w_{m+1},..., w_{n}\}$ where $n$ is the length of $D_j$ and TF-IDF$(w_m) \leq $ TF-IDF$(w_{m+1})$.
For each document, $(D_j, Y_j) \in S_{C_i}$, we select $K$ words with the lowest TF-IDF values, $\{w_1, w_2, ..., w_K\} \in D_j$.
We randomly choose a sample from another class, $(D'_j,Y'_j) \in S_{C_{i'}}$, and exchange their lowest-scored words, $D_j = \{w'_1, w'_2, ..., w'_{K-1}, w'_K, w_{K+1}, ..., w_{n}\}$ and $D'_j = \{w_1, w_2, ..., w_{K-1}, w_K, w'_{K+1}, ..., w'_{n'}\}$. Finally, we exchange the labels of the two documents, $Y_j = C_{i'}$ and $Y'_j = C_i$. These steps create watermarks consisting of a set of modified documents with incorrect labels assigned to them. This set is called the trigger set, $T = \{(D_j, Y'_j) , (D'_j, Y_j)\}_{j=1}^{B}$. Figure \ref{fig:watermarkgen_example} shows an original and a watermarked document produced by the proposed method; the modified version can be used in the trigger set at the embedding stage. In this example, the 16 words with the lowest scores are selected from the original document and randomly replaced with the lowest-scored words extracted from a different document.
\begin{figure*}[h]
\centering
\begin{center}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=0.99\textwidth]{img/original.png}
\caption{Original text}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=0.99\textwidth]{img/watermarked.png}
\caption{Modified text as a sample of trigger set. }
\end{subfigure}
\end{center}
\caption{An example of watermark generation: the 16 least important words are selected from the original text (a) and randomly replaced with the 16 least important words of another document to generate a trigger set sample (b).}
\label{fig:watermarkgen_example}
\end{figure*}
\subsection{Watermark Embedding}
Embedding a watermark into a DNN model can be done in one of three phases: 1) training, 2) fine-tuning, or 3) distillation. We embed the generated watermark data into the DNN model during the training phase of the proposed framework. We append the trigger set samples, i.e., the modified documents, to the training data and train the DNN model so that it memorizes the incorrectly assigned labels of these samples. During training, the DNN learns the correct labels of ordinary samples while memorizing the watermark examples; therefore, the watermarks become embedded in the newly trained model.
Algorithm \ref{WGE_algorithm} shows the pseudocode of the proposed DNN watermark generation and embedding approach.
After inserting the watermark data into the DNN model, we must ensure that the model's performance does not decrease and that the watermark data are embedded correctly. In Section \ref{ExperimentResults}, we define experiments and evaluation metrics to show that these expectations are satisfied.
\begin{algorithm}[t]
\footnotesize
\SetAlgoLined
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{$D = \{D_j,Y_j\}^N_{j=1}$: Original Training Set with $N$ documents\;}
\Output{$M$: The trained DNN model \;
$\space$ $\space$ $\space$ $\space$ $\space$ $\space$ $\space$ $\space$ $\space$ $\space$ $T$: Trigger Set}
$T = \{\}$\;
\For{each class, $C_i$}{
$S_{C_i}=\{$select random $(D_j,Y_j)\in D \mid Y_j=C_i\}^B_{j=1}$\;
\For{each sample $(D_j,Y_j) \in S_{C_i} $}{
Normalize $D_j$\;
Calculate TF-IDF score for words in $D_j$\;
Sort words of $D_j$ ascending, $\{w_m\}_{m=1}^n$ where $n$ is the length of $D_j$\;
Select randomly $(D'_j,Y'_j) \in D$ where $Y'_j\ne Y_j$\;
Normalize $D'_j$\;
Calculate TF-IDF score for words in $D'_j$\;
Sort words of $D'_j$ ascending, $\{w'_m\}_{m=1}^{n'}$ where $n'$ is the length of $D'_j$\;
\For{$k = 1$ to $K$}{
$w_k \Longleftrightarrow w'_k$
}
$D$ = $D - \{(D_j, Y_j), (D'_j, Y'_j)\}$\;
$T$ = $T \cup \{(D_j, Y'_j), (D'_j, Y_j)\}$\;
}
}
$M$ = Train ($D \cup T$)\;
\Return{$M$ , $T$}
\caption{Watermark Generation and Embedding}\label{WGE_algorithm}
\end{algorithm}
\subsection{Watermark Verification}
In watermark verification, the ownership of a trained model is verified. If an adversary creates a surrogate model without the owner's permission and provides an online API to serve other users, we need a procedure to prove that this model is a surrogate and to verify its ownership. To verify ownership, we send the watermarked documents to the model. If the predicted labels for the watermarked documents match the expected (changed) labels, it is verified that the model is derived from the owner's model. It should be noted that the remote surrogate model may have been modified to remove the watermark, so all documents in the trigger set must be sent to the remote API. We must also define a threshold, $\theta$, for ownership verification.
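The verification test above can be sketched as follows (a minimal sketch; \texttt{predict} stands for any wrapper around the suspect model's API, and the interface is hypothetical):

```python
def verify_ownership(predict, trigger_set, theta):
    """Query the suspect model with every trigger document and compare
    the fraction of matching (intentionally changed) labels to theta.

    predict: callable mapping a document to a predicted label, e.g. a
    wrapper around a remote API; trigger_set: list of (document,
    expected_label) pairs saved at watermark-generation time.
    """
    hits = sum(1 for doc, label in trigger_set if predict(doc) == label)
    return hits / len(trigger_set) >= theta
```

A model that was not trained on the trigger set should classify most trigger documents by their true content, so its trigger-set accuracy stays far below $\theta$.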
\section{Experiment Results}\label{ExperimentResults}
This section shows the results of the proposed watermarking algorithm for textual DNN models. The datasets used for analyses and the obtained result in watermark embedding and verification are described in the following.
\subsection{Dataset}
To evaluate the proposed watermarking approach, we use two well-known text-processing datasets, each for a different problem:
\begin{itemize}
\item \textit{IMDB users' reviews}: Our first dataset is the IMDB users' reviews dataset, well known from the Kaggle competition. It has two classes indicating the polarity of each user's comment and contains 25000 training samples and 25000 test samples. It is a standard benchmark for sentiment analysis problems.
\item \textit{HamSpam}: HamSpam is a well-known dataset in the spam detection area and one of the most widely used datasets for this task. It consists of 5728 email messages categorized into two classes, \textit{Ham} and \textit{Spam}, with 4360 \textit{Ham} samples and 1368 \textit{Spam} samples.
\end{itemize}
Both datasets are part of the Kaggle competition, and there are many methods developed based on these two corpora. Table \ref{db_tbl} summarizes the characteristics of selected real-world datasets.
\begin{table}[h]
\centering
\small
\caption{Summary of data sets used in experiments}
\label{db_tbl}
\begin{tabular}{|c | c | c | c|}
\cline{3-4}
\multicolumn{2}{c|}{$ $} & \multirow{1}{*}{\textbf{IMDB}} & \multirow{1}{*}{\textbf{HamSpam}} \\
\hline
\multicolumn{2}{|c|}{Problem} & Sentiment Analysis & Spam Detection\\ \hline
\multicolumn{2}{|c|}{Document length} & 231 & 19 \\\hline
Number of & train set & 24750 & 4317 \\\cline{2-4}
positive & test set & 250 & 43 \\ \cline{2-4}
samples & trigger set & 100 & 50 \\\hline
Number of & train set & 24750 & 1352 \\\cline{2-4}
negative & test set & 250 & 13 \\\cline{2-4}
samples & trigger set & 100 & 50 \\\hline
\end{tabular}
\end{table}
\subsection{Evaluation Metrics} \label{Evaluation_Metrics}
A set of evaluation criteria is widely used in the literature \cite{rouhani2018deepsigns,guo2018watermarking,quan2020watermarking} to show the effectiveness of a robust DNN watermarking scheme:
\begin{enumerate}
\item \textit{Fidelity:} The performance of the target model should not be noticeably decreased as a result of watermark embedding.
\begin{equation}
\mu(M(x; \theta^{*})) \approx \mu(M(x; \theta_0)) \quad \text{s.t.} \quad \forall x \in X,
\end{equation}
where $M(x; \theta_0)$ and $M(x; \theta^{*})$ denote the original and watermarked DNN models, respectively. $X$ is the set of documents used to train the original model, and $\mu(\cdot)$ denotes any performance metric such as accuracy, validation loss, or F1 score.
\item \textit{Integrity:} The ownership of the unmarked models must not be falsely claimed, i.e., the false alarm rate of watermark extraction should be minimum.
\item \textit{Credibility:} The false negative rate of detecting embedded watermarks should be minimum. This is an essential requirement because the model owner needs to detect any misuse of her model with high probability. The watermarks can be effectively detected using the trigger set.
\begin{equation}
\forall x_t \in T : A(x_t,\theta')=M(x_t,\theta^{*}) \iff A = M , \theta'= \theta^{*}
\end{equation}
where $A(\cdot,\theta')$ is any useful DNN model for the same task and $M(\cdot,\theta^{*})$ is the watermarked model. If the output of $A(\cdot,\theta')$ is exactly the same as the output of $M(\cdot,\theta^{*})$ on the trigger set, then the two models are the same and the claim of ownership is valid. Otherwise, a claim of ownership would be fraudulent.
\item \textit{Robustness:} Embedded watermark should be extracted after pruning, fine-tuning, and other model modifications.
\begin{equation}
\mu(M(T; \theta^{*}+\varepsilon)) \approx \mu(M(T; \theta^{*} )),
\end{equation}
where $T$ denotes the trigger set and $\varepsilon$ is a small perturbation on the watermarked model's parameters ($\theta^*$).
\item \textit{Efficiency:} The computational complexity of the embedding and extraction of the watermark should be insignificant.
\item \textit{Security:} The robustness of the watermarking algorithm against attacks such as brute-force is essential. Leaving evidence in the targeted neural network can result in detecting or removing the watermark by a malicious actor.
\end{enumerate}
\subsection{Results}
Since the proposed method is based on word swapping, the word-selection strategy plays a significant role in the model's performance and robustness. We claim that the watermarked model is more robust and performs better when we exchange the words with the lowest TF-IDF scores between the two documents of a trigger pair. To examine this hypothesis, we evaluate the proposed model under the following two strategies for exchanging the documents' words in the watermark generation stage.
\begin{table}[t]
\centering
\small
\caption{Fidelity score for watermarked and non-watermarked models.}
\label{table:fidelity}
\begin{tabular}{|c|lll|}\hline
& Accuracy & ASC & DES \\ \cline{2-4}
\textbf{IMDB} & Original Model & 93.5\% & 93.5\% \\
& Watermarked Model & 92.02\% & 91.8\% \\ \hline \hline
& Accuracy & ASC & DES \\\cline{2-4}
\textbf{SpamHam} & Original Model & 98.3\% & 98.3\% \\
& Watermarked Model& 97.5\% & 97.8\% \\\hline
\end{tabular}
\end{table}
\begin{itemize}
\item Selecting the least important words (ASC): we sort the words of both documents in ascending order of TF-IDF score and exchange the top $K=80$ words with each other.
\item Picking the most important words (DES): we sort the words of each document in descending order of TF-IDF score and exchange the top $K=80$ (i.e., most significant) words.
\end{itemize}
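The two strategies differ only in the sort order applied to the TF-IDF scores; a small sketch (names are ours):

```python
def select_swap_words(scores, k, strategy="ASC"):
    """Rank a document's words by TF-IDF and return the K words to exchange.
    ASC picks the least important words, DES the most important ones.
    `scores` maps each word to its TF-IDF score; ties break alphabetically."""
    ranked = sorted(scores, key=lambda w: (scores[w], w),
                    reverse=(strategy == "DES"))
    return ranked[:k]
```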
We selected a different number of samples as a trigger set for HamSpam and IMDB datasets based on their sizes: 100 samples from HamSpam (B=50) and 200 samples from IMDB (B=100).
\begin{figure}[t]
\begin{center}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=0.99\textwidth]{img/TrainingLoss_IMDB80.png} \caption{Training loss}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=0.99\textwidth]{img/validationLoss_IMDB80.png}
\caption{Validation Loss}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=0.99\textwidth]{img/Accuracy_IMDB80.png}
\caption{Accuracy}
\end{subfigure}
\end{center}
\caption{Comparing Training loss, validation loss and accuracy of original and watermarked models considering IMDB dataset.}
\label{Comparing_training_validation_accuracy_IMDB}
\end{figure}
\begin{table}[ht]
\centering
\footnotesize
\caption{The performance of original DNN model on \textit{IMDB} dataset in terms of training loss, validation loss, accuracy, precision, recall, and F1.}
\label{table:original_IMDB}
\begin{tabular}{|p{1cm}|p{1.8cm}p{2cm}p{1cm}p{1cm}p{1cm}p{1cm}|}
\hline
\multirow{1}{*}{epoch} & Training loss & Validation loss& Accuracy & Precision & Recall & F1 \\
\hline
1 & 0.406 & 0.338 & 0.852 & 0.82 & 0.892 & 0.855 \\ \rowcolor[HTML]{C0C0C0} \hline
2 & 0.379 & 0.324 & 0.855 & 0.817 & 0.906 & 0.859 \\
3 & 0.302 & 0.252 & 0.898 & 0.873 & 0.927 & 0.899 \\ \rowcolor[HTML]{C0C0C0}
4 & 0.238 & 0.216 & 0.912 & 0.891 & 0.935 & 0.912 \\
5 & 0.208 & 0.212 & 0.914 & 0.895 & 0.933 & 0.913 \\ \rowcolor[HTML]{C0C0C0}
6 & 0.194 & 0.203 & 0.919 & 0.907 & 0.929 & 0.918 \\
7 & 0.199 & 0.197 & 0.923 & 0.92 & 0.922 & 0.921 \\ \rowcolor[HTML]{C0C0C0}
8 & 0.184 & 0.196 & 0.928 & 0.917 & 0.937 & 0.927 \\
9 & 0.156 & 0.195 & 0.928 & 0.928 & 0.926 & 0.927 \\ \rowcolor[HTML]{C0C0C0}
10 & 0.157 & 0.198 & 0.926 & 0.917 & 0.935 & 0.925 \\
11 & 0.132 & 0.199 & 0.927 & 0.918 & 0.935 & 0.926 \\ \rowcolor[HTML]{C0C0C0}
12 & 0.109 & 0.212 & 0.929 & 0.917 & 0.939 & 0.928 \\
13 & 0.095 & 0.208 & 0.929 & 0.919 & 0.938 & 0.928 \\ \rowcolor[HTML]{C0C0C0}
14 & 0.085 & 0.222 & 0.93 & 0.926 & 0.932 & 0.929 \\
15 & 0.062 & 0.239 & 0.93 & 0.923 & 0.935 & 0.929 \\ \rowcolor[HTML]{C0C0C0}
16 & 0.063 & 0.235 & 0.932 & 0.919 & 0.944 & 0.931 \\
17 & 0.062 & 0.232 & 0.932 & 0.929 & 0.932 & 0.931 \\ \rowcolor[HTML]{C0C0C0}
18 & 0.054 & 0.236 & 0.932 & 0.93 & 0.93 & 0.93 \\
19 & 0.047 & 0.236 & 0.929 & 0.92 & 0.936 & 0.928 \\ \rowcolor[HTML]{C0C0C0}
\hline
\end{tabular}
\end{table}
\subsubsection{Fidelity} This metric shows the degree to which embedding a watermark affects the original model's performance. Table \ref{table:fidelity} reports the accuracy of the watermarked and non-watermarked models on the two datasets under the two selection strategies (ascending and descending TF-IDF scores). As the results show, the accuracy of the watermarked model is very close to that of the non-watermarked model, so we can claim that the proposed watermarking method does not impair DNN performance. As Table \ref{table:fidelity} shows, both strategies embed the watermark into the model without decreasing its performance.
Figures \ref{Comparing_training_validation_accuracy_IMDB} and \ref{Comparing_training_validation_accuracy_HamSpam} compare the original model and the ascending-strategy watermarked model in terms of training loss, validation loss, and accuracy.
The proposed watermarking method does not impair accuracy, and it has no negative impact on precision, recall, or F1 score either. Tables \ref{table:original_IMDB}, \ref{table:watermerked_IMDB}, \ref{table:Original_HamSpam} and \ref{table:Watermarked_HamSpam} show the performance of the original and watermarked DNN models on the IMDB and HamSpam datasets, respectively.
\begin{figure}[t]
\begin{center}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=0.99\textwidth]{img/TrainingLoss_SpamHam.png}
\caption{Training loss}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=0.99\textwidth]{img/validationLoss_SpamHam.png}
\caption{Validation Loss}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=0.99\textwidth]{img/Accuracy_SpamHam.png}
\caption{Accuracy}
\end{subfigure}
\end{center}
\caption{Comparing training loss, validation loss and accuracy of original and watermarked models considering HamSpam dataset.}
\label{Comparing_training_validation_accuracy_HamSpam}
\end{figure}
\begin{table}[t]
\centering
\footnotesize
\caption{The performance of watermarked DNN model on \textit{IMDB} dataset in terms of training loss, validation loss, accuracy, precision, recall, and F1.}
\label{table:watermerked_IMDB}
\begin{tabular}{|p{1cm}|p{1.8cm}p{2cm}p{1cm}p{1cm}p{1cm}p{1cm}|}
\hline
\multirow{1}{*}{epoch} & Training loss & Validation loss& Accuracy & Precision & Recall & F1 \\
\hline
1 & 0.372 & 0.326 & 0.858 & 0.859 & 0.858 & 0.858 \\ \rowcolor[HTML]{C0C0C0} \hline
2 & 0.339 & 0.312 & 0.869 & 0.892 & 0.841 & 0.866 \\
3 & 0.273 & 0.23 & 0.911 & 0.914 & 0.908 & 0.911 \\ \rowcolor[HTML]{C0C0C0}
4 & 0.231 & 0.206 & 0.917 & 0.906 & 0.932 & 0.919 \\
5 & 0.168 & 0.199 & 0.923 & 0.923 & 0.924 & 0.924 \\ \rowcolor[HTML]{C0C0C0}
6 & 0.167 & 0.204 & 0.921 & 0.913 & 0.931 & 0.922 \\
7 & 0.137 & 0.214 & 0.921 & 0.924 & 0.918 & 0.921 \\ \rowcolor[HTML]{C0C0C0}
8 & 0.111 & 0.246 & 0.917 & 0.903 & 0.935 & 0.919 \\
9 & 0.075 & 0.282 & 0.911 & 0.91 & 0.913 & 0.911 \\ \rowcolor[HTML]{C0C0C0}
10 & 0.063 & 0.314 & 0.911 & 0.932 & 0.888 & 0.909 \\
11 & 0.044 & 0.334 & 0.916 & 0.914 & 0.92 & 0.917 \\ \rowcolor[HTML]{C0C0C0}
12 & 0.038 & 0.347 & 0.914 & 0.921 & 0.907 & 0.914 \\
13 & 0.028 & 0.322 & 0.918 & 0.915 & 0.922 & 0.919 \\ \rowcolor[HTML]{C0C0C0}
14 & 0.017 & 0.369 & 0.913 & 0.906 & 0.924 & 0.915 \\
15 & 0.014 & 0.358 & 0.919 & 0.914 & 0.927 & 0.92 \\ \rowcolor[HTML]{C0C0C0}
16 & 0.013 & 0.363 & 0.92 & 0.92 & 0.921 & 0.92 \\
17 & 0.012 & 0.365 & 0.916 & 0.931 & 0.9 & 0.915 \\ \rowcolor[HTML]{C0C0C0}
18 & 0.024 & 0.431 & 0.917 & 0.887 & 0.957 & 0.921 \\
19 & 0.011 & 0.372 & 0.92 & 0.905 & 0.94 & 0.922 \\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\footnotesize
\caption{The performance of original DNN model on \textit{SpamHam} dataset in terms of training loss, validation loss, accuracy, precision, recall, and F1.}
\label{table:Original_HamSpam}
\begin{tabular}{|p{1cm}|p{1.8cm}p{2cm}p{1cm}p{1cm}p{1cm}p{1cm}|}
\hline
\multirow{1}{*}{epoch} & Training loss & Validation loss& Accuracy & Precision & Recall & F1 \\
\rowcolor[HTML]{C0C0C0} \hline
1 & 0.311 & 0.139 & 0.967 & 0.895 & 0.994 & 0.972 \\ \hline
2 & 0.193 & 0.1 & 0.979 & 0.938 & 0.987 & 0.977 \\ \rowcolor[HTML]{C0C0C0}
3 & 0.153 & 0.062 & 0.979 & 0.933 & 0.994 & 0.981 \\
4 & 0.09 & 0.041 & 0.988 & 0.962 & 0.994 & 0.987 \\ \rowcolor[HTML]{C0C0C0}
5 & 0.05 & 0.036 & 0.99 & 0.974 & 0.987 & 0.984 \\
6 & 0.042 & 0.024 & 0.993 & 0.987 & 0.987 & 0.987 \\ \rowcolor[HTML]{C0C0C0}
7 & 0.031 & 0.025 & 0.993 & 0.981 & 0.994 & 0.991 \\
8 & 0.021 & 0.019 & 0.991 & 0.981 & 0.987 & 0.986 \\ \rowcolor[HTML]{C0C0C0}
9 & 0.018 & 0.019 & 0.993 & 0.981 & 0.994 & 0.991 \\ \hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\footnotesize
\caption{The performance of watermarked DNN model on \textit{SpamHam} dataset in terms of training loss, validation loss, accuracy, precision, recall, and F1.}
\label{table:Watermarked_HamSpam}
\begin{tabular}{|p{1cm}|p{1.8cm}p{2cm}p{1cm}p{1cm}p{1cm}p{1cm}|}
\hline
\multirow{1}{*}{epoch} & Training loss & Validation loss& Accuracy & Precision & Recall & F1 \\
\rowcolor[HTML]{C0C0C0} \hline
1 & 0.282 & 0.112 & 0.976 & 0.976 & 0.976 & 0.972 \\ \hline
2 & 0.181 & 0.096 & 0.977 & 0.977 & 0.977 & 0.977 \\ \rowcolor[HTML]{C0C0C0}
3 & 0.172 & 0.081 & 0.974 & 0.974 & 0.974 & 0.981 \\
4 & 0.097 & 0.072 & 0.972 & 0.972 & 0.972 & 0.987 \\ \rowcolor[HTML]{C0C0C0}
5 & 0.057 & 0.066 & 0.976 & 0.976 & 0.976 & 0.984 \\
6 & 0.042 & 0.086 & 0.972 & 0.972 & 0.972 & 0.987 \\ \rowcolor[HTML]{C0C0C0}
7 & 0.025 & 0.084 & 0.976 & 0.976 & 0.976 & 0.991 \\
8 & 0.012 & 0.085 & 0.976 & 0.976 & 0.976 & 0.986 \\ \rowcolor[HTML]{C0C0C0}
9 & 0.013 & 0.088 & 0.976 & 0.976 & 0.976 & 0.991 \\ \hline
\end{tabular}
\end{table}
\subsubsection{Credibility}
This metric illustrates how well the trigger set can distinguish the watermarked model from the original one. In other words, credibility measures how effectively the watermark can be extracted. To calculate this measure, we query the model with the documents in the trigger set. Table \ref{table:reliability} shows the accuracy of both the watermarked and non-watermarked models in predicting the classes of the trigger-set items.
The watermark cannot be extracted from the watermarked model when we exchange words with high TF-IDF scores, because the latent features extracted by the DNN are correlated with these words; words with high TF-IDF scores play an undeniable role in feature extraction. Therefore, we swap the words with low TF-IDF values, and Table \ref{table:reliability} presents the promising results.
\begin{table}[t]
\centering
\small
\caption{Credibility score for watermarked and non watermarked model.}
\label{table:reliability}
\begin{tabular}{|c|lll|}
\hline
& Accuracy & ASC & DES \\\cline{2-4}
\textbf{IMDB} & Original Model & 10.5\% & 8.4\% \\
& Watermarked Model & 98.0\% & 54.2\% \\ \hline \hline
& Accuracy & ASC & DES \\\cline{2-4}
\textbf{SpamHam} & Original Model & 12.8\% & 10.6\% \\
& Watermarked Model& 88.3\% & 53.7\% \\\hline
\end{tabular}
\end{table}
As Table \ref{table:reliability} shows, we embed a watermark into the model and extract it accurately in the Ascending strategy. In the IMDB dataset, the accuracy of the original model on the trigger set is 10.5$\%$, while the accuracy of the watermarked model is 98.0$\%$. The accuracy of the original model and watermarked model on the trigger set for the HamSpam dataset are 12.8$\%$ and 88.3$\%$, respectively. These results indicate that the proposed approach is credible and reliable in watermarking a DNN model.
\subsubsection{Robustness}
We apply parameter pruning to the watermarked model trained on the IMDB dataset to evaluate its robustness. In this stage, pruning is used to sparsify the watermarked model: a threshold specifies the percentage of weights that are replaced by zero. As Figure \ref{Robustness_parameter_pruning} shows, the training loss, validation loss, and accuracy of the watermarked model change only slightly, and the change is not noticeable. Thus, we can claim that our watermarked model is robust against parameter pruning, which does not impair the approach's performance.
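A minimal sketch of the pruning step we assume here, magnitude-based thresholding with NumPy (the paper's actual pruning implementation may differ):

```python
import numpy as np

def prune_weights(weights, sparsity):
    """Magnitude-based pruning: replace the smallest `sparsity` fraction of
    weights (by absolute value) with zero, leaving the rest untouched."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

Robustness is then measured by re-running the trigger-set queries on the pruned model and checking that the watermark is still extracted.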
\begin{figure*}[b]
\begin{center}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=0.99\textwidth]{img/purring_loss_80.png}
\caption{Training and Validation loss}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=0.99\textwidth]{img/purring_accuracy_80.png}
\caption{Accuracy}
\end{subfigure}
\end{center}
\caption{Comparing training loss, validation loss and accuracy of watermarked model after applying parameter pruning. }
\label{Robustness_parameter_pruning}
\end{figure*}
\subsubsection{Efficiency}
The efficiency of the proposed method can be evaluated separately for each phase of our approach. Since watermark generation for a DNN is an offline process, it adds no overhead to the prediction process of the DNN. Watermark extraction is identical for the original and watermarked models and takes the same time as an ordinary query to a DNN model. Therefore, the watermark embedding phase is the most critical one to evaluate.
In this stage, the efficiency of the proposed watermark embedding scheme is evaluated by comparing the execution time of each epoch before and after the embedding procedure. The experiments in this stage are conducted on a machine with an Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz, 16 GB of RAM, and two Nvidia TITAN V GPUs with 12 GB of HBM2 memory. Figure \ref{fig:Execution_time} compares the execution time of the original and watermarked models; as it shows, embedding a watermark into a model only slightly increases the execution time compared to the original model.
\begin{figure}[t]
\begin{center}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=0.99\textwidth]{img/execution_time_IMDB.png}
\caption{IMDB}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=0.99\textwidth]{img/execution_time_SpamHam.png}
\caption{SpamHam}
\end{subfigure}
\end{center}
\caption{Efficiency evaluation of watermark embedding in terms of execution time(s).}
\label{fig:Execution_time}
\end{figure}
\subsubsection{Security}
As explained in Section \ref{Evaluation_Metrics}, the security metric shows how robust a watermarked model is against brute-force attacks. Since the primary focus of this research is textual data, the watermark input space is discrete and effectively infinite. Therefore, the embedded watermarks are secure against brute-force attacks and hard to guess or predict.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{img/Parametersetting.png}
\caption{Model accuracy and watermark extraction accuracy for different values of $K$.}
\label{fig:parametersetting}
\end{figure}
\begin{table}[t]
\centering
\footnotesize
\caption{The performance of the watermarked DNN model on the \textit{IMDB} dataset with different values of the $K$ parameter in terms of training loss, validation loss, accuracy, precision, recall, and F1.}
\label{table:parameter_setting_IMDB}
\begin{tabular}{|p{1cm}|p{1.8cm}p{2cm}p{1cm}p{1cm}p{1cm}p{1cm}|}
\hline
\multirow{1}{*}{K} & Training loss & Validation loss& Accuracy & Precision & Recall & F1 \\
\rowcolor[HTML]{C0C0C0} \hline
50 & 0.014 & 0.396 & 0.92 & 0.938 & 0.893 & 0.915\\ \hline
60 & 0.01 & 0.311 & 0.931 & 0.93 & 0.932 & 0.931\\ \rowcolor[HTML]{C0C0C0}
70 & 0.016 & 0.345 & 0.919 & 0.917 & 0.917 & 0.917\\
80 & 0.011 & 0.372 & 0.92 & 0.905 & 0.94 & 0.922\\ \rowcolor[HTML]{C0C0C0}
90 & 0.011 & 0.417 & 0.918 & 0.905 & 0.931 & 0.926\\
100 & 0.009 & 0.345 & 0.923 & 0.929 & 0.918 & 0.923\\ \rowcolor[HTML]{C0C0C0}
110 & 0.009 & 0.389 & 0.92 & 0.931 & 0.905 & 0.918\\
120 & 0.013 & 0.344 & 0.922 & 0.926 & 0.907 & 0.916\\ \rowcolor[HTML]{C0C0C0}
130 & 0.011 & 0.445 & 0.911 & 0.935 & 0.882 & 0.907\\
\hline
\end{tabular}
\end{table}
In the watermark generation phase, the $K$ words with the lowest TF-IDF scores are selected to generate the trigger set. To examine the effect of this parameter, we vary its value from 50 to 130. Table \ref{table:parameter_setting_IMDB} shows the performance of the watermarked DNN model on the IMDB dataset for different values of $K$ in terms of training loss, validation loss, accuracy, precision, recall, and F1. The accuracy closest to the original model's ($93.5\%$) is obtained at $K=60$. However, watermark extraction accuracy is another important factor in selecting the best value for this parameter. Figure \ref{fig:parametersetting} plots both model accuracy and watermark extraction accuracy for different values of $K$. As the results show, the watermarked model with $K=80$ achieves the best balance between model accuracy and watermark extraction accuracy.
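The parameter sweep can be sketched as follows; \texttt{train\_fn} is a hypothetical routine (ours, not the paper's) that retrains the watermarked model for a given $K$ and returns model accuracy and watermark extraction accuracy:

```python
def sweep_k(train_fn, k_values):
    """Retrain for each candidate K and pick the one with the best
    trade-off, scored here (an assumption) as the sum of model accuracy
    and watermark extraction accuracy."""
    results = {}
    for k in k_values:
        model_acc, wm_acc = train_fn(k)
        results[k] = (model_acc, wm_acc)
    best = max(results, key=lambda k: sum(results[k]))
    return best, results
```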
\section{Conclusion \& Future Works}\label{Conclusion}
Since collecting labeled data and providing powerful hardware to train a DNN model is costly, most scientists prefer to utilize a pre-trained model for their problems. Thus, securing a trained model becomes an essential task. In this paper, we applied the digital watermarking concept to textual DNN models and proposed an approach to protect a textual DNN model against copyright infringement and unauthorized redistribution. The proposed method does not decrease performance on the original task and is robust against well-known attacks such as parameter pruning. As the experimental results demonstrate, the watermark can be extracted from the watermarked model accurately, which means the ownership of a trained model can be verified precisely.
The following directions can be considered as future work for this paper:
\begin{itemize}
\item Analyzing the performance of the approach against all known watermark attacks;
\item Generating the watermark with different methods;
\item Comparing the performance and robustness of the proposed algorithm with existing frameworks;
\item Applying the proposed watermarking to other textual tasks such as phishing detection and machine translation.
\end{itemize}
\bibliographystyle{ieeetr}
\section{Introduction}
Let $G$ be a finite abelian group written additively.
We denote by $\exp(G)$ the {\it exponent} of $G$
that is the least common multiple of the orders of its elements.
Let $r$ be a multiple of $\exp(G)$.
The \emph{generalized Erd\H{o}s--Ginzburg--Ziv constant} $\mathsf{s}_r(G)$
is the smallest integer $s$
such that every sequence of length $s$ over $G$
has a zero-sum subsequence of length $r$.
If $r = \exp(G)$, then $\mathsf{s}(G)=\mathsf{s}_{\exp(G)}(G)$
is the classical Erd\H{o}s--Ginzburg--Ziv constant.
The constants $\mathsf{s}_r(G)$ have been studied extensively,
see for example
\cite{Bitz:2020,Gao:2003,Gao:2014,Gao:2006,Han:2018,Han:2019,He:2016,Sidorenko:2020}.
The following variation of these constants was introduced in \cite{Augspurger:2017}
and further studied in \cite{Berger:2019,Berger:2019b,Hu:2023}.
The \emph{modified Erd\H{o}s--Ginzburg--Ziv constant} $\mathsf{s}_r'(G)$
is the smallest integer $s$
such that every \emph{zero-sum} sequence of length $s$ over $G$
has a zero-sum subsequence of length $r$.
By the definition, $\mathsf{s}_r'(G) \leq \mathsf{s}_r(G)$.
On the other hand, if $g_1,g_2,\ldots,g_s$ is a sequence over $G$
that does not contain a zero-sum subsequence of size $r$,
and $s$ is mutually prime with $\exp(G)$,
then there exists $x \in G$ such that
$g_1+x,g_2+x,\ldots,g_s+x$ is a zero-sum sequence
(see \cite{Augspurger:2017,Hu:2023}).
Thus, $\mathsf{s}_r'(G) \geq \mathsf{s}_r(G) - (\exp(G) - 1)$,
and if $\mathsf{s}_r(G)-1$ is mutually prime with $\exp(G)$,
then $\mathsf{s}_r'(G) = \mathsf{s}_r(G)$.
In this note, we consider the case $\exp(G)=2$, so $G\cong\mathbb{Z}_2^d$.
By the abovementioned argument,
\begin{equation}\label{eq:equal}
\mathsf{s}_r'(\mathbb{Z}_2^d) = \mathsf{s}_r(\mathbb{Z}_2^d) \;\;\;{\rm if}\;
\mathsf{s}_r(\mathbb{Z}_2^d) \;{\rm is\; even},
\end{equation}
and
\begin{equation}\label{eq:ineq}
\mathsf{s}_r(\mathbb{Z}_2^d)-1 \:\leq\: \mathsf{s}_r'(\mathbb{Z}_2^d) \:\leq\: \mathsf{s}_r(\mathbb{Z}_2^d).
\end{equation}
The exact values of generalized Erd\H{o}s--Ginzburg--Ziv constants
$\mathsf{s}_{2k}(\mathbb{Z}_2^d)$ have been found for $d \leq 2k+1$:
\begin{theorem}[\cite{Sidorenko:2020}]\label{th:T1}
\[
\mathsf{s}_{2k}(\mathbb{Z}_2^d) = \begin{cases}
2k+d \;\;\;{\rm for}\;\; d < 2k; \\
4k+1 \;\;\;{\rm for}\;\; d = 2k; \\
4k+2 \;\;\;{\rm for}\;\; d = 2k+1,\;\; k \;{\rm is\; even}; \\
4k+5 \;\;\;{\rm for}\;\; d = 2k+1,\;\; k \;{\rm is\; odd}.
\end{cases}
\]
\end{theorem}
In the present note, we extend this result to the \emph{modified}
Erd\H{o}s--Ginzburg--Ziv constants.
\begin{theorem}\label{th:T2}
Let $d \leq 2k+1$.
Then $\mathsf{s}_{2k}'(\mathbb{Z}_2^d) = \mathsf{s}_{2k}(\mathbb{Z}_2^d)-1$ in the following cases:
\begin{itemize}
\item
$d=2k-1$;
\item
$d=2k-3$, $k$ is even;
\item
$d \leq 2k-5$, $d$ is odd.
\end{itemize}
In all other cases, $\mathsf{s}_{2k}'(\mathbb{Z}_2^d) = \mathsf{s}_{2k}(\mathbb{Z}_2^d)$.
\end{theorem}
\begin{proof}[\bf{Proof}]
We start with the cases where we claim
$\mathsf{s}_{2k}'(\mathbb{Z}_2^d) = \mathsf{s}_{2k}(\mathbb{Z}_2^d)$.
Among them, cases $d < 2k$ with even $d$, and $d=2k+1$ with even $k$
follow from \cref{th:T1} and \cref{eq:equal}.
The other three cases are
$d=2k$, $d=2k+1$ with odd $k$, and $d=2k-3$ with odd $k$.
Since $\mathsf{s}_{2k}'(\mathbb{Z}_2^d) \leq \mathsf{s}_{2k}(\mathbb{Z}_2^d)$,
it is sufficient to construct
a zero-sum sequence of length $\mathsf{s}_{2k}(\mathbb{Z}_2^d)-1$
that does not contain a zero-sum subsequence of length $2k$.
For $d=2k$, we select a sequence of length $4k$ which consists of
$2k-1$ copies of the zero vector,
the $2k$ basis vectors $e_1,e_2,\ldots,e_{2k}$,
and the vector $e_1+e_2+\ldots+e_{2k}$.
For odd $k$ and $d=2k+1,2k-3$,
we select a sequence of length $2d+2$ which consists of
$0,\,e_1,\,e_2,\,\ldots,e_{d-1},\,e_1+e_2+\ldots+e_{d-1},\,
e_d,\,e_d+e_1,\,e_d+e_2,\,\ldots,e_d+e_{d-1},\,e_d+e_1+e_2+\ldots+e_{d-1}$.
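As an illustrative sanity check (ours, not part of the proof), the smallest instances of both constructions can be verified by brute force in Python, encoding vectors of $\mathbb{Z}_2^d$ as integer bitmasks so that group addition becomes XOR:

```python
from itertools import combinations
from functools import reduce

def has_zero_sum_subseq(seq, r):
    """True iff the sequence over Z_2^d (bitmask-encoded) has a
    zero-sum subsequence of length r; addition in Z_2^d is XOR."""
    return any(reduce(lambda a, b: a ^ b, c) == 0
               for c in combinations(seq, r))

# d = 2k with k = 1: one zero vector, e1, e2, and e1 + e2 (length 4k = 4)
seq_even = [0b00, 0b01, 0b10, 0b11]
# odd k = 1, d = 2k + 1 = 3: the length-(2d + 2) construction above
seq_odd = [0, 1, 2, 3, 4, 5, 6, 7]
```

Both sequences are zero-sum, yet neither contains a zero-sum subsequence of length $2k = 2$, as claimed.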
To solve the three cases where we claim
$\mathsf{s}_{2k}'(\mathbb{Z}_2^d) = \mathsf{s}_{2k}(\mathbb{Z}_2^d)-1$,
in the light of \cref{eq:ineq}, it is sufficient to prove that
any zero-sum sequence of length $\mathsf{s}_{2k}(\mathbb{Z}_2^d)-1$ over $\mathbb{Z}_2^d$
contains a zero-sum subsequence of length $2k$.
First consider the case $d=2k-1$.
Let $x_2,x_3,\ldots,x_{4k-1} \in \mathbb{Z}_2^{2k-1}$
where $x_2+x_3+\ldots+x_{4k-1} = 0$.
Set $x_1=x_2$.
As $\mathsf{s}_{2k}(\mathbb{Z}_2^{2k-1}) = 4k-1$,
there is $A\subset\{1,2,\ldots,4k-1\}$ such that $|A|=2k$
and $\sum_{i \in A} x_i = 0$.
If $1 \notin A$, then we have found a zero-sum subsequence
of length $2k$ among $x_2,x_3,\ldots,x_{4k-1}$.
Suppose, $1 \in A$.
If $2 \notin A$, then $(A \backslash \{1\}) \cup \{2\}$
points to a zero-sum subsequence of length $2k$.
Suppose, $1,2 \in A$.
Set $B := (\{1,2,\ldots,4k-1\} \backslash A) \cup \{2\}$.
Then $|B|=2k$ and
\begin{align*}
\sum_{i \in B} x_i & = x_1+x_2+\ldots+x_{4k-1} - \sum_{i \in A} x_i + x_2
\\ & = x_1+x_2 + (x_2+\ldots+x_{4k-1}) - \sum_{i \in A} x_i = x_1+x_2 = 0.
\end{align*}
Finally, let $d$ be odd,
and $d \leq 2k-3$ if $k$ is even,
or $d \leq 2k-5$ if $k$ is odd.
We are going to show that
every zero-sum sequence of length $2k+d-1$ over $\mathbb{Z}_2^d$
contains a zero-sum subsequence of length $2k$.
Let $x_1,x_2,\ldots,x_{2k+d-1}\in\mathbb{Z}_2^d$
where $x_1+x_2+\ldots+x_{2k+d-1}=0$.
By \cref{th:T1},
$\mathsf{s}_{d-1}(\mathbb{Z}_2^d)=2d$ if $d \equiv 1 \;{\rm mod}\; 4$, and
$\mathsf{s}_{d-1}(\mathbb{Z}_2^d)=2d+3$ if $d \equiv 3 \;{\rm mod}\; 4$.
In both cases,
$\mathsf{s}_{d-1}(\mathbb{Z}_2^d) \leq 2k+d-1$.
Thus, there is $A \subset \{1,2,\ldots,2k+d-1\}$ such that
$|A|=d-1$ and $\sum_{i \in A} x_i = 0$.
Set $B := \{1,2,\ldots,2k+d-1\} \backslash A$.
Then $|B|=2k$ and
$\sum_{i \in B} x_i = \sum_{i=1}^{2k+d-1} x_i - \sum_{i \in A} x_i = 0 - 0 = 0$,
so $B$ points to a zero-sum subsequence of length $2k$ within
$x_1,\ldots,x_{2k+d-1}$.
\end{proof}
Samantha Jade releases her new album 'Best Of My Love'
A HOMAGE TO THE DISCO ERA + TWO GLITTERING
DISCO-INSPIRED ORIGINALS
BUY NOW: http://smarturl.it/SJ.BestOfMyLove
Australian pop darling Samantha Jade today releases her third studio album BEST OF MY LOVE through Sony Music Entertainment Australia. BEST OF MY LOVE is a shimmering homage to the disco era, filled with tributes to iconic greats such as Donna Summer, Gloria Gaynor, Cher, Diana Ross and many more. For a limited time, fans can also purchase an autographed copy of Jade's "COLLECTOR'S EDITION" via her web store now, which includes the CD alongside an exclusive full-colour photo book.

As many contemporary artists alongside Samantha Jade can attest, disco kick-started the modern era of dance-based popular music. BEST OF MY LOVE journeys through some of the pinnacle moments in this era, from ABBA's 'Dancing Queen' (1976) to The Bee Gees' 'How Deep Is Your Love' (1977), to Gloria Gaynor's 'I Will Survive' (1978) and Donna Summer's 'Hot Stuff' (1979), as well as Diana Ross' 'Upside Down' and 'I'm Coming Out' (1980). Samantha Jade also pays homage to the ground-breaking songwriters and producers behind the hits, including Bernard Edwards and Nile Rodgers of Chic, Giorgio Moroder, and Clifton Davis. To learn more, watch the BEST OF MY LOVE album trailer here!

BEST OF MY LOVE features brand new original material from Samantha Jade, including 'Roller Skates' and 'Let Me Love You'. 'Roller Skates' (from UK dance trio Knoxa) is an original glittering upbeat pop smash, borrowing influences from tropical house and disco genres. The disco throwback 'Let Me Love You', written by renowned song writing duo DNA Songs, was inspired by Australia's recent Equality Campaign. The album ends on a high with the dancefloor-ready 'Best of My Love (2018 Mix)'.
Samantha comments on the record, "BEST OF MY LOVE was such a fun record to make. I hope it reignites the passion for all disco lovers and helps tell the story of this exciting genre to a whole new audience that may not have experienced the lifestyle first-hand. It's such an honour to pay homage to iconic divas like Donna Summer, Cher and Diana Ross. I've always been inspired by disco, soul, funk, and dance, particularly the female voices that emerged during this time." Recently, Samantha Jade fans were wowed by her incredible Laneway performance set at this year's Sydney Gay & Lesbian Mardi Gras. Fans will have another opportunity to catch Samantha Jade in Perth and Sydney – details below.
1 Best of My Love
2 Never Can Say Goodbye
3 Upside Down
4 Dancing Queen
5 How Deep Is Your Love
6 I'm Coming Out
7 I Feel Love
8 Take Me Home
9 Hot Stuff
10 We Are Family
11 I Will Survive
12 Roller Skates
13 Let Me Love You
14 Best of My Love (2018 Mix)
WATCH THE BEST OF MY LOVE ALBUM TRAILER HERE

SAMANTHA JADE – 'BEST OF MY LOVE' PERFORMANCES + SIGNINGS
FOR MORE INFORMATION, HEAD TO WWW.SAMANTHAJADEOFFICIAL.COM
APRIL 21 – 12PM WESTFIELD HURSTVILLE SYDNEY, NSW
APRIL 26 – 12:00PM GARDEN CITY SHOPPING CENTRE PERTH, WA
APRIL 26 – 6:00PM WESTFIELD WHITFORD CITY PERTH, WA
CONNECT WITH SAMANTHA JADE https://www.samanthajadeofficial.com/
We are so happy to share the news that Larry Mizell Jr. is our new Digital Media Specialist for the Office of Arts & Culture! You may know Larry from his "My Philosophy" column in The Stranger, and his Sunday night hip-hop show on KEXP. In addition, he has worked social media for a number of musicians and projects, directed music videos and more. He's going to bring a great skill set to our Office, and we're thrilled to welcome him to the team.
In this position, Larry will promote and highlight the public benefit of the artistic expression found in the City of Seattle through digital projects. He has been advocating for Seattle arts for years and been instrumental in cultivating community online, founding popular local music forums and blogs. He is probably always listening to and thinking about music—but when not online, writing or DJing, he enjoys food, travel, being near water, and playing with his dog.
The Office of Arts & Culture has some exciting staff updates!
We are re-organizing how the public art team is structured, and shifting some roles and responsibilities. Jason Huff is now working as our Project Management lead, supervising the work of the project managers and working on project direction and development. Deborah Paine is the lead for the Collections Management team and is overseeing collections and conservation.
Calandra Childers is also assuming a newly created Deputy Director position. This position is necessary to meet the extended work we have taken on, and take advantage of the opportunities we have available to us. Calandra is being promoted from Communications Manager into this role, and as such will be taking on policy work and relationship building for the Office.
"Calandra is the right person for this role because she has a comprehensive understanding of the work of the office across teams and she has the ability to juggle and be responsive to a diverse set of internal and external requests," said Randy Engstrom, director.
"I'm thrilled to be stepping into this role – I can't imagine a more exciting place to be working than at the junction of arts and culture and policy. Artists and creative workers give Seattle its vibrancy and we must support their presence in order to support our city," said Childers.
Congrats to all staff members!
Sadly, Seattle lost another art icon this week with the death of Rolon Bert Garner; he was 75. Garner attended the Museum Art School in Portland and was founder and visual arts director for the Seattle Bumbershoot summer arts festival and for many years was arts director of the historic Two Bells Tavern in Seattle. He was also a former curator with the Seattle Art Museum and co-founder of Art Tech. Garner served as a commissioner on the Washington State Arts Commission and worked with the Seattle Arts Commission. He also taught in western Washington and Oregon.
IMAGES: Rolon Bert Garner and Ken Leback, Equality, 1996, granite, bronze, poured concrete. Located at Sturgus Avenue South, east of the 12th Ave South bridge and Pacific Health Hospital, northeastern edge of Beacon Hill.
A cyclist zips through the Burke-Gilman Trail's cavernous "Ebb and Flow" mural, located in Bothell, under 96th Avenue NE.
We are thrilled to learn that our own Kristen Ramirez has been honored with an Americans for the Arts Year in Review award for her Wayne Tunnel project on the Burke-Gilman Trail, completed under the guidance of 4Culture. Ramirez, a Public Art Project Manager in the Office of Arts & Culture, is also an artist. Ramirez activated the Wayne Tunnel with a site responsive mural, Ebb & Flow. Transforming this gateway between communities and natural landmarks with an immersive experience of color and light, Ebb & Flow combines blasts of bright yellow, orange, pink, and purple, representing the flora and the fauna of the region. The tunnel's own architecture is used to make a playful kaleidoscope for trail users to enjoy.
Prior to joining the Office of Arts & Culture, Kristen worked at Cornish College of the Arts in Seattle in a variety of capacities including: Art Faculty, Manager of Summer & External Programs, and Manager of Academic and Community Engagement. Kristen has also taught at the University of Washington, Tacoma Museum of Glass, Pratt Fine Arts Center, Edmonds Community College, and Path with Art, a non-profit that serves adults in recovery. In addition to teaching, Kristen is an artist, whose artwork explores many media, including printmaking, drawing, painting, installation and public art. Her work is often about place, conjuring an affection for disorienting urban/suburban places by appropriating signs and symbols of commerce. Her studio practice takes her increasingly into the public realm through community-based projects and murals. Ramirez earned a BA from UC Santa Cruz, a MA in Education and California Teaching Credential from San Francisco State University, and a MFA in Printmaking from the University of Washington.
Duwamish Revealed is a series of outdoor art installations, performances, community activities and other adventures to celebrate the Duwamish River. Supported in part by the Office of Arts & Culture, the celebration kicks off this Friday, June 5 from 7 to 10 p.m. at The Estuary, 4651 Diagonal Ave. S.
Seattle Parks and Recreation in partnership with the Langston Hughes Performing Arts Institute invites you to "Bearing Witness," a showcase featuring performances by LGBTQ youth of color and their friends, on Wednesday, June 10 at 7 p.m. (doors open at 6:45 p.m.). Langston Hughes Performing Arts Institute is located at 104 17th Ave. S at E Yesler Way in Seattle. Attendance is free, and the show is about 90 minutes.
Mayor Ed Murray and Superintendent Larry Nyland pose with Kindergartners and 1st graders at Leschi Elementary School.
In the first year of implementation of The Creative Advantage arts education access initiative in the Central region, the arts access gap was closed. What does that mean? In 2011, we conducted a needs assessment around the state of arts education in Seattle Public Schools. The assessment found inconsistent access to arts education, especially for students qualifying for free and reduced lunch, students eligible for the transitional bilingual program, and students identifying as Black, Hispanic, or American Indian/Alaska Native.
We launched the Creative Advantage in one area of the School District that demonstrated particularly challenging numbers, the Central District. After just one year, the Central District now shows access hours on par with the rest of the district – closing the access gap for those students. In all, nearly 1700 students attended music classes that would not have been available before the Creative Advantage roll-out. Additionally, more students in the Central District are reaching standards in the arts, and there is an increased awareness and conversation around issues of social justice, tied to this initiative.
Bringing the Central District up to par with the rest of the school district is significant, but it's also important to note that The Creative Advantage aims to raise total instructional time in the arts for all students across the district. In other words, it's great that we closed this access gap, but our young people deserve even more.
If you'd like to read the full report, you can download it here: http://www.creativeadvantageseattle.org/go-deeper/.
We're so proud of these wins for students, and encouraged by this demonstration of impact. It's also a great time to announce The Creative Advantage is expanding! In addition to the 13 schools served by the Creative Advantage in the Central District, The Creative Advantage will roll-out in Southwest Seattle in 2015, serving Arbor Heights, Concord International, Highland Park, Roxhill, Sanislo, West Seattle Elementary, K-5 STEM, Denny International Middle School, Chief Sealth International High School and Middle College at High Point.
Professional development and planning is starting now, and next school year will see new programs in Southwest Seattle focused on arts access. Stay tuned for updates throughout the years.
The forum was designed to give a glimpse at what some of the impacts are when different cultural groups are represented in ways that are stereotypical, not authentic or even completely invisible, and practical suggestions for how to be more inclusive in the work that we do. While potentially divisive, "The Mikado" controversy became an opportunity for our community to learn from one another. We wanted to create a safe space where a diversity of views could be shared, to inspire more people to get a little less uncomfortable with doing the work of dismantling racism. We chose to make the focus of the Artistic Freedom & Artistic Responsibility forum broader than just a single production because it's the underlying politics of why the controversy happened that need to be addressed. It's not just one show or one company or one incident that's at issue. It's far broader, deeper and more pervasive than that, as we can see playing out in different ways all over the country.
Huge thanks to everyone who worked to make the forum happen and for all who took the time to attend or view the live-stream video. Most importantly, we're so thrilled at the after-response. People have been signing up for trainings, supporting events that feature greater diversity, blogging, organizing follow-up events, asking for advice, and most importantly talking with each other and via social media about next steps for creating change around how race is represented in the arts. And the thing that excites me the most? The diversity of the people who have been doing all of the above. Because sometimes people attend these things and feel that just by attending they've done their share. But this time, I'm seeing a demographic shift in who's been following up – I'm seeing a lot more non-people of color wanting to know what they can do next.
So what can we do to ensure that change does happen? Take a moment to understand why. Most arts and cultural organizations want more people attending and participating in the work we do. Many would love to reach a greater diversity of people. But in order for others to want to come to us, we have to be truly welcoming and inclusive. And in order to come across that way, we have to look internally first. How do we come across to the people we want to attract? Who is reflected in the work we do, in our programming, on our staff and board, and with the artists we use? If people don't see themselves represented, they're not going to feel welcome.
Accept that it may feel uncomfortable. This work isn't easy. And it will take time.
Ask and listen. It's okay to ask for help, but be willing to listen with the goal of understanding where the other person is coming from.
Sign up for training. These are also great opportunities to connect with others working through similar issues.
Make time to talk with staff at all levels, with your board, with your artists, with your audience – to get their ideas on how to be more inclusive and why it's important to do so.
Commit to making the changes necessary.
Now let's all work together to carry this momentum forward. Here are some opportunities and resources to start doing just exactly that.
In the next several weeks, we will celebrate our region's museums and cultural institutions in conjunction with the American Alliance of Museums annual conference in Seattle. More than 55 organizations have created special programming and admission offers – you can check it out here, at www.museumweeknw.com. In addition to special offers around the region, the Office of Arts & Culture has created a number of free tours of Seattle's noted public art collection for the public to enjoy. We hope you will join us for a glimpse into some of the art that makes our city so special. Tours will be approximately an hour in length, and cover the following themes.
Events range from May 16-23 – check out the website for a detailed list of all the activities you can enjoy. See you on a tour!
One of the most discussed tax issues in 2016 was the proposed IRC Section 2704 regulations released in early August 2016 that are designed to limit estate valuation discounts for minority interests. The proposed rules are broad in scope, and would apply to family-owned operating businesses. Family-owned manufacturing companies may be among the most harmfully impacted by the proposed IRS rules.
Transfers of closely held businesses among family members are typically minority interests. There are two types of discounts commonly applied to a business's overall estimated value to reflect elements of control and marketability: (i) the discount for lack of control, and (ii) the discount for lack of marketability.
The economic logic behind the discount for lack of control is that a minority interest shareholder does not have the ability to manage or control the company, such as setting operational and strategic policies, declaring dividends, liquidating or merging the company, or acquiring business assets. As a result of this inability to control the operations of the business, a discount from the business value is necessary. Appraisers commonly rely on statistics from publicly announced mergers, acquisitions, and divestitures involving operating entities to determine an appropriate discount for lack of control.
Family owned businesses do not trade freely on the open market like a stock on the New York Stock Exchange. Without access to the public markets, a non-controlling shareholder does not have control to time potential gains or avoid losses. A publicly traded stock can be traded almost instantaneously. In contrast, the sale of an interest in a family owned business is risky, difficult, and costly. As a result, the lack of marketability requires a discount adjustment to the value determination. Appraisers will typically rely on empirical studies to determine the appropriate adjustment for lack of marketability.
Despite the real world difficulties of selling a non-controlling interest in a family owned business and the economic logic behind the application of discounts, the proposed regulations, as written, would severely limit the use of such discounts for transfers among family members of minority interests in their business.
For 2016, the federal estate and gift tax exclusion amount is $5.45 million for an individual and $10.9 million for a married couple. Based on estimates provided to Bloomberg BNA, the vast majority of manufacturing businesses are likely to be impacted by the valuation discount rules.
Take as an example a manufacturing operation, started by a husband and wife, that was recently valued at $20 million. The husband gifted 35% of the business to a trust and his wife gifted another 35% to a separate trust, with the objective of keeping future appreciation of the business out of their estates. The indicated value of the two gifts before discounts is $14 million [(35% + 35%) x $20 million]. However, because they are gifting minority interests, under the current tax law their interests in the family business are worth less than the pro-rata value of the business. Assume that an appraiser determined that a 15% discount for lack of control and a 15% discount for lack of marketability are appropriate. Each trust gift is thus estimated to be worth $5,057,500.
The gifts would use up $10,115,000 of the couple's gift tax exclusion under the current regulations, leaving room for an additional $785,000 in tax exclusion. However, under the proposed regulations, the discounts would be disallowed and the combined gifts would be valued at $14 million. Under this scenario, the full $10.9 million exemption would be used up, and the couple would owe federal tax on $3,100,000 at estate and gift tax rates of 40% for 2016. The additional tax burden on the shareholders of this manufacturing operation may reduce their ability to expand their operations, potentially resulting in job cuts.
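The arithmetic behind these figures can be checked with a short script. This is a sketch of the example above, assuming (as appraisers commonly do) that the two 15% discounts are applied sequentially rather than added together:

```python
# Worked example: two 35% gifts of a $20M family business, 2016 rules.
business_value = 20_000_000
pro_rata_gift = 0.35 * business_value              # $7,000,000 per spouse

# Apply the discounts sequentially: 15% lack of control,
# then 15% lack of marketability.
discounted_gift = pro_rata_gift * (1 - 0.15) * (1 - 0.15)
combined_gifts = 2 * discounted_gift               # $10,115,000

exclusion = 10_900_000                             # 2016 married-couple exclusion
remaining_exclusion = exclusion - combined_gifts   # $785,000 left

# Under the proposed regulations the discounts are disallowed:
undiscounted_combined = 2 * pro_rata_gift          # $14,000,000
taxable_excess = undiscounted_combined - exclusion # $3,100,000
tax_due = 0.40 * taxable_excess                    # taxed at the 40% rate

print(round(discounted_gift), round(combined_gifts),
      round(remaining_exclusion), round(tax_due))
```

Note that disallowing the discounts not only consumes the full exclusion but also creates an immediate tax liability on the excess.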
We cannot predict the final outcome of the proposed regulations. A public hearing was held on December 1, 2016, and there was a record turnout by business valuation professionals, trust and estate attorneys, and family business owners. Those speaking against the proposed regulations argued that the rules are so broad and complicated that they should be permanently withdrawn or revised. However, as of this publication, the IRS has yet to make an announcement on the fate of the proposed regulations, so they could go final as-is.
BlumShapiro's manufacturing & distribution industry group consists of more than 80 trusted advisors who understand your business and your business goals. If you wish to transfer your family business within the family, contact us soon in order to take advantage of the valuation discounts under the current regulations.
Allyson Versprille, "Family Businesses, Jobs Seen Thwarted by Proposed IRS Rules," Bloomberg BNA Daily Tax Report, August 22, 2016.
{"url":"https:\/\/www.rdocumentation.org\/packages\/igraph\/versions\/1.2.2\/topics\/convex_hull","text":"# convex_hull\n\n0th\n\nPercentile\n\n##### Convex hull of a set of vertices\n\nCalculate the convex hull of a set of points, i.e. the covering polygon that has the smallest area.\n\nKeywords\ngraphs\n##### Usage\nconvex_hull(data)\n##### Arguments\ndata\n\nThe data points, a numeric matrix with two columns.\n\n##### Value\n\nA named list with components:\n\nresverts\n\nThe indices of the input vertices that constritute the convex hull.\n\nrescoords\n\nThe coordinates of the corners of the convex hull.\n\n##### References\n\nThomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0262032937. Pages 949-955 of section 33.3: Finding the convex hull.\n\n\u2022 convex_hull\n\u2022 convex.hull\n##### Examples\n# NOT RUN {\nM <- cbind( runif(100), runif(100) )\nconvex_hull(M)\n# }\n\nDocumentation reproduced from package igraph, version 1.2.2, License: GPL (>= 2)\n\n### Community examples\n\nLooks like there are no examples yet.","date":"2020-12-04 23:43:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.17088960111141205, \"perplexity\": 5968.499314546499}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, 
\"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-50\/segments\/1606141745780.85\/warc\/CC-MAIN-20201204223450-20201205013450-00354.warc.gz\"}"}
London guide
Local Community Web sites
Your London - Guide to London's public and community services funded and supported by the London authorities, the Mayor of London and the Office of the Deputy Prime Minister. The site features a master services directory allowing you to browse or search for public services from the 33 London Boroughs, an online directory of information about London's community and voluntary sector organisations, a "find your nearest" tool that enables you to locate services such as your nearest cash machine or post-office, linking seamlessly to an online journey planner, news feeds and live traffic information.
© First 4 London Ltd. All rights reserved. The London Business Directory.
Q: Virtual method logic not working, C# .NET 4.0 I'm working through an example in the book Pro C# and the .NET Platform and I'm making a mistake somewhere that I can't see. The program compiles and runs, but the Manager object in this example isn't returning the right value of StockOptions. For concision, I'm posting only the relevant code, because this example is all about class hierarchies and there are about six different classes. The virtual method GiveBonus in the Employee class isn't being correctly overridden in the Manager class.
class Manager : Employee
{
private int numberOfOpts;
//the properties are inherited from Employee
public int StockOptions { get; set; }
//***METHODS*** This returns the StockOptions amount as it is set in the
// constructor; no logic is being applied
public override void GiveBonus(float amount)
{
base.GiveBonus(amount);
Random r = new Random();
numberOfOpts += r.Next(500);
}
public override void DisplayStats()
{
base.DisplayStats();
Console.WriteLine("you have {0} stock options", StockOptions);
}
public Manager() { }
public Manager(string fullName, int age, int empID, float currPay,
string ssn, int numbofOpts) : base(fullName, age, empID, currPay, ssn)
{
ID = empID;
Age = age;
Name = fullName;
Pay = currPay;
StockOptions = numbofOpts;
}
}
snippet from my Main() method
Manager chucky = new Manager("chucky", 50, 92, 100000, "333-33-3333", 9000);
chucky.GiveBonus(300);
chucky.DisplayStats();
Console.WriteLine();
I made a mistake while asking the question. What I should have asked is why I have to use
Console.WriteLine("you have {0} stock options", numbOfOpts);
instead of
Console.WriteLine("you have {0} stock options", StockOptions);
A: It's not meant to add a random number to 9000 - it's meant to give a random number of stock options as well as the "base" pay bonus:
public override void GiveBonus(float amount)
{
base.GiveBonus(amount);
Random r = new Random();
// Note numberOfOpts, not currPay
numberOfOpts += r.Next(500);
}
Unfortunately, as we've got two separate fields - one created by an automatically implemented property - it won't actually update the value of StockOptions... it's not clear whether this is due to your editing, or whether it's a mistake in the book. (There are various other things I dislike about this code, but hey...)
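The mismatch described here (a private field updated by GiveBonus, and a separate auto-property read by DisplayStats) can be sketched in a short Python analogue. The names are hypothetical and only mirror the C# code, to show why the displayed value never changes:

```python
class Manager:
    def __init__(self, stock_options):
        # Two independent pieces of state, mirroring the C# class:
        self.stock_options = stock_options    # like the auto-property StockOptions
        self._number_of_opts = stock_options  # like the private field numberOfOpts

    def give_bonus(self, extra):
        # Updates only the private field, as the C# GiveBonus does.
        self._number_of_opts += extra

    def display_stats(self):
        # Reads the property-backed value, which the bonus never touched.
        return self.stock_options


chucky = Manager(9000)
chucky.give_bonus(300)
print(chucky.display_stats())    # still 9000 -- the bonus landed elsewhere
print(chucky._number_of_opts)    # 9300
```

The fix is to keep a single backing store: either have GiveBonus add to StockOptions directly, or expose numberOfOpts through the StockOptions property.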
is a Venezuelan ceramist of Sephardic origin. She studied sculpture with Ernest Maragall and ceramics with Miguel Arroyo.
Biography
Herrera promoted the study of ceramics in Venezuela, together with Miguel Arroyo and Sergio González, at the Cristóbal Rojas School of Plastic Arts for more than 25 years, and she founded and directed the ceramics workshop of the Neumann Foundation's Institute of Design.
Her work always moved along three parallel lines: utilitarian ceramics, sculptural ceramics, and pastel drawing on ceramic sheets. In 1991, the Galería de Arte Nacional in Caracas presented Rostros y perfiles del barro (Faces and Profiles of Clay), an anthological survey of her entire career.
Among her group exhibitions, the Exhibition of Contemporary Ceramics (Istanbul, Turkey, 1969) stands out; there she received a Diploma of Honor for Quality.
Between 1970 and 1976 she took part in the group shows of fire arts organized by the Sala Mendoza.
In 1972 she exhibited a selection of her work at the Victoria and Albert Museum in London.
In 1987 she exhibited together with Maruja Herrera at the Galería Barro y Fuego (Clay and Fire Gallery) in Caracas.
She also took part in the I National Biennial of Plastic Arts (Museum of Contemporary Art of Caracas, 1988), the II Biennial Barro de América (Lía Bermúdez Art Center of Maracaibo), Diez presencias. Las artes del fuego en Venezuela (Ten Presences: The Fire Arts in Venezuela; Galería de Arte Nacional, 1995), Europa y Venezuela. Vínculo cerámico (Europe and Venezuela: Ceramic Bond; a traveling exhibition organized by the Museum of Contemporary Art of Caracas, 1996), México, Puerto Rico, Venezuela. Intercambio 3. Cerámica en pequeño formato (Mexico, Puerto Rico, Venezuela. Exchange 3: Small-Format Ceramics; Centro Cultural Alfa, Monterrey, Mexico, 1997, and Museo Jacobo Borges, 1998), and Pluralidad y oficio. Cerámica contemporánea venezolana, Colección Banco Mercantil (Plurality and Craft: Contemporary Venezuelan Ceramics, Banco Mercantil Collection; Sala Mendoza, 2000).
Recognition
She received an Honorable Mention at the Official Annual Salon of Venezuelan Art, at the Museum of Fine Arts, in 1960. She also received the National Prize for Applied Arts at the XXVII Official Annual Salon of Venezuelan Art in 1966, the National Prize for Applied Arts at the XXV Arturo Michelena Salon, and the Diploma of Honor for quality at the International Exhibition of Contemporary Ceramics in Istanbul, Turkey, in 1978.
References
Bibliography
Venezuelan artists
Sephardim
Ceramists
American ceramists
People from Tétouan
Deaths in Caracas
Moroccan artists
<?php
//An instance of this class is automatically called by Eliya when a 401 error is thrown
class Error_401
{
public function __construct(Eliya\Response $response)
{
$response->set(
Eliya\Tpl::get('errors', [
'error_number' => 401,
'message' => $response->error()
])
);
}
}
{"url":"https:\/\/argoshare.is.ed.ac.uk\/healthyr_book\/ms-word-via-knitrr-markdown.html","text":"## 13.6 MS Word via knitr\/R Markdown\n\nWhen moving from a .R file to a Markdown (.Rmd) file, environment objects such as tables or data frames \/ tibbles usually require to be saved and loaded to R Markdown document.\n\n# Save objects for knitr\/markdown\nsave(table1, table2, dependent, explanatory,\nfile = here::here(\"data\", \"out.rda\"))\n\nIn RStudio, select:\nFile > New File > R Markdown\n\nA useful template file is produced by default. Try hitting knit to Word on the Knit button at the top of the .Rmd script window. If you have difficulties at this stage, refer to Chapter 12.\n\nNow paste this into the file (we\u2019ll call it Example 1):\n\n---\ntitle: \"Example knitr\/R Markdown document\"\ndate: \"22\/5\/2020\"\noutput:\nword_document: default\n---\n\n{r setup, include=FALSE}\n# Load data into global environment.\nlibrary(finalfit)\nlibrary(dplyr)\nlibrary(knitr)\n\n\n## Table 1 - Demographics\n{r table1, echo = FALSE}\nkable(table1, row.names=FALSE, align=c(\"l\", \"l\", \"r\", \"r\", \"r\", \"r\"))\n\n\n## Table 2 - Association between tumour factors and 5 year mortality\n{r table2, echo = FALSE}\nkable(table2, row.names=FALSE, align=c(\"l\", \"l\", \"r\", \"r\", \"r\", \"r\"))\n\n\n## Figure 1 - Association between tumour factors and 5 year mortality\n{r figure1, echo = FALSE}\nexplanatory = c( \"differ.factor\", \"age\", \"sex.factor\",\n\"extent.factor\", \"obstruct.factor\",\n\"nodes\")\ndependent = \"mort_5yr\"\ncolon_s %>%\nor_plot(dependent, explanatory)\n\n\nKnitting this into a Word document results in Figure 13.2A), which looks pretty decent but some of the columns need some formatting and the plot needs resized. Do not be tempted to do this by hand directly in the Word document.\n\nYes, before Markdown, we would have to move and format each table and figure directly in Word, and we would repeat this every time something changed. 
Turns out some patient records were duplicated and you have to remove them before repeating the analysis over again. Or your colleague forgot to attach an extra file with 10 more patients.\n\nNo problem, you update the dataset, re-run the script that created the tables and hit Knit in the R Markdown document. No more mindless re-doing for you. We think this is pretty amazing.\n\n### 13.6.1 Figure quality in Word output\n\nIf your plots are looking a bit grainy in Word, include this in your setup chunk for high quality:\n\nknitr::opts_chunk\\$set(dpi = 300) \n\nThe setup chunk is the one that starts with {r setup, include = FALSE}` and is generated automatically when you create a new R Markdown document in RStudio.","date":"2023-03-26 05:22:07","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.22903333604335785, \"perplexity\": 8055.942703476204}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296945433.92\/warc\/CC-MAIN-20230326044821-20230326074821-00178.warc.gz\"}"}
\section{\label{sec:Introduction}Introduction}
Impurity atoms and ions embedded in a solid or a liquid can modify the phonon spectrum of the host substance.
In some cases, they lead to the appearance of so-called pseudolocal phonon modes, \textit{i.e.} phonon wavepackets strongly localized around the impurity center.
Liquid and solid He which is a quantum fluid/solid supports a particular type of pseudolocal modes associated with a peculiar structure of the impurity defects known as atomic bubbles (for a review see \cite{TabbertJLTP1997,MoroshkinPR2008}).
These bubbles have a typical diameter of $\approx$1 nm and are formed around neutral impurity atoms, such as alkali and alkali-earth metals due to the strong repulsion between He atoms and the electronic shells of the impurity.
Similar structures are produced by free electrons (electron bubbles) \cite{CelliPR1968} and by some molecular dopants, such as He$_{2}^{\ast}$ excimer \cite{BenderskiiJCP2002}.
The hydrodynamic model of the atomic bubble leads to the eigenmodes of the bubble interface described by the spherical harmonics $Y_{L,m}(\theta,\varphi)$.
At the same time, these vibrations can be represented as localized phonon wavepackets or pseudolocal modes since their eigenfrequencies overlap with the phonon spectrum of liquid and solid He \cite{MoroshkinEPL2011}.
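In this picture, the instantaneous radius of the bubble interface can be expanded over surface modes as (a standard form of the hydrodynamic model, written here for illustration)
\begin{equation}
R(\theta,\varphi,t) = R_{0} + \sum_{L,m} a_{L,m}(t)\, Y_{L,m}(\theta,\varphi),
\end{equation}
where $R_{0}$ is the equilibrium bubble radius and the amplitudes $a_{L,m}(t)$ oscillate at the eigenfrequencies of the corresponding interface modes.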
Besides the direct time-resolved measurements \cite{BenderskiiJCP2002} and numeric calculations \cite{ElorantaCPL2004,ElorantaCP2007}, the oscillations of the atomic bubbles can be investigated spectroscopically, by observing the phonon structure in the absorption and emission spectra of the impurities \cite{MoroshkinEPL2011,MoroshkinPRB2018}.
The electronic transitions of the impurity atom that are accompanied by the excitation of the bubble vibrations result in the formation of phonon wings (PW) in the impurity spectra.
The transitions that excite no phonons contribute to a zero-phonon line (ZPL).
The latter is much narrower than PW.
It can be broadened only by the processes that leave the number of phonons unchanged, e.g. by scattering of the phonons already existing in the matrix.
Well-resolved ZPL and PW structures have been obtained also in the spectra of molecular dopants in superfluid He nanodroplets (for a review see \cite{ToenniesACIE2004,CallegariErnstBook2011}).
However, molecular species possess additional degrees of freedom and a strongly anisotropic interaction with the surrounding He atoms, which result in complicated spectra.
The molecular dopants in the droplets either reside at the surface \cite{HigginsJPCA1998} or produce a non-spherical trapping site inside the droplet that may result in a splitting of the ZPL and the appearance of new spectroscopic features \cite{LindingerPCCP2001,HartmannPCCP2002}.
An additional spectroscopic structure is produced by the molecular rotation \cite{PoertnerJCP2002}.
The size distribution of the droplets leads to the inhomogeneous broadening of ZPL \cite{SlenczkaJCP2001}.
Another limitation of the experiments on He droplets is due to the evaporative cooling of the droplet, which fixes the droplet temperature at $T$ = 0.37 K.
It is thus impossible to vary the helium temperature and pressure and observe their effect on the impurity spectra.
Spherical atomic bubbles in bulk superfluid He can be described in a frame of a relatively simple hydrodynamic model and are suitable for systematic studies of the bubble-phonon interaction.
Zero-phonon lines in bulk liquid and solid He could only be observed in the spectra of inner-shell transitions of the impurity atoms \cite{IshikawaPRB1997,HuiJLTP2000,MoroshkinPRA2011,MoroshkinJCP2013,MoroshkinPRB2018}.
Transitions of valence electrons typically induce a large displacement of the bubble interface and generate a classical wavepacket of a large number of phonons.
As discussed in \cite{MoroshkinEPL2011}, the corresponding spectra representing a multiphonon PW are strongly broadened and shifted and have no ZPL.
The existing studies of the inner-shell transitions could not resolve the intrinsic spectral width of ZPL due to the power broadening \cite{HuiJLTP2000,MoroshkinPRB2018} and the insufficient spectroscopic resolution \cite{IshikawaPRB1997,MoroshkinPRA2011,MoroshkinJCP2013}.
Recently, we have presented an experimental study \cite{MoroshkinPRB2018} of the spectra of Dy atoms in bulk superfluid $^{4}$He, in particular the profile of the phonon wing associated with the $4f^{10}6s^{2}$ $^{5}I_{8}$ - $4f^{9}5d6s^{2}$ $^{5}K_{7}$ inner-shell transition.
Our results suggest that the spectrum of elementary excitations in the vicinity of the atomic bubble is modified with respect to that in pure bulk superfluid He.
Here we present an extension of that study with a special emphasis on the zero-phonon line corresponding to the same electronic transition.
We resolve the intrinsic ZPL spectral width and study its dependence on the liquid He temperature.
The paper is organized as follows: in Sec. \ref{sec:Experiment} we describe our experimental setup and the measurements.
In Sec. \ref{sec:Discussion} we discuss our results and compare them to the predictions of the atomic bubble model.
Sec. \ref{sec:Conclusion} gives a summary and conclusions.
\section{\label{sec:Experiment}Experiment}
\subsection{\label{sec:Setup}Experimental setup}
The experimental setup is described in our recent publication \cite{MoroshkinPRB2018}.
The experiments are carried out in an optical helium-bath cryostat cooled to 1.35--2.1 K by pumping on the helium bath.
The top view of the cryostat with the sample cell and the optical setup is shown in Fig. \ref{fig:Setup}.
\begin{figure}
\includegraphics[width=\columnwidth]{Fig1.eps}
\caption{Experimental setup. 1 - cryostat, 2 - sample cell, 3 - ablation target, 4 - pulsed DPSS laser ($\lambda$ = 355 nm), 5 - motorized XY translation stage, 6 - frequency-doubled pulsed Nd:YAG laser ($\lambda$ = 532 nm), or frequency-tripled Nd:YAG laser ($\lambda$ = 355 nm), 7 - cw DPSS laser ($\lambda$ = 532 nm), 8 - cw tunable Ti:Sapphire laser, 9 - second harmonic generator (SHG), 10 - dichroic mirror, 11- wavelength meter, 12 - scanning confocal Fabry-Perot etalon, 13 - oscilloscope, 14 - laser-induced fluorescence, 15 - grating spectrograph, 16 - CCD camera, 17 - PMT, 18 - video camera, PD1 and PD2 - photodiodes.} \label{fig:Setup}
\end{figure}
Superfluid He in the sample cell is doped with dysprosium atoms by means of laser ablation using two nanosecond pulsed lasers.
The primary ablation of a metallic Dy target (3 in Fig. \ref{fig:Setup}) by a frequency-tripled DPSS laser ($\lambda = 355$ nm) produces mostly metal clusters and nanoparticles.
Dy atoms are produced by the secondary ablation/sputtering of these nanoparticles by another, more powerful pulsed laser (6 in Fig. \ref{fig:Setup}).
The primary ablation laser (4 in Fig. \ref{fig:Setup}) has a repetition rate of 20--50 Hz and a pulse energy of 70 $\mu$J.
It is focused on the target by a $f$ = 15 cm lens mounted on a motorized XY translation stage (5 in Fig. \ref{fig:Setup}) that is moving in a plane orthogonal to the laser beam.
In this way we move the ablation spot along the target surface, thus avoiding drilling a crater.
For the secondary sputtering we use either a frequency-doubled Nd:YAG laser ($\lambda = 532$ nm) with a repetition rate of 10 Hz and a pulse energy of 0.5--15 mJ, or a frequency-tripled Nd:YAG laser ($\lambda = 355$ nm) with a repetition rate of 20 Hz and a pulse energy of 1.5--6.5 mJ.
It is focused in the middle of the sample cell, above the Dy target.
The ablation process is monitored with a fast digital video camera (18 in Fig. \ref{fig:Setup}) oriented orthogonal to the laser beams and operated at a frame rate of 500--8500 fps.
Dy atoms in liquid He are excited by a second harmonic of a tunable cw Ti:Sapphire laser (8 in Fig. \ref{fig:Setup}) superimposed on the secondary sputtering laser beam using a dichroic mirror.
The laser is tuned into the resonance with the transition from the $4f^{10}6s^{2}$ $^{5}I_{8}$ ground state of Dy towards the state $4f^{9}5d6s^{2}$ $^{5}K_{7}$ at $\lambda=458.9$ nm.
The fundamental wavelength of the Ti:Sapphire laser is measured by a wavelength meter (11 in Fig. \ref{fig:Setup}) with an absolute accuracy of $\pm3.5\times10^{-4}$ nm (0.5 GHz).
The second harmonic linewidth measured by a Fabry-Perot etalon (12 in Fig. \ref{fig:Setup}) does not exceed 300 MHz.
Laser-induced fluorescence is collected at a right angle with respect to the laser beams and is analyzed with a grating spectrograph (15 in Fig. \ref{fig:Setup}) equipped with a CCD camera and a photomultiplier tube (PMT).
\subsection{\label{sec:Results}Experimental results}
The spectrum of the laser-induced fluorescence has been investigated in detail in \cite{MoroshkinPRB2018}.
In total we observe 7 spectral lines originating from the electronic states of Dy lying below the laser-excited $4f^{9}5d6s^{2}$ $^{5}K_{7}$ state.
The emission spectrum is dominated by a strong line at 641 nm ($\lambda_{free}$ = 642.4 nm) which originates from the state $(^{5}I_{8})(^{3}P_{0})$, the lowest in the group of $4f^{10}6s6p$ $(^{5}I_{8})(^{3}P_{J})$ states.
The excitation spectrum was obtained by tuning the wavelength of the Ti:Sapphire laser and recording the fluorescence yield of this strongest emission line.
In the series of measurements reported in \cite{MoroshkinPRB2018} we changed the laser frequency in steps of 3--10 GHz and recorded the fluorescence spectrum at each step using a CCD camera.
In this way we have covered the whole excitation spectrum of the $4f^{10}6s^{2}$ $^{5}I_{8}$ - $4f^{9}5d6s^{2}$ $^{5}K_{7}$ transition that consists of a sharp zero-phonon line and a broader phonon wing.
We have demonstrated \cite{MoroshkinPRB2018} that the phonon wing is blueshifted with respect to ZPL by approximately 170 GHz, with a characteristic gap between ZPL and PW arising due to the peculiar structure of the spectrum of elementary excitations (phonons and rotons) in superfluid He.
\begin{figure}
\includegraphics[width=\columnwidth]{Fig2.eps}
\caption{(a) High-resolution scan of ZPL in the excitation spectrum. $T$ = 1.5 K. Dots - experimental data, solid red line - fitted Lorentzian. (b) Temperature dependence of the ZPL spectral width (FWHM). Dots - experimental data, solid red line - fit according to Eq. (\ref{eq:FitPower7}), solid green line - fit according to Eq. (\ref{eq:FitArrhenius}).} \label{fig:ZPLspec}
\end{figure}
In the new series of experiments we concentrate on measuring the lineshape of the zero-phonon line with a higher resolution.
The frequency of the Ti:Sapphire laser was tuned continuously at a rate of 1--2 GHz/s and the time-resolved fluorescence signal at 641 nm was recorded by a photomultiplier tube mounted behind the exit slit of the spectrograph.
The resulting excitation spectrum was averaged over a large number of frequency sweeps in order to suppress the fluctuations of the fluorescence yield due to the variations of the Dy atomic density.
It was also corrected for the variations of the Ti:Sapphire power during the sweep which was recorded in parallel by a photodiode (PD1 in Fig. \ref{fig:Setup}).
The linearity of the sweep and its amplitude was controlled by recording the fringes of the Fabry-Perot etalon (12 in Fig. \ref{fig:Setup}).
A typical experimental ZPL lineshape is shown in Fig. \ref{fig:ZPLspec}(a).
The spectrum is fitted with a Lorentzian that is shown in the same figure by a solid red line.
The FWHM spectral width $\Delta\nu_{ZPL}$ extracted from the fit lies in the range of 5--20 GHz and increases with the liquid helium temperature as is shown in Fig. \ref{fig:ZPLspec}(b).
Increasing the excitation laser power leads to the saturation of the atomic absorption line and to the spectral broadening of ZPL.
The data reported in Fig. \ref{fig:ZPLspec} have been obtained in the limit of the low excitation power.
\section{\label{sec:Discussion}Discussion}
\subsection{\label{sec:ZPLBroadening} Spectral broadening of ZPL}
The electronic transitions contributing to the zero-phonon line occur without the vibrational excitation of the atomic bubble.
As a result, no phonons or rotons are excited.
The observed temperature-dependent broadening of ZPL in the excitation spectrum can be attributed to the dephasing of the transition dipole of the Dy atom due to the elastic scattering of phonons and rotons already existing in the liquid.
The theory of the ZPL broadening by the scattering of phonons had been developed in \cite{McCumberJAP1963,SmallCPL1978,HsuJCP1984a,HsuJCP1985} for impurity atoms and ions in classical crystalline solids.
The theory predicts a Lorentzian lineshape, in agreement with our observations.
The temperature dependence of the ZPL spectral width $\Delta\nu_{ZPL}(T)$ at low temperatures is determined by the type of the phonons producing the dephasing.
For acoustic phonons, with the density of states described by the Debye model, $\Delta\nu_{ZPL} \propto T^{7}$ \cite{HsuJCP1984b,OsadkoPR1991}.
On the other hand, if the dephasing is due to a pseudolocal phonon mode with a frequency $\Omega$, the broadening is described by an Arrhenius law: $\Delta\nu_{ZPL} \propto e^{-\hbar \Omega/k_{B} T}$ \cite{HsuJCP1985,OsadkoPR1991}.
It is not clear a priori which type of temperature dependence should be expected for the ZPL of Dy in superfluid He.
The dispersion diagram of the elementary excitations is shown in Fig. \ref{fig:SigmaDisp}.
Here, $\omega$ is the excitation frequency and $k$ is the wave vector.
At low temperatures the spectrum is dominated by acoustic phonons corresponding to the linear part of the dispersion curve at low $k$.
Rotons represent another type of excitations corresponding to the part of the dispersion curve near its minimum at $k$ = 1.9 \AA{}$^{-1}$ \cite{DonnellyJPCRD1998}.
The latter have a well-defined frequency $\omega_{r}/2\pi$ = 0.18 THz and therefore are expected to give a contribution similar to that of the local modes: $\Delta\nu_{ZPL} \propto e^{-\hbar \omega_{r}/k_{B} T}$.
As discussed in \cite{MoroshkinPRB2018}, the Dy atom in liquid He is surrounded by a spherical bubble-like void, which we call an atomic bubble.
The parameters of this bubble have been computed in \cite{MoroshkinPRB2018} in the frame of a standard spherical atomic bubble model \cite{KinoshitaPRA1995,MoroshkinPR2008}.
The bubble corresponding to the electronic ground state has an equilibrium radius $R_{b} = 5.3$ \AA{}.
In the electronically excited state it expands by approximately 0.15 \AA{}.
The computed undamped eigenfrequencies of the breathing and quadrupolar oscillations of the bubble shape are $\Omega_{0}/2\pi =$ 180 GHz and $\Omega_{2}/2\pi =$ 330 GHz, respectively \cite{MoroshkinPRB2018}.
These frequencies lie within the spectrum of the elementary excitations of bulk superfluid He.
The bubble vibrations thus can be represented as wavepackets of phonons localized around the impurity atom which are referred to as pseudolocal modes.
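As a quick consistency check, the phonon wave vectors resonant with these eigenfrequencies can be estimated from the linear (low-$k$) part of the dispersion curve, $\omega \approx v k$. The short Python sketch below assumes a first-sound velocity $v \approx 238$ m/s; at these wave vectors the true dispersion already bends away from linearity, so the numbers are only indicative.

```python
import numpy as np

V_SOUND = 238.0     # first-sound velocity in He II, m/s (assumed)
ANGSTROM = 1e-10    # m per angstrom

def resonant_k(freq_hz):
    # linear part of the dispersion: omega = v*k  ->  k = 2*pi*f / v
    return 2 * np.pi * freq_hz / V_SOUND * ANGSTROM  # in 1/angstrom

k0 = resonant_k(180e9)   # breathing mode, Omega_0/2pi = 180 GHz
k2 = resonant_k(330e9)   # quadrupolar mode, Omega_2/2pi = 330 GHz
print(round(k0, 2), round(k2, 2))  # ~0.48 and ~0.87 A^-1
```

The breathing-mode estimate, $k \approx 0.5$ \AA{}$^{-1}$, lies well inside the phonon branch of the dispersion curve.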
The experimentally measured $\Delta\nu_{ZPL}(T)$ data in Fig. \ref{fig:ZPLspec} have been fitted with both models:
\begin{align}
\Delta\nu_{ZPL}^{(1)} = \Delta\nu_{0}^{(1)} + A \cdot T^{7} + B \cdot e^{-\frac{\hbar \omega_{r}}{k_{B}T}} \label{eq:FitPower7} \\
\Delta\nu_{ZPL}^{(2)} = \Delta\nu_{0}^{(2)} + C \cdot e^{-\frac{\hbar \Omega}{k_{B}T}} \label{eq:FitArrhenius}
\end{align}
with adjustable parameters $\Delta\nu_{0}$, $A$, $B$, $C$, and $\Omega$.
The fits are shown in Fig. \ref{fig:ZPLspec}(b) by solid lines.
Due to the small temperature range accessible in the experiment, the data can be fitted by both models reasonably well.
The roton contribution in Eq. (\ref{eq:FitPower7}) turns out to be negligibly small, and setting $B$ = 0 only reduces the uncertainties of $\Delta\nu_{0}^{(1)}$ and $A$.
The fit with Eq. (\ref{eq:FitArrhenius}) returns the value of the frequency of the pseudolocal mode $\Omega/2\pi = 280 \pm 30$ GHz that is significantly larger than the roton frequency and lies in between the frequencies of the breathing and quadrupolar vibrations.
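A least-squares fit of this kind can be sketched in a few lines of Python with \texttt{scipy.optimize.curve\_fit}. The data below are synthetic, generated from Eq. (\ref{eq:FitArrhenius}) with the quoted best-fit values ($\Delta\nu_{0} \approx 5.1$ GHz, $\Omega/2\pi \approx 280$ GHz); the amplitude $C = 9000$ GHz is an assumed illustrative value, not a measured one, so this demonstrates the fitting procedure rather than the actual data of Fig. \ref{fig:ZPLspec}(b).

```python
import numpy as np
from scipy.optimize import curve_fit

# temperature equivalent of 1 GHz: h * 1 GHz / k_B (~0.048 K per GHz)
K_PER_GHZ = 6.62607015e-34 * 1e9 / 1.380649e-23

def model_phonon(T, dnu0, A, B):
    # Eq. (1): Debye T^7 term plus a roton Arrhenius term (omega_r/2pi = 180 GHz)
    return dnu0 + A * T**7 + B * np.exp(-K_PER_GHZ * 180.0 / T)

def model_pseudolocal(T, dnu0, C, Omega_GHz):
    # Eq. (2): Arrhenius law for a pseudolocal mode of frequency Omega
    return dnu0 + C * np.exp(-K_PER_GHZ * Omega_GHz / T)

# synthetic "measurements" over the experimental range 1.35--2.1 K (in GHz)
T = np.linspace(1.35, 2.1, 8)
width = model_pseudolocal(T, 5.1, 9000.0, 280.0)

popt, _ = curve_fit(model_pseudolocal, T, width, p0=(5.0, 8000.0, 260.0))
print(popt)  # recovers ~(5.1, 9000, 280)
```

Over such a narrow temperature window the $T^{7}$ and Arrhenius forms are nearly degenerate, which is why the experiment alone cannot discriminate between the two models.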
At $T$ = 0 both models extrapolate to $\Delta\nu_{0} \approx$ 5.1 GHz.
This value is significantly larger than the experimental resolution determined by the laser linewidth and therefore represents the intrinsic linewidth of the transition.
Note that the natural linewidth $\Delta \nu_{nat}$ of the $^{5}I_{8}$ - $^{5}K_{7}$ transition of Dy ($\lambda_{free} = 458.9$ nm) determined by the radiative decay rate of the upper state \cite{NIST_ASD} is $\Delta \nu_{nat}$ = 2.2 MHz, \textit{i.e.} three orders of magnitude smaller than $\Delta\nu_{0}$.
The observed large spectral width can be attributed to the fast quenching of the laser-excited $^{5}K_{7}$ state by radiationless transitions towards the lower-lying excited states, in particular to the $(^{5}I_{8})(^{3}P_{0})$ state which produces the most intense line in the emission spectrum.
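If the residual width $\Delta\nu_{0}$ is interpreted as pure lifetime broadening (an interpretation assumed here, not established independently by the data), the corresponding lifetimes follow from the Lorentzian relation $\Delta\nu = 1/(2\pi\tau)$:

```python
import numpy as np

def lifetime_ns(fwhm_hz):
    # lifetime-limited Lorentzian linewidth: FWHM = 1/(2*pi*tau)
    return 1e9 / (2 * np.pi * fwhm_hz)

print(lifetime_ns(2.2e6))   # natural linewidth 2.2 MHz -> ~72 ns
print(lifetime_ns(5.1e9))   # residual width 5.1 GHz   -> ~0.03 ns (31 ps)
```

The three-orders-of-magnitude shortening of the effective lifetime is consistent with fast radiationless quenching of the laser-excited state.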
\subsection{\label{sec:AtomicBubble} Scattering of phonons by atomic bubbles}
In this section we consider the interaction of the atomic bubble containing a Dy atom with the elementary excitations of superfluid helium.
We use the parameters of the atomic bubble calculated in our earlier publication \cite{MoroshkinPRB2018}.
Our analysis is based on an acoustic model describing the scattering of sound waves on a classical macroscopically large bubble in a liquid \cite{PaoJAP1963}.
In the past, this approach was successfully applied \cite{CelliPR1968,BaymPRL1969} to describe the interaction of phonons with free electron bubbles in liquid He.
The effective cross section of the phonon scattering by the bubble is calculated as a sum over partial waves:
\begin{equation}
\sigma(k,\theta) = \frac{1}{k^{2}} \left| \sum _{L=0}^{\infty} (2L+1) P_{L}(\cos \theta) f_{L}(k) \right|^{2} \label{eq:CrossSection}
\end{equation}
with the partial-wave amplitude
\begin{equation}
f_{L}(k) = i \frac{j_{L}'(kR_{b}) + G_{L} k \rho_{He} v^{2} j_{L}(kR_{b})}{h_{L}'(kR_{b}) + G_{L} k \rho_{He} v^{2} h_{L}(kR_{b})} \label{eq:PartialWaveAmplitude}
\end{equation}
Here, $\rho_{He}$ is the liquid He density, $v$ is the speed of sound, $R_{b}$ is the equilibrium bubble radius, $k$ is the phonon wave vector, $\theta$ is the scattering angle, $j_{L}(x)$ and $h_{L}(x)$ are the spherical Bessel and Hankel functions, and $P_{L}(x)$ is a Legendre polynomial.
We consider $L = 0$ (breathing) and $L = 2$ (quadrupolar) bubble oscillation modes.
Parameters $G_{L}$ describe the elasticity of the bubble with respect to the corresponding deformation mode.
$G_{0}$ and $G_{2}$ are obtained by introducing an extra pressure at the bubble interface $\delta p_{L}(\theta) = p_{L}P_{L}(\cos \theta)$ and computing the resulting bubble deformation $R(\theta) = R_{b} + R_{L} P_{L}(\cos \theta)$.
\begin{equation}
G_{L} = \frac{R_{L}}{p_{L}} \label{eq:BubbleElasticity}
\end{equation}
The calculated values $G_{0} = 0.0126$ \AA{}/bar and $G_{2} = 0.0091$ \AA{}/bar are $\approx$100 times smaller than those obtained for a free electron bubble in \cite{BaymPRL1969}.
The resulting scattering cross section for the atomic bubble is close to that of a hard sphere of the same radius.
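Equations (\ref{eq:CrossSection})--(\ref{eq:BubbleElasticity}) are straightforward to evaluate numerically. The Python sketch below is a simplified version that keeps only the $L = 0$ and $L = 2$ partial waves with the elasticities quoted above, and assumes $\rho_{He} \approx 145$ kg/m$^{3}$ and $v \approx 238$ m/s; the angle-integrated cross section then follows from the orthogonality of the Legendre polynomials, $\sigma(k) = (4\pi/k^{2})\sum_{L}(2L+1)|f_{L}(k)|^{2}$.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

R_B = 5.3                         # equilibrium bubble radius, angstrom
RHO_V2 = 145.0 * 238.0**2 / 1e5   # rho_He * v^2 in bar (~82 bar, assumed values)
G = {0: 0.0126, 2: 0.0091}        # bubble elasticities G_L, angstrom/bar (Eq. 5)

def h_L(L, x, derivative=False):
    # spherical Hankel function of the first kind, h_L = j_L + i*y_L
    return (spherical_jn(L, x, derivative=derivative)
            + 1j * spherical_yn(L, x, derivative=derivative))

def f_L(L, k):
    # partial-wave amplitude, Eq. (4); g = G_L * k * rho_He * v^2 is dimensionless
    g = G[L] * k * RHO_V2
    x = k * R_B
    num = spherical_jn(L, x, derivative=True) + g * spherical_jn(L, x)
    return 1j * num / (h_L(L, x, derivative=True) + g * h_L(L, x))

def sigma(k):
    # angle-integrated cross section, angstrom^2, restricted to L = 0 and 2
    return 4 * np.pi / k**2 * sum((2 * L + 1) * np.abs(f_L(L, k))**2
                                  for L in (0, 2))

k = np.linspace(0.05, 1.5, 300)   # phonon wave vector, 1/angstrom
print(k[np.argmax(sigma(k))])     # peak near 0.5 1/angstrom
```

The truncation to two partial waves is a simplification of this sketch; even so, the cross section peaks at the wave vector resonant with the breathing mode.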
\begin{figure}
\includegraphics[width=\columnwidth]{Fig3.eps}
\caption{Dispersion diagram of superfluid He $\omega(k)/2\pi$ \cite{DonnellyJPCRD1998} (black, right axis); calculated scattering cross section $\sigma (k)$ (red, left axis) and the density of elementary excitations at $T=1.5$ K (blue, arb. units). Two shaded bands correspond to the two types of elementary excitations: phonons and rotons.} \label{fig:SigmaDisp}
\end{figure}
The cross section including breathing and quadrupolar vibration modes and integrated over the scattering angle is plotted in Fig. \ref{fig:SigmaDisp} as a function of the phonon wave vector.
$\sigma(k)$ has a peak at $k\approx0.5$ \AA{}$^{-1}$ that closely corresponds to the wave vector of the phonons resonant with the bubble breathing vibration.
The density of elementary excitations in the $k$-space at the absolute temperature $T$ is given by
\begin{equation}
N_{ph}(k) = 4 \pi k^{2} \left(\exp \left[ \frac{\hbar\omega(k)}{k_{B}T}\right] - 1 \right)^{-1} \label{eq:PhononDensity}
\end{equation}
In Fig. \ref{fig:SigmaDisp} $N_{ph}(k)$ is shown on an arbitrary scale for $T$=1.5 K.
It has two maxima corresponding to the phonon and roton branches of the dispersion diagram.
The rate of quasiparticle scattering by the bubble is
\begin{equation}
\Gamma(T) = \int \sigma(k) v_{g} N_{ph}(T,k) dk \label{eq:ScatteringRate},
\end{equation}
where $v_{g}=d\omega/dk$ is the group velocity of the excitations and $k_{B}$ in Eq. (\ref{eq:PhononDensity}) is the Boltzmann constant.
The dephasing of the atomic transition dipole by the uncorrelated scattering events leads to a Lorentzian lineshape with a FWHM spectral width equal to $\Gamma/\pi$.
The effect is analogous to the so-called impact broadening mechanism \cite{AllardRMP1982} in the gas phase, where the elastic collisions between the atoms lead to the dephasing and to the spectral line broadening.
Here, it is assumed that each scattering event leads to a sudden change of the transition phase by more than 1 radian \cite{AllardRMP1982}.
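The order of magnitude of the resulting broadening can be checked with a strongly simplified version of Eqs. (\ref{eq:PhononDensity}) and (\ref{eq:ScatteringRate}): a hard-sphere-like constant cross section $\sigma \approx 2\pi R_{b}^{2}$, a linear dispersion $\omega = v k$ restricted to the phonon branch, and an explicit mode density $k^{2}/2\pi^{2}$ per unit volume (a normalization assumed here, since Eq. (\ref{eq:PhononDensity}) is written up to a constant factor). All of these are assumptions of the sketch, not of the full calculation.

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
KB = 1.380649e-23           # J/K
V = 238.0                   # speed of sound in He II, m/s (assumed)
R_B = 5.3e-10               # bubble radius, m
SIGMA = 2 * np.pi * R_B**2  # hard-sphere-like cross section (assumption)

def fwhm_ghz(T, kmax=1.0e10, n=20000):
    # Gamma(T) = int sigma * v_g * N_ph(T,k) dk over the phonon branch;
    # the Lorentzian FWHM is Gamma/pi (impact broadening)
    k = np.linspace(1e6, kmax, n)
    n_bose = 1.0 / np.expm1(HBAR * V * k / (KB * T))
    dos = k**2 / (2 * np.pi**2)          # phonon mode density per unit volume
    f = SIGMA * V * dos * n_bose
    gamma = np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(k))  # trapezoid rule, 1/s
    return gamma / np.pi / 1e9

for T in (1.35, 1.5, 2.1):
    print(T, fwhm_ghz(T))   # ~7, ~9, ~25 GHz
```

Even this toy version reproduces the $\sim$10 GHz order of magnitude near 1.5 K; the full calculation with the $k$-dependent cross section of Eq. (\ref{eq:CrossSection}) rises faster still and overestimates the data, as discussed below for Fig. \ref{fig:ZPLWidthTheor}.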
\begin{figure}
\includegraphics[width=0.6\columnwidth]{Fig4.eps}
\caption{Temperature dependence of the ZPL spectral width (FWHM). Dots and solid line 1 (green) - experimental data, curve 2 (blue) - calculation according to the atomic bubble model, Eqs. (\ref{eq:CrossSection}) - (\ref{eq:ScatteringRate}) including only the excitations from the phonon branch, curve 3 (red) - calculations including both phonons and rotons.} \label{fig:ZPLWidthTheor}
\end{figure}
In Fig. \ref{fig:ZPLWidthTheor} we compare the calculated ZPL spectral width with the experimental data of Fig. \ref{fig:ZPLspec}(b).
Curve 2 is computed by taking into account only the excitations from the phonon branch, $k<1.0$ \AA{}$^{-1}$.
Curve 3 includes both phonons and rotons.
Both calculated dependencies significantly overestimate the experimental data.
At low temperature the calculated linewidth exceeds the measured value by approximately a factor of two.
As the temperature is increased, the calculated scattering rate increases significantly faster than the experimental line width and the discrepancy increases.
This discrepancy suggests that only a small fraction of thermal phonons and rotons scattered by the bubble leads to a dephasing of the transition dipole.
Scattering of thermal phonons is also responsible for the depolarization of impurity spins and for the broadening of impurity magnetic resonance spectra in liquid and solid He \cite{KinoshitaPRB1994,ArndtPRL1995,FurukawaPRL2006,MoroshkinPR2008}.
In that case, the coupling is much weaker, leading to spectral linewidths of the order of 1 Hz.
A more detailed microscopic model of the impurity-phonon (roton) interaction is required for a quantitative interpretation of optical dephasing and spin depolarization data.
\section{\label{sec:Conclusion}Conclusions}
We have investigated the absorption spectrum of the $4f^{10}6s^{2}$ $^{5}I_{8}$ - $4f^{9}5d6s^{2}$ $^{5}K_{7}$ inner-shell transition of Dy atoms embedded in superfluid $^{4}$He.
We have measured for the first time the intrinsic spectral width of the zero-phonon line of an atomic impurity in liquid He and have studied its dependence on the helium temperature in the range of 1.35--2.1 K.
The observed temperature-dependent broadening of ZPL is attributed to the dephasing of the atomic transition dipole by the scattering of thermal phonons on the impurity atom.
The experimental data do not allow one to determine whether the dephasing is caused by acoustic phonons or by a pseudolocal mode corresponding to the vibrations of the atomic bubble.
However, the effect of rotons seems to be negligible.
The intrinsic spectral width of ZPL obtained by extrapolation to $T$ = 0 is three orders of magnitude larger than the natural linewidth of the free atom.
It is attributed to the shortening of the excited state lifetime by a radiationless quenching.
\begin{acknowledgments}
This work was supported by JSPS KAKENHI grants No JP24000007 and JP17H01145.
\end{acknowledgments}
As a new educator, I felt that I needed a place to capture my educational journey, connect with other teachers and push myself to reflect on my teaching experiences. I decided to jump into the world of blogging, and surprisingly I am sticking with it. I was talking with my husband about blog post ideas, and lo and behold I got an email that was exciting. The lovely Sarah Cole of Tales of Teaching with Tech had nominated me for a Liebster Award. It is nice to know that someone has noticed my little blog here. I am not doing this for readers, but it is exciting that someone might be reading my posts. If you are indeed reading this, thank you.
As a nominee, I have the pleasure of answering a series of questions. I also get to nominate some lovely people as well. It is all about spreading joy and love in the blogging world. I love it!
I started blogging in December 2014 (actually it was a month ago--yesterday!). My goal was to capture pieces of my educational journey during the first years of teaching. I know that it is going to be a wild ride, and I thought that it might be interesting. I had been connected with several teacher bloggers through Instagram, and I thought that it was about time that I started recording the happenings in my classroom.
Relevance. I always try to connect all my lessons to the outside world. All too often we get focused on the happenings of the classroom, and students lose sight of real-world applications of their learning. During all my lessons, I strive to have real-world examples, bring guest speakers and lecturers, and have students create projects that have a connection to their world and their interests. It is challenging, but I'm making it work. We are moving beyond the classroom, away from the desks and into a modern, exciting world!
3. Is there something you learned late in your blogging journey that you wish you would have known before?
As I have only been blogging for one month, I am still new at this. I am so thankful that other bloggers are so kind and willing to help. The community is simply fantastic. When I started to seek advice on picking a blogging name, I am glad that I followed sage advice of other bloggers and picked something that reflected my teaching but did not include my grade level. There has already been talk of moving grade levels next year!
4. What is your favorite past time other than blogging?
I lead a pretty boring life. I am currently completing my master's, participating in California's required beginning teacher support program (formally known as BTSA) and trying to complete my first year of teaching. Those veteran teachers out there know that I might be nuts, but I love staying busy. This year I have already been a county-wide professional development speaker (twice!) and I have led a handful of sessions at my district as well. I guess you could say that I am a teaching junkie.
Outside of teaching, I do love spending time with my husband and puppies. We take walks, cuddle on the couch and go on road trips. You can also find me practicing yoga and juggling from time to time.
5. How many hours per week do you dedicate to your blog?
This is all still new to me. I want to start blogging regularly but still need to compile more ideas to make that happen. Ideally, I would like to blog once per week, but I'm not sure when that will happen. If you have any ideas, I'm all for it!
6. What category of blog posts do you enjoy the most?
Most of the bloggers that I follow are middle school teachers. They serve as inspiration and offer me so much support. Some of my favorites are: Lessons with Coffee, Lockers, Literature & Life, and 4 Mula Fun.
7. Which post that you've written are you most proud of?
Honestly, I am most proud of my classroom page. Technically it is not a post, but it took a lot of thought and planning. I worked a lot in my room over the summer. The post captures the excitement, thrill and lessons learned after receiving my first set of keys. As with most teachers, it is really a labor of love. I put my heart into all my classroom projects, and I feel that it is now something I am really proud of; I hope my students feel welcome and comfortable in my room.
8. Is there a post that you've been planning to do but have been postponing?
Yes. I have a list of technology-themed posts that I have been avoiding. I'm not sure why, but when I start writing, I start feeling that it is not good enough. I keep telling myself to put them out there, but I have not quite hit publish.
9. What is your favorite aspect of blogging?
I love that there is a strong community of teachers that support one another and share the amazing activities happening in their rooms. It inspires me to continue working hard in the classroom, and I am thankful that I can always turn to the blogging world if I need inspiration, help or advice. It is wonderful.
10. Which recipe, project, or idea from my blog would you like to try yourself?
Ironically, I am quite technology focused in my classroom. We use many of the same resources on a regular basis in my room. I would like to start posting about how I use these resources (like your Kahoot post). I use Kahoot in an interesting manner in my room, and it would be nice to share.
11. What is your all time treat/thing that you hide in your desk..that you pull out..when you've had a tough day?
I hide a stash of dark chocolate, coffee and my yoga mat in my desk. There have been days where I spent my lunch period eating nothing but chocolate and doing yoga asanas. I've also been known to blast music in my room to improve my mood.
1. I never saw myself as a middle school teacher, yet I was thrilled to be given the opportunity to be with my wonderful 6th graders.
2. I drive the long way home on a regular basis to spend more time in my Miata.
3. I lost over 50 lbs. through Weight Watchers from August 2013-June 2014. I've been at goal for nearly 8 months!
4. Public speaking terrifies me, but I always say, "yes" when asked to present at professional development events.
5. I would rather cuddle on the couch with my puppies than go to parties with human folk.
6. I changed my undergraduate major during the end of my junior year (Political Science to Geology) and graduated with a bachelor's degree in Geology on time.
7. I hate having my face in the water but would stay in the shower all day if we weren't in a drought.
8. My shoe is a women's 11, which makes shoe shopping depressing and extremely difficult.
9. I worry that my students will remember me as a horrible teacher.
10. I go to my parents every Thursday to have dinner. It is one of my favorite traditions.
11. I finally can appreciate being tall (even though it is hard to find long enough pants and maxi skirts).
5. Contact my nominees and let them know that I nominated them.
Thanks for the nomination, Sarah!
Graphics and Fonts Courtesy of: L. Paul Designs For All and KG Fonts.
Raniceps raninus, the tadpole fish, is a species of Gadidae fish native to the northeast Atlantic Ocean around the coasts of France, Ireland, and the United Kingdom and the North Sea. This species grows to a total length of . It is of no importance to the commercial fishery industry, though it can be found in the aquarium trade and is displayed in public aquaria.
References
Gadidae
Fauna of the British Isles
Fish of the North Sea
Fish described in 1758
Taxa named by Carl Linnaeus
Do you have to know Yoga to be able to coach with me?
Absolutely not. Yoga has had such an amazingly positive impact on my life that I want to share the yoga love. I might give you one or two "yoga" tools I've learned in my yoga career. You'll love them, I promise.
One-on-one coaching is about taking action, learning, and growing. Expanding beyond what you thought was possible on your own. Coaching sets you free from your limiting beliefs, removes you from running like mad on a hamster wheel, and fear of not being good enough. It creates inner peace and confidence, self-acceptance, and courage. Courage to do what you say you want to do.
We do it all together. Envisioning, then creating the life you always knew you wanted. There may be crying and laughter, but I'm there by your side like your own personal cheerleader. Keeping you accountable and on track.
Coaching works on what is happening in the present. It puts you in the driver's seat of your life. Therapy is diagnostic and treatment based and focuses on past issues with an emphasis on understanding rather than taking action.
Coaching sessions are conducted over the phone at the agreed upon time. A few minutes before the call, make sure you're comfortable and in a place where you can talk freely and easily without distractions.
I believe every soul is unique with its own strengths and challenges. I work with clients for 3 months and longer. I want to set you up for success! After the initial 3-month session, you may decide to work with me for a year or two or not. It's completely up to you!
Women who are ready for change. Women who are ready to put their excuses to rest. Women who are ready to dig in, take a deep breath and look inside. Women who are ready to feel content, confident, and alive. Women who are ready to magnify their brilliance. Does that sound like you?
Start where you are right now in your life, career, relationships, and dreams. What's not working for you? Where do you feel stuck? Where do you want to be? We'll create a map to get you there.
First, apply for the Illumination session to see if we're a fit. Next, we'll dive into our first 90 minute session, where I help you get clear on what you're craving for your life.
using System;
using System.Globalization;
using Xamarin.Forms;
namespace MyQuizMobile.Converters {
// Maps the stored answer-option text to a bool (true = correct answer) and back.
internal class AnswerResultConverter : IValueConverter {
public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return (string)value == Constants.CorrectAnswerOptionText; }
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { return (bool)value ? Constants.CorrectAnswerOptionText : Constants.WrongAnswerOptionText; }
}
// Maps the answer-option text to a German display string ("Richtig"/"Falsch"); one-way.
internal class AnswerResultTextConverter : IValueConverter {
public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return (string)value == Constants.CorrectAnswerOptionText ? "Richtig" : "Falsch"; }
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { return null; }
}
// Returns true when the question belongs to the quiz category; one-way.
internal class QuestionTypeAnswerConverter : IValueConverter {
public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { return (string)value == Constants.QuestionCategoryQuizText; }
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { return null; }
}
}
Q: print or select inside SQLServer function In SQL Server, it is sometimes necessary to follow the code without a debugger. This is possible with a print statement or a select statement. The problem is that SQL Server does not allow these methods within functions, which turns a complex function into a black box. I have tried writing messages to a text file via a stored procedure from within the function, but that is not allowed either.
Is there any way to trace my code, like a print statement, within a function?
A: Your statement
The problem is SQLServer does not allow these methods within the functions
is the problem and the answer at the same time. Yes: this is not allowed within functions.
When I have to deal with larger function code, I usually copy the code into a query window and test it externally. Doing so, you can use PRINT or SELECT ... INTO or any other approach to save some intermediate values.
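For example, a minimal sketch of that external-testing approach (the table and column names here are hypothetical, not taken from the question):

```sql
-- Paste the function body into a query window and declare its
-- parameters as local variables, so PRINT/SELECT become available:
DECLARE @CustomerId INT = 42;        -- hypothetical function parameter
DECLARE @Total DECIMAL(18, 2);

SELECT @Total = SUM(Amount)
FROM dbo.Orders                      -- hypothetical table
WHERE CustomerId = @CustomerId;

PRINT 'Intermediate total: ' + CAST(@Total AS VARCHAR(32));

SELECT @Total AS FinalResult;        -- or SELECT ... INTO #debug to keep snapshots
```

Once the logic behaves as expected, remove the debugging statements and move the body back into the function.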
An insert in the slot from the inside to close it up would be enough. These machines would be hidden away. Looks aren't that important. Or they could leave it open for better cooling of the dual drives.
A PCI-E Video GPU that is OpenGL 2.x compliant will push sales of the mini up considerably, especially if it can push a 30" ACD.
All I hope is that the Mac Mini is user-upgradable (RAM and hard disk) - i.e. with a Phillips screwdriver, and not, as now, with a wallpaper scraper and a Hail Mary!
One of Apple's best computers!
I don't see any information here. These people are basically saying what a lot of people are saying, which is that the Mini is a very useful machine and Apple would be really stupid to get rid of it. Amazon sales mean very little because as I said in the other thread, the matte MBP model is the highest selling Mac laptop on Amazon (second overall) but look what they did to matte.
I'll reserve judgement until I see an all-metal Mini with firewire, displayport, Nvidia chips, faster hard drives, easy access and a reasonable price. We'll probably find out in about 3 weeks.
Price will likely go up a bit if they do introduce a new model as they normally use the same parts as the Macbook as mentioned. They could keep prices down by removing the optical drive altogether. I don't even use mine any more. Add an extra USB port and I'll get two externals if I need to. If the prices go up by the same as the Macbook, it would make it a fair bit less cost-effective to use them in a server rack and they don't need the optical drives.
The Mac Mini is DEAD!!
I think you've got the idea. There's no way in the world that they developed this new display just for use as a second display with laptops. Talk about a small niche market!
I'll bet the new mini will be a 6" x 6" x 1" aluminum block, internally a MacBook with no display, keyboard, battery, or optical drive, and will plug right into the MagSafe connector on the new display. If it were me, I'd design it so that extra modules would plug into the top just like legos. Optical drive? Plug it in. Bigger hard drive? Better graphics card? Stack them all up. You'd have a stack of 6" x 6" x 1/2" boxes on top of your main box. They'd have to price it so each configuration would cost a little more than an equivalent iMac, but then they would have an expandable Mac that could never be mistaken for an e-waste beige box.
Personally I would LOVE a Mac mini with 1TB HDD. It would be the ultimate media PC. However, it would eat into Apple TV sales so I doubt Apple would make such a device.
Ahh, Mini. I'm so in love with it. Was my first Mac back in the day, they're so cute almost make me cry with joy.
Let's hope they don't neglect it.
The Mac mini is great. Just implement Firewire 800 on it and it will be perfect. At least two Firewire ports.
Mhhhhh, a new Mini... that would be pretty much the only thing that would disrupt (or at least severely delay) my "Buy a Macbook" plans right now... I certainly hope that Apple will breathe new life into this machine!
That line made my soul bleed. *huggles his trusty first-gen Mac Mini* Nuuuu, I won't cut you up!
...KITT LED Light Scanner mod, please?
You can wait until they announce a new version. If you don't like the new one, order the old one from the clearance store.
In my experience, a USB to RS232 adapter works just fine.
Thank god! Thank you, AppleInsider, and macminicolo.net! My god, I didn't even think of all the mini-server applications! Instant scalability--neat idea! But, as I expected, there are many, many commercial installations that take advantage of the mini's unique form factor and stable OS. They also appear to be serving a variety of both media and not-so-media related roles in their installed commercial applications (slot machine back-ends?), which I think is very, very cool. It really is the ultimate kiosk machine and small-system controller/server/whatever. Now, I can confidently await the new MacMini's introduction, hopefully this fall, to complete my home theater, home automation, video surveillance (although, no off-the-shelf, OS X-based CCTV software exists that I know of--it's all Windows-only based stuff), art installations, and other cool, home-project kinda applications! All hail, the mighty MacMini lives another day!
There are currently 3 products totally unsupported: the mini, the Apple TV and the Time Capsule.
No comment on the mini here again, but the Apple TV (their hobby) is lacking a few features, and the Time Capsule, well... overpriced and totally useless given the amount of data all of us have at home these days. No expansion option available so far.
The other "fact" is the leftover white MacBook. Cute to keep it in their list, but what for? Well, to match the transition they are undergoing. An entry system, sort of, as a good excuse for what?
On the other hand, it wouldn't surprise me too much if they shrink it to state-of-the-art.
And for sure the mini is selling well. Probably not millions of units like the iPhone, but it's the only headless device left (ignoring the Mac Pro... exchange rates!).
Steve, are you listening? Put the mini in a mini tower (design doesn't matter; if you want, make it a brick ;-) ), let us handle the hardware and we'll buy this product! Even at the price of the current mini you are far off the low-price segment here in Europe.
package org.apereo.cas.ticket.registry.queue;
import org.apereo.cas.JmsQueueIdentifier;
import org.apereo.cas.authentication.CoreAuthenticationTestUtils;
import org.apereo.cas.ticket.TicketGrantingTicketImpl;
import org.apereo.cas.ticket.expiration.NeverExpiresExpirationPolicy;
import org.apereo.cas.util.junit.EnabledIfPortOpen;
import lombok.val;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
/**
* This is {@link UpdateTicketMessageQueueCommandTests}.
*
* @author Misagh Moayyed
* @since 5.2.0
*/
@EnabledIfPortOpen(port = 61616)
@Tag("JMS")
public class UpdateTicketMessageQueueCommandTests extends AbstractTicketMessageQueueCommandTests {
@Test
public void verifyUpdateTicket() {
var ticket = new TicketGrantingTicketImpl("TGT", CoreAuthenticationTestUtils.getAuthentication(), NeverExpiresExpirationPolicy.INSTANCE);
val cmd = new UpdateTicketMessageQueueCommand(new JmsQueueIdentifier(), ticket);
cmd.execute(ticketRegistry.getObject());
ticket = ticketRegistry.getObject().getTicket(ticket.getId(), ticket.getClass());
assertNotNull(ticket);
assertEquals("TGT", ticket.getId());
}
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ERMine.Core.Modeling
{
public class Domain : IEquatable<Domain>, IEntityRelationship
{
public string Label { get; private set; }
public IReadOnlyList<string> Values { get; protected set; }
internal Domain(string label)
{
Label = label;
Values = new List<string>();
}
internal Domain(string label, IEnumerable<string> values)
{
Label = label;
Values = new List<string>(values);
}
#region IEquatable
public bool Equals(Domain other)
{
if (other == null)
return false;
return this.Label == other.Label;
}
public override bool Equals(Object obj)
{
if (obj == null)
return false;
var entityObj = obj as Domain;
if (entityObj == null)
return false;
else
return Equals(entityObj);
}
public override int GetHashCode()
{
return this.Label.GetHashCode();
}
public static bool operator ==(Domain domain1, Domain domain2)
{
if (((object)domain1) == null || ((object)domain2) == null)
return Object.Equals(domain1, domain2);
return domain1.Equals(domain2);
}
public static bool operator !=(Domain domain1, Domain domain2)
{
if (((object)domain1) == null || ((object)domain2) == null)
return !Object.Equals(domain1, domain2);
return !(domain1.Equals(domain2));
}
#endregion
}
}
This traditional region encompasses the city of Porto, inland along the Douro Valley and north to the border with Spain.
Located along the Douro river estuary in Northern Portugal, Porto is one of the oldest European centres and its historical core has been declared a World Heritage Site by UNESCO. The western part of its urban area extends to the coastline of the Atlantic Ocean. Its settlement dates back many centuries when it was an outpost of the Roman Empire.
One of Portugal's internationally famous exports, port wine, is named after Porto, as the packaging, transport and export of the fortified wine traditionally occurred here. Among the architectural highlights of the city, Porto Cathedral is the oldest surviving structure, together with the small Romanesque Church of Cedofeita, the Gothic Church of Saint Francis, the remains of the city walls and some 15th century houses.
The Douro valley has long been devoted to vineyards and has also been designated by UNESCO as a World Heritage Site. Traditionally, the wine was taken down river to Vila Nova de Gaia in flat bottom boats called rabelos, where it was stored in barrels in cellars. In the mid-20th century dams were built along the river, which ended this river traffic.
Guntram is an opera in three acts by Richard Strauss, to a libretto by the composer. It was premiered in Weimar under the direction of the composer. The work was revised in 1940.
Cast
Synopsis
Guntram, a member of an order of monk-knights who fight in defence of justice, comes to free a people from the tyranny of an evil duke, Duke Robert. He falls in love with the duke's wife, Freihilde.
References
External links
Opera by Richard Strauss
Operas of the 1890s
Operas in German
Operas premiered in Weimar
Q: Incredibuild not very fast Our project takes a long time to compile, so I'm trying the trial version of IncrediBuild.
The solution contains about 50 projects.
The thing is, when I compile with IncrediBuild it doesn't go much faster, and sometimes it even takes longer...
Here is a screenshot of the graph produced by IncrediBuild while building a subproject of the solution and its dependencies; at times it appears to be doing absolutely nothing:
The build without IncrediBuild took 4 minutes; with it, 5 minutes here. Most often it falls to around 3:30, as in the following case, but still with some gaps:
Any idea what could cause this to happen?
Note that the problem is the same if I build the whole solution.
By the way, the red line is CPU usage, green is network in, and blue is network out, as reported by IncrediBuild.
Edit: just to be clear, and since some people tend to focus on the low compile times of the examples above instead of trying to give a meaningful answer: a full rebuild of the project takes about an hour and a half.
A: The best thing you can do is to send your log file to the IncrediBuild support team at support@incredibuild.com
In order to extract the log file, please follow the following instructions:
* Set your Agent's logging level to "Extended" (right-click the IncrediBuild tray icon -> Agent Settings -> "Agent|General" page).
* Run your build.
* Double-click the IncrediBuild tray icon to open the Build Monitor.
* Select "File->Save Monitor File As..." to save the build progress file and attach the file to your reply.
* Restore your logging level back to "Minimal".
Please mention in your email that Dori referred you to the support department.
Thanks,
The IncrediBuild Team.
Pig review roundup: Nicolas Cage film emerges as one of the best films of 2021 with 98 per cent rating on Rotten Tomatoes
When the trailer for Nicolas Cage's Pig came out, few thought it would become one of the best-reviewed films of the year, but here we are. Despite the silly-sounding premise, the film appears to have depth.
The film, directed by Michael Sarnoski, has scored an impressive 98 per cent on the review aggregation site Rotten Tomatoes.
The critical consensus reads, "Like the animal itself, Pig defies the hogwash of expectations with a beautiful odyssey of loss and love anchored by Nicolas Cage's affectingly raw performance."
Cage plays the role of a truffle hunter living in the wilderness of Oregon who loses his female pig to a kidnapper. To find his lost sow, he has to return home to Portland and confront his past. That brief description might sound weird for anybody else; for Nicolas Cage, it is hunky-dory.
Pig is John Wick, if the puppy were a swine and it were kidnapped, not killed.
Here is what the critics are saying:
AV Club's Mike D'Angelo wrote that the film does not have any "plot twists, in the traditional sense, but each successive encounter reveals a new facet that enriches the tale."
Los Angeles Times' Noel Murray illustrated how Pig is different from John Wick, writing, "Though its plot follows the same rough outline of a "John Wick"-style shoot-em-up, "Pig" is actually a quiet and often melancholy meditation on loss, anchored by a character who wishes he could shake free of the person he used to be."
The Wrap's Carlos Aguilar conceded that while not every ingredient makes sense when put together "the product of their intermingling inside the filmmaker's narrative pot render a special concoction."
San Jose Mercury News' Randy Myers said, "It is Cage who carries "Pig" with a measured performance in which his trademark outbursts pierce the soul. He's magnificent."
Pig released today (July 16) in the US. There is no India release date yet.
Geranomyia opinator is a species of two-winged fly first described by Alexander in 1950. Geranomyia opinator belongs to the genus Geranomyia and the family of limoniid crane flies.
The species' range is Venezuela. No subspecies are listed in the Catalogue of Life.
Sources
Limoniid crane flies
opinator
Where in the world is wealth concentrated? Do the richest "1%" of people on the planet influence national and international politics? How can we understand the refugee crisis in Europe? What provoked the political and economic turmoil in Venezuela? In this section, we map and track power in the world at a geopolitical, financial and cultural level; we address the social, political and economic crises and conflicts affecting us directly or indirectly.
Do extreme wealth inequalities turn societies into plutocracies? This project investigates how the ultra-rich influence politics. The project, developed by students and Prof. Peter Hägel within the billionaireswatch.org course at AUP, is non-partisan and non-profit.
\section{Appendix}
We present a range of animations of modes from the families described in the main text. The filenames, along with additional information, are listed in Table~\ref{tab:animations}. All animations are of duration $9000 t_0$ and of simulations at a shear rate of $\dot{\gamma}=(150 t_0)^{-1}$ with a $3_1(-)$ knot. The green sections of the filaments are markers to allow the motion to be more easily followed.
\begin{table}[!htpb]
\caption{\label{tab:animations} Examples of animations of modes belonging to the various families described in the main text. Files may be accessed online~\cite{animations}. Animations were created using VMD~\cite{humphrey}.}
\begin{center}
\begin{tabular}{l|c|c|c|r} \hline
Filename & Mode & Regular/ & $N$ & $\alpha \times 10^3$ \\
& Family & Chaotic & & \\ \hline
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam1r.mpg}{{\color{blue}\underline{fam1r.mpg}}} & I & r & 50 & 0 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam2r.mpg}{{\color{blue}\underline{fam2r.mpg}}} & II & r & 50 & 0 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam2c.mpg}{{\color{blue}\underline{fam2c.mpg}}} & II & c & 50 &0 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam3r.mpg}{{\color{blue}\underline{fam3r.mpg}}}& III & r & 50 &0.04 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam3r2.mpg}{{\color{blue}\underline{fam3r2.mpg}}} & III & r & 50 &0.16 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam4c.mpg}{{\color{blue}\underline{fam4c.mpg}}}& IV& c & 50 &0.16 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam5r.mpg}{{\color{blue}\underline{fam5r.mpg}}} & V & r & 50 &0.64 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam6c.mpg}{{\color{blue}\underline{fam6c.mpg}}} & VI& c & 50 &1.28 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam7r.mpg}{{\color{blue}\underline{fam7r.mpg}}}& VII& r & 50 &0.64 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam5rN40.mpg}{{\color{blue}\underline{fam5rN40.mpg}}} & V & r & 40 &0.64 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam5rN70.mpg}{{\color{blue}\underline{fam5rN70.mpg}}} & V & r & 70 & 0.64 \\
\href{http://iopscience.iop.org/0295-5075/92/3/34003/media/fam5rN100.mpg}{{\color{blue}\underline{fam5rN100.mpg}}} & V & r & 100 &0.64 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{N50_modes_sup}
\caption{\label{fig:N50_modes_sup} The values of the order parameters, averaged over single runs for $N=50$ filaments at different values of $\alpha$. (a) The angle of the direction of maximum extension to the $z$-axis, $\phi$. The averages of $\phi$ for each run are plotted for a given $\alpha$ in an arbitrary order. The labels indicate the modes to which the different groups of points correspond. (b) The same as (a) but for $C_2$, an order parameter to detect two-fold symmetry about the $z$-axis. Lower values indicate more symmetric configurations. It should be emphasised that all the points within two consecutive vertical lines correspond to different runs at the same $\alpha$ -- the positions along the $x$-axis within each section are irrelevant.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.25]{chaotic_pow_spec}
\caption{\label{fig:chaotic_pow_spec} Power spectrum calculated by discrete Fourier transform of the displacement about the average drift for the chaotic mode plotted in Fig.~\ref{fig:reg_chaos_inset}. The dashed line has a slope of -2.}
\end{center}
\end{figure}
We next briefly discuss the two order parameters that were used to help group runs into mode families. The first, $\phi$, was the angle of the direction of maximum extension to the $z$-axis, allowed to vary between 0 and $\pi/2$. $\phi$ was determined by finding the eigenvector of the largest eigenvalue of the radius of gyration tensor. The second, $C_2$, was defined as follows
\begin{equation}
C_2 = \frac{1}{NR}\sum_{i}\min_{j}\left|\vec{r}_i-\vec{r}_j^{\: \prime}\right|
\label{C_2}
\end{equation}
where $R$ is the average bead separation and $\vec{r}_j^{\: \prime}$ are the bead positions rotated about the $z$-axis by $\pi$ in the centre of mass frame: the minimum distance from each bead to a bead in the rotated configuration is summed. Smaller values of $C_2$ indicate configurations which are closer to being symmetric under a $\pi$ rotation.
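A minimal sketch of how these two order parameters could be evaluated from an array of bead positions (the $(N,3)$ array layout and the use of consecutive beads to estimate the average separation $R$ are assumptions for illustration, not taken from the simulation code):

```python
import numpy as np

def order_parameters(r):
    """r: (N, 3) array of bead positions.

    Returns (phi, C2): the angle of the direction of maximum extension
    to the z-axis, and the two-fold-symmetry order parameter C_2.
    """
    x = r - r.mean(axis=0)              # centre-of-mass frame

    # phi: eigenvector of the largest eigenvalue of the gyration tensor
    gyr = x.T @ x / len(x)
    w, v = np.linalg.eigh(gyr)
    e_max = v[:, np.argmax(w)]
    phi = np.arccos(min(abs(e_max[2]), 1.0))   # folded into [0, pi/2]

    # C_2: rotate by pi about the z-axis, then sum the minimum distance
    # from each bead to any bead of the rotated configuration
    rot = x * np.array([-1.0, -1.0, 1.0])
    dists = np.linalg.norm(x[:, None, :] - rot[None, :, :], axis=-1)
    R = np.mean(np.linalg.norm(np.diff(r, axis=0), axis=-1))
    C2 = dists.min(axis=1).sum() / (len(x) * R)
    return phi, C2
```

A configuration that is exactly symmetric under a $\pi$ rotation about the $z$-axis gives $C_2 = 0$, and larger values measure the departure from that symmetry.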
Figs.~\ref{fig:N50_modes_sup} (a) and (b) show the values for these two order parameters for different $\alpha$ for the $N = 50$ results. Each point is the average over one of fifty runs -- they are plotted in an arbitrary order. It should be emphasised that all the points within two consecutive vertical lines are for different runs for the same $\alpha$ -- the different positions along the $x$-axis within each section are irrelevant.
We also include a plot of the power spectrum of the data for the chaotic mode (Fig.~\ref{fig:chaotic_pow_spec}). This was obtained by taking the modulus-squared of the discrete Fourier transform of the displacement of the average bead position around its overall drift. As may be seen from Fig.~\ref{fig:chaotic_pow_spec}, the exponent of the decay is close to -2 (the measured value is $-1.94\pm0.03$).
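The transform step can be sketched as follows; the test signal is illustrative, and removing the mean stands in for subtracting the overall drift:

```python
import numpy as np

def power_spectrum(x):
    """Power spectrum: modulus-squared of the discrete Fourier
    transform of a real signal, after removing its mean (a stand-in
    for subtracting the overall drift)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))    # frequencies in cycles per sample
    return freqs, spec

# A pure oscillation concentrates all spectral weight in a single bin:
n = 1024
t = np.arange(n)
freqs, spec = power_spectrum(np.sin(2 * np.pi * 8 * t / n))
```

For the chaotic-mode data, the decay exponent is then read off from a log-log fit of `spec` against `freqs`.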
Q: Allow the user to enter only numbers and disable letters Can you help me?
I have this code
Label(self.window, width=55, text="Enter your weight:").pack()
self.kg = StringVar()
Entry(self.window, width=55, textvariable=self.kg).pack()
I want to allow the user to enter numbers only,
with at most 3 digits and a maximum value of 250.
Please help me, and thank you!
A: Too late, but here you go:
def comm(event):
    def val():
        text = entry.get()
        # Clear the field if it is not a number, is longer than
        # 3 digits, or is greater than 250.
        if text and (not text.isdigit() or len(text) > 3 or int(text) > 250):
            entry.delete(0, 'end')
    # Defer the check until after the key press has been processed.
    root.after(1, val)
entry.bind('<Key>', comm)
Replace 'entry' with the name of your Entry widget; the same goes for root.
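An alternative, arguably more idiomatic route is Tkinter's built-in validation hook, with the acceptance rule kept in a plain function that is easy to test on its own (the names `is_valid_weight` and `attach_validation` are mine):

```python
def is_valid_weight(text, max_digits=3, max_value=250):
    """Accept the empty string (so the field can be cleared) or a
    string of at most max_digits digits whose value is at most
    max_value."""
    if text == "":
        return True
    if not text.isdigit() or len(text) > max_digits:
        return False
    return int(text) <= max_value

def attach_validation(entry):
    """Wire the rule into a Tkinter Entry: '%P' is the would-be
    content of the widget after the edit, and invalid keystrokes are
    rejected outright instead of being deleted afterwards."""
    ok = entry.register(is_valid_weight)
    entry.configure(validate="key", validatecommand=(ok, "%P"))
```

Call `attach_validation(entry)` once after creating the Entry; invalid characters then never appear in the field at all.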
\section{Introduction}
Many concepts of particle physics have a close
relation to superconductivity, for example the
Nambu--Jona-Lasinio model \cite{NJLM}-\cite{ht}
was proposed in analogy to the BCS theory of
superconductivity and is considered as a low-energy effective
theory of QCD. Recently, substantial
progress has been made in the theory of superconductivity
in systems with strong attraction and low carrier
density. That is,
it has been observed that away from
the limits of infinitesimally weak coupling strength
or very high carrier density, BCS-like mean-field theories
are qualitatively wrong, and these systems
possess, along with the superconductive phase,
an additional phase in which
Cooper pairs exist but no symmetry is broken
due to phase fluctuations
({\it the pseudogap phase}).
be regarded as an indication of the
importance of this concept to particle physics is
that recently the formation
of the pseudogap phase
due to dynamic quantum fluctuations at low $N$
was found in the
chiral Gross-Neveu model in $2+\epsilon $
dimensions \cite{gn1}.
The separation of the temperature of pair formation from
that of the onset of phase coherence (pair condensation)
in strong-coupling superconductors
has in fact been known for many years, as the crossover from BCS
superconductivity to Bose-Einstein condensation (BEC)
of tightly bound fermion pairs
\cite{Le,N}.
Intensive theoretical study
of these phenomena in the recent years
(see for example \cite{sc}-\cite{nnnew}),
was sparked by experimental
results on
underdoped (low carrier density)
cuprates that display ``gap-like" feature
{\it above} critical temperature
$T_c$ that disappears only at a substantially higher
temperature $T^*$.
There is experimental evidence that
this phenomenon in high-$T_c$ superconductors
may be connected with precritical pairing
fluctuations above $T_c$.
At present, this crossover
has been studied by variety of
methods
and in many different models.
Because of intimate relationship of
many problems in particle
physics to superconductivity
it seems natural to guess that
the pseudogap may become a
fruitful concept in high energy physics too.
Below we review these phenomena in superconductors
and discuss its possible implications for QCD.
\section{ Pseudogap phase in
strong-coupling and low carrier density
theories of superconductivity}
\subsection{Perturbative results}
The BCS theory describes
metallic superconductors
perfectly.
However, it failed to describe even qualitatively
superconductivity in underdoped High-$T_c$ compounds.
One of the most exotic properties of the latter materials
is the existence of a
pseudogap in the spectrum of the normal state
well above the critical temperature, which, from an experimental point
of view, manifests itself
as a significant suppression of low frequency spectral weight, thus
being in contrast to the exactly zero spectral weight in the case of the
superconductive gap. Moreover, spectroscopy experiments
show that a superconductive gap evolves
smoothly in magnitude and wave vector dependence to a pseudogap
in normal state. Besides that, NMR and tunneling
experiments indicate the
existence of incoherent Cooper pairs well above $T_c$. In principle it
is easy to guess what is hidden behind these circumstances,
and why BCS theory is incapable of describing it.
Let us imagine for a moment that we are able
to bind electrons in Cooper pairs infinitely tightly -
obviously this implies that the characteristic temperature
of thermal pair decomposition will also be
infinitely high, but this does not imply that the
long-range order will survive at infinitely high temperatures.
As first observed in \cite{N},
long-range order will be destroyed in a similar way, as say, in
superfluid ${}^4$He: the tightly bound Cooper pairs
at a certain temperature will acquire nonzero momenta, and
thus we will have a gas of tightly bound
Cooper pairs but no macroscopic occupation of
the zero-momentum level ${\bf q}=0$,
and with it no long-range order. Thus the phase diagram
of a strong-coupling superconductor has three regions:
\begin{itemize}
\item The superconductive phase where there are condensed fermion pairs.
\item The {\it pseudogap} phase where there exist
fermion pairs but there is no condensate
and thus there is no symmetry breakdown and no superconductivity.
\item The normal phase with thermally decomposed Cooper pairs.
\end{itemize}
Of course, the existence of bound pairs above the critical temperature
will result in deviations from Fermi-liquid behavior that
make the pseudogap phase a very interesting object of
study.
In order to describe superconductivity in such a system
the theory should incorporate pairs with
nonzero momentum. Thus, { \it the BCS scenario
is invalid for description of spontaneous symmetry breakdown
in a system
with strong attractive interaction or low carrier density}
(see \cite{N}, \cite{sc} and references therein). So, in principle,
in a strong-coupling superconductor the onset of long-range order has
nothing to do with the pair-formation transition.
The existence of paired fermions is a necessary but not a sufficient condition for
symmetry breakdown.
The BCS limit is a rather exotic case
of infinitesimally weak coupling strength and high carrier density,
where the disappearance of superconductivity
can {\it approximately} be described as a pair-breaking transition.
The strong-coupling limit is another exotic
case where the temperatures of pair decomposition and
symmetry breakdown can be arbitrarily separated. There is nothing surprising
in it: formally, in the case of Bose condensation of ${}^4$He we can also
introduce a characteristic
temperature of thermal decomposition of the ${}^4$He atom;
however, this does not mean that this temperature is somehow
related to the temperature of the Bose condensation of the
gas of ${}^4$He atoms. A schematic phase diagram of a superconductor
is shown in Fig.~1.
\newpage
\vskip 5 cm
\begin{figure}[tb]
\input Phases.tps
\caption[]{Schematic phase diagram of a superconductor
with arbitrary coupling strength. In the strong-coupling limit,
the temperature of the superconductive phase transition
tends to a plateau value corresponding
to the temperature of Bose condensation of a gas of tightly
bound fermion pairs, whereas the characteristic temperature of
thermal pair decomposition grows monotonically
as a function of the coupling strength.}
\label{phases.tps}\end{figure}
Let us show how one can obtain a
pseudogap phase starting from the
BCS Hamiltonian. This was first done in the pioneering work
by Nozieres and Schmitt-Rink \cite{N}
and in the functional integral formalism for
a system with $\delta$-function attraction
by Sa de Melo, Randeria and Engelbrecht
\cite{R}. In this subsection
we briefly reproduce a part of the transparent article \cite{R}
and in the following section we
will show how qualitatively the same result can be obtained
within nonlinear sigma model (3D XY-model)
approach proposed by the author \cite{sc}.
In the following sections
we discuss analogous nonlinear-sigma model
approach to the similar phenomena in the chiral GN and NJL models.
The Hamiltonian of the BCS model is:
\begin{eqnarray}
H &=& \sum_\sigma \int \! d^D x
\, \psi_\sigma^{\dag} ({\bf x})
\left(-{{\bf \nabla}^2 \over 2m} -\mu\right)
\psi_\sigma({\bf x})
+ g \int\!d^D x\,
\psi_\uparrow^{\dag}({\bf x}) \psi_\downarrow^{\dag}({\bf x})
\psi_\downarrow^{\phantom{\dag}}({\bf x})
\psi_\uparrow^{\phantom{\dag}}({\bf x})
\label{1.0},
\end{eqnarray}
where $\psi_\sigma({\bf x})$ is the Fermi field operator,
$\sigma=\uparrow,\downarrow$
denotes the spin components,
$m$ is the fermionic mass, and
$g < 0 $ the strength of an attractive potential
$ g \delta ({\bf x} - {\bf x}')$.
The mean-field equations for the
gap parameter $\Delta$ and the chemical potential $\mu$
can be obtained with a standard variation procedure:
\begin{eqnarray}
-{1\over g} &=& \frac{1}{V} \sum_{\bf k} {1\over 2 E_{\bf k}}
\tanh{E_{\bf k} \over 2T} ,\label{1.1}\\
n &=& {1\over V} \sum_{\bf k} \left(1-{\xi_{\bf k}
\over E_{\bf k}} \tanh{E_{\bf k} \over 2T}\right),
\label{1.2}
\end{eqnarray}
where the sum runs over all
wave vectors
${\bf k}$,
$n=N/V$ is the particle density, with $N$ the total number
of fermions
and $V$ the
volume of the system,
and
\begin{equation}
E_{\bf k}=\sqrt{\xi_{\bf k}^2 + \Delta^2}
{}~~~\mbox{with}~~~
\xi_{\bf k} = {{\bf k}^2 \over 2 m} - \mu
\label{1.3}
\end{equation}
is the energy of single-particle excitations.
The $\delta$-function potential produces a divergence and requires
regularization. A BCS superconductor possesses
a natural cutoff supplied by the Debye frequency $ \omega _D$.
For the crossover problem
to be treated here
this is no longer a useful quantity, since in the strong-coupling
limit all
fermions
participate in the interaction, not only those
in a thin shell of width $ \omega _D$ around the Fermi surface.
To be applicable in this regime,
we renormalize
the gap equation in three dimensions with the help of the
$s$-wave scattering length $a_s$,
for which the low-energy limit of the
two-body scattering process gives an equally divergent
expression \cite{R}:
\begin{equation}
{m \over 4 \pi a_s}
=
{1\over g}
+ {1\over V}
\sum_{\bf k}
{m \over {\bf k}^2} .
\label{1.4}
\end{equation}
Eliminating $g$ from (\ref{1.4}) and (\ref{1.1})
we obtain a renormalized gap equation
\begin{equation}
-{m \over 4 \pi a_s} = {1\over V} \sum_{\bf k}
\left[{1\over 2 E_{\bf k}} \tanh{E_{\bf k} \over 2T}
- {m \over {\bf k}^2} \right],
\label{1.5}
\end{equation}
in which $1/k_Fa_s$ plays the role
of a dimensionless coupling constant which monotonically increases
from $-\infty$ to $\infty$ as the bare
coupling constant $g$ runs from small
(BCS limit) to large values
(BEC limit).
This equation is to be solved
simultaneously with (\ref{1.2}).
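As a concrete illustration (not part of the original analysis), the coupled equations (\ref{1.2}) and (\ref{1.5}) can be solved numerically at the mean-field level, i.e. at $\Delta \to 0$, which determines $T^*$ and $\mu$ as functions of $1/k_F a_s$. The sketch below assumes units $\hbar = k_B = k_F = 1$ and $m = 1/2$ (so that $\epsilon_F = 1$); all function names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Units: hbar = k_B = k_F = 1 and m = 1/2, so eps_F = k_F^2/(2m) = 1
# and the total density is n = k_F^3/(3 pi^2).
M = 0.5
N_DENS = 1.0 / (3.0 * np.pi ** 2)

def _tanh_over_x(y):
    """tanh(y)/y with the removable singularity at y = 0 handled."""
    return np.tanh(y) / y if abs(y) > 1e-10 else 1.0

def gap_residual(T, mu, inv_kfa):
    """Renormalized gap equation (1.5) evaluated at Delta -> 0, i.e. at T = T*."""
    def integrand(k):
        xi = k * k / (2.0 * M) - mu
        # [tanh(xi/2T)/(2 xi) - m/k^2], with the 3D measure k^2/(2 pi^2)
        return (k * k / (2.0 * np.pi ** 2)) * (
            _tanh_over_x(xi / (2.0 * T)) / (4.0 * T) - M / (k * k))
    val = quad(integrand, 0.0, 10.0, limit=200)[0] \
        + quad(integrand, 10.0, np.inf, limit=200)[0]
    return val + M * inv_kfa / (4.0 * np.pi)  # zero when (1.5) is satisfied

def number_residual(T, mu):
    """Number equation (1.2) at Delta -> 0: n = (1/V) sum_k [1 - tanh(xi/2T)]."""
    def integrand(k):
        xi = k * k / (2.0 * M) - mu
        return (k * k / (2.0 * np.pi ** 2)) * (1.0 - np.tanh(xi / (2.0 * T)))
    return quad(integrand, 0.0, np.inf, limit=200)[0] - N_DENS

def solve_tstar(inv_kfa, lnT0, mu0):
    """Solve (1.2) and (1.5) simultaneously; returns (T*, mu)."""
    def eqs(v):
        T, mu = np.exp(v[0]), v[1]  # solving for ln T keeps T positive
        return [gap_residual(T, mu, inv_kfa), number_residual(T, mu)]
    sol = fsolve(eqs, [lnT0, mu0], xtol=1e-10)
    return np.exp(sol[0]), sol[1]
```

On the BCS side ($1/k_F a_s < 0$) this yields $\mu$ close to $\epsilon_F$ and a small $T^*$, while on the BEC side the chemical potential is already large and negative, in line with the discussion that follows.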
These mean-field equations were
analyzed e.g.
in Ref.~\cite{R}.
In the BCS limit, the chemical potential $\mu $
does not differ much
from the Fermi energy
$\epsilon_F$, whereas with increasing interaction strength,
the distribution function $n_{\bf k}$
broadens and $\mu $ decreases, and
in the BEC limit we have tightly bound
pairs and nondegenerate fermions with a large negative chemical
potential $|\mu|\gg T$. In the BCS limit ($\mu \gg T_c$),
Eq.~(\ref{1.5}) yields the critical temperature
$T_c^{\rm BCS}=8e^{-2}e^\gamma\pi^{-1}\epsilon_F \exp(-\pi/2k_F|a_s|)$,
where $\gamma=- \Gamma '(1)/ \Gamma (1) = 0.577 \dots~$,
while Eq.~(\ref{1.2}) gives the chemical
potential $\mu = \epsilon_F$ in this case.
In the strong-coupling (BEC) limit,
Eqs.~(\ref{1.2}) and (\ref{1.5})
give $\mu = - E_b/2$,
where $E_b=1/m a_s^2$ is the binding energy of the bound pairs.
In this limit, the mean-field equation (\ref{1.2})
gives that the ``gap'' sets in at
$T^* \simeq E_b/2 \log(E_b / \epsilon_F)^{3/2}$.
A simple ``chemical'' equilibrium estimate
$(\mu_b=2\mu_f)$ yields for the temperature
of pair
dissociation: $T_{\rm dissoc} \simeq E_b/\log(E_b/\epsilon_F)^{3/2}$
which shows that at strong coupling
$T^*$ is indeed related to pair formation
\cite{R} (and in the strong-coupling regime
lies above the temperature of the onset of
phase coherence \cite{N,R}). Obviously
$T^*$ is a monotonic function of the coupling strength.
Taking Gaussian
fluctuations into account, one sees that
in the strong-coupling regime the
temperature $T^*$ obtained from the above estimate
is not related in any respect to the critical temperature
of the onset of phase coherence.
The expression for the thermodynamic potential
with Gaussian corrections reads \cite{N,R}:
\begin{equation}
\Omega= \Omega_0 - T \sum_{{\bf q}, i q_l }
\ln\Gamma ( {\bf q}, i q_l),
\end{equation}
where
\begin{equation}
\Gamma^{-1} ( {\bf q}, i q_l) =
\sum_{\bf k} \left\{ \f{1 -n_{\bf k} -n_{\bf k+q}}{i q_l-\xi_{\bf k} -
\xi_{\bf k +q}} + \f{m}{ {\bf k}^2} \right\} -\f{m}{4 \pi a_s}.
\end{equation}
where $n_{\bf k}$ is the Fermi occupation number and $i q_l = 2 \pi i l T$
are the bosonic Matsubara frequencies.
Following \cite{N}, it is convenient to rewrite $\Omega$ in terms
of a phase shift defined by
$\Gamma( {\bf q}, \omega \pm i0) = |\Gamma({\bf q}, \omega)| \exp(\pm i
\delta ({\bf q}, \omega))$. After inclusion of the Gaussian corrections, the
number equation $N= - \partial \Omega/ \partial \mu$ reads:
\begin{equation}
n=n_0(\mu,T) + \sum_{\bf q} \int_{-\infty}^\infty \f{d \omega}{\pi}
n_B(\omega)
\f{\partial \delta}{\partial \mu}({\bf q}, \omega)
\label{num}
\end{equation}
where $n_0$ is the density of ``free'' fermions defined in (\ref{1.2}) and
$n_B(\omega) =1/(\exp (\omega/T)-1)$ is the Bose function.
In order to study the behavior of $T_c$ one should solve
the number and gap equations simultaneously.
In the BCS limit $T_c$ is not affected substantially by the Gaussian
corrections; thus the superconductive transition
can be described by mean-field theory, and correspondingly $T_c \approx T^*$.
\footnote{As first discussed
in 1960s, even in BCS superconductors there is a narrow region
of precritical pairing fluctuations.
This gives rise, e.g., to the so-called paraconductivity effect.
In particle physics, this
phenomenon
was pointed out
by Hatsuda and Kunihiro \cite{ht2,ht} }
In the opposite limit, numerical solutions \cite{N,R} show
that the temperature of the superconductive phase transition tends
to a constant value that does not depend on the coupling strength
and is equal to the condensation temperature of an
ideal Bose gas of particles of mass $2m$ and density $n/2$,
where $m$ and $n$ are the mass and density of the electrons, respectively:
\begin{equation}
T_c= \left[\frac{n}{2\zeta(3/2)}\right]^{2/3}\f{\pi}{m}= 0.218 \epsilon_F
\label{bose}
\end{equation}
where $\epsilon_F$ is used simply as a dimensional constant, namely
the Fermi energy of a gas of free fermions with density $n$ and mass $m$
(obviously, at very strong coupling,
when all fermions are paired, there is no Fermi surface).
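The numerical constant in (\ref{bose}) is easy to verify; the sketch below (units $\hbar = k_B = 1$; the function name is ours) evaluates the ratio $T_c/\epsilon_F$, in which the density $n$ cancels:

```python
import numpy as np
from scipy.special import zeta

def bec_tc_over_ef():
    """Ratio T_c/eps_F from Eq. (bose); the density n cancels out."""
    z = zeta(1.5)                                   # Riemann zeta(3/2) ~ 2.612
    tc = np.pi * (1.0 / (2.0 * z)) ** (2.0 / 3.0)   # T_c in units of n^{2/3}/m
    ef = (3.0 * np.pi ** 2) ** (2.0 / 3.0) / 2.0    # eps_F in units of n^{2/3}/m
    return tc / ef
```

This reproduces the quoted value $T_c \approx 0.218\,\epsilon_F$.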
The system of gap and number equations can be solved
analytically in the strong-coupling limit. First,
as pointed out in \cite{N,R}, one can make the following approximation:
retain the Gaussian corrections
only in the number equation and solve it together with the mean-field
``gap'' equation. Near $T_c$ in the strong-coupling regime
one finds $\mu(T_c) = - E_b/2$, where $E_b$ is the
energy required to break a pair.
One can observe that in this limit $\Gamma({\bf q}, z)$ has an
isolated pole on the real axis for each $q$, representing
a two-body bound state with momentum $\bf q$. Since formally in this limit
we can make energy required to break a pair arbitrarily large, this
pole is widely separated from the branch cut representing
the continuum of two-particle excitations. The low-energy
physics at temperatures much lower than the temperature of thermal pair decomposition
is governed by this pole, and one can write
$\Gamma({\bf q}, i q_m) \simeq R ({\bf q})/[ i q_m
- \omega_b( {\bf q }) +2\mu]$,
where $\omega_b ({\bf q}) \simeq - E_b + |{\bf q}|^2/4m$.
The partition function then may be written in the following form:
\begin{equation}
Z=Z_0 \int d\bar\phi d\phi \exp\left\{ \sum_{{\bf q}, iq_l }\bar\phi_q
(iq_l -\omega_b({\bf q}) + 2 \mu) \phi_q \right\}.
\end{equation}
Correspondingly the strong-coupling
number equation reads:
\begin{equation}
n=n_0+\sum_{\bf q}n_B[\omega_b({\bf q} ) - 2 \mu].
\end{equation}
Neglecting $n_0$, this yields the result (\ref{bose}) \cite{N,R}.
\footnote{The above approximation of \cite{N,R}, in which the Gaussian
corrections are retained only in the number equation, renders the correct limiting result
(\ref{bose}); however, as observed by the authors of
\cite{N,R}, in the intermediate-coupling regime it gives
an artificial maximum in $T_c$ as a function of the coupling strength.
This artifact is removed in higher approximations \cite{H}.}
\subsection{Nonperturbative nonlinear sigma-model
(NLSM) approach to the BCS-BEC crossover
in superconductors}
The crossover in the BCS model discussed above
was recently studied in detail perturbatively
in a variety of approximations
(see for example \cite{H,Tch}).
Qualitatively, the essential features of this crossover can be reproduced
in another simple model system - by
deriving an effective nonlinear sigma
model (i.e. 3D XY-model) \cite{sc}. Moreover, in the same framework of the
nonlinear sigma model
one can also study the analogous crossover in a 2D superconductor
\cite{Dr,Em,sh,sc},
which cannot be addressed with the perturbative method
discussed in the previous section
due to the absence of long-range order in 2D.
In two dimensions $T_c$ is identified with the temperature of the
Kosterlitz-Thouless transition $T_{KT}$ in the 2D XY-model.
In the strong-coupling (or low carrier density) regime $T_{KT}$
lies significantly below the temperature
of pair formation \cite{Dr,Em,sh,sc}.
\comment{
can be relatively easily studied perturbatively in the limiting cases
of strong and weak coupling strength even though
in the above approximation one can hardly be addressed
analytically in the entire crossover region.
The reasonable question is whether there is any reason
to employ nonperturbative nonlinear sigma model
approach that is discussed below.
In fact nonlinear sigma
model approach is essentially more simple, it can
be much easier addressed analytically for
arbitrary coupling strength and carrier density, it is
appropriate for description of this phenomena
in three as well as two dimensional systems and
as will be discussed below it provides a link
between continual and lattice models of BCS-BEC crossover.}
As we have shown in \cite{sc}, many essential features known
from numerical studies of strong-coupling
and low-carrier-density superconductors
are reproduced with very good accuracy
within the $3D XY$-model approach.
In particular, in the framework of the
NLSM approach there is no artificial maximum
in $T_c$ in the regime of
intermediate couplings (which appears in the
approximation discussed in the previous section).
\comment{
NLSM approach provides also a link between BCS-BEC crossover
in superconductors and analogous phenomenon
in the chiral Gross-Neveu model in $2+\epsilon$ dimensions
at zero temperature that is discussed in the next section.}
\subsubsection{XY-model approach to 3D superconductors}
Let us now reproduce the results of the previous subsection in the
framework of the NLSM approach proposed by the author \cite{sc}.
The properties of the BCS-BEC crossover are studied
most transparently in ``modulus-phase'' variables.
Following Witten \cite{W}, we can
write the partition function
as a functional integral over the
modulus and phase of the Hubbard-Stratonovich field $\Delta \exp( i \varphi) $:
\begin{equation}
Z(\mu, T) = \int {\cal D} \Delta\,
{\cal D} \varphi \exp{[-\beta \Omega (T, \Delta(x), \partial \varphi
(x))]}.
\label{bk1}
\end{equation}
Assuming that phase fluctuations do not affect
the local {\it modulus}
of the complex Hubbard-Stratonovich field, we can
write the thermodynamic potential as a sum of
``potential'' and ``kinetic'' (gradient) terms:
\begin{eqnarray}
\label{ek8}
\Omega (\Delta(x), \partial \varphi(x)) \simeq
\Omega _{\rm grad} (T, \Delta, \partial \varphi(x)) +
\Omega _{\rm pot} (T, \Delta) =
\int d^3 x
\frac{J(T, \Delta)}{2} (\nabla \varphi)^{2}
+
\Omega _{\rm pot} (T, \Delta).
\end{eqnarray}
Obviously, in the
above expression the effective potential $ \Omega _{\rm pot} (T, \Delta) $
coincides
with the ordinary mean-field effective potential.
The gradient term
$\Omega _{\rm grad} (T, \Delta, \partial \varphi(x)) $
coincides with the Hamiltonian of the 3D XY model with stiffness $J(\Delta, T)$.
Let us reproduce the low-temperature
expression for the phase stiffness in the strong-coupling regime
from \cite{sc}:
\begin{equation}
J=\frac{n}{4m} - \frac{3 \sqrt{2 \pi m}}{16 \pi^2} T^{3/2}
\exp\left[-\frac{\sqrt{\mu^2+\Delta^2}}{T}
\right].
\label{@stiff@}
\end{equation}
where $n$ and $m$ are the density and mass of the fermions.
We see that in this regime the thermal corrections to the first
term are exponentially suppressed, and the r.h.s.
quickly tends in this limit to
\begin{equation}
J_{BE}=\frac{n}{4m}.
\end{equation}
The form of this expression is not surprising: at
sufficiently strong coupling
all fermions are bound into pairs, and the stiffness
becomes equal to the low-temperature
phase stiffness of a Bose gas
of density $n/2$ and boson
mass $2m$. Obviously,
all information about the internal structure of the composite
bosons has disappeared from this expression in this
approximation,
since at low temperature in this regime there
are no thermal pair-decomposition effects
\footnote{In \cite{sc} we employed a
finite-temperature generalization of the gradient expansion at $T=0$
discussed in \cite{ash}.}.
Thus we see that the
low-temperature expression
for the stiffness of the phase fluctuations reaches a plateau
value with increasing coupling strength,
whereas the temperature of
thermal pair decomposition
is a monotonically growing function of the
coupling strength.
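For orientation, Eq.~(\ref{@stiff@}) is straightforward to evaluate numerically. The sketch below (units $\hbar = k_B = 1$; the function name is ours) shows that the thermal correction to $n/4m$ is exponentially negligible once $\sqrt{\mu^2+\Delta^2} \gg T$:

```python
import numpy as np

def stiffness_strong(T, n, m, mu, delta):
    """Low-T phase stiffness in the strong-coupling regime, Eq. (@stiff@):
    J = n/(4m) - (3 sqrt(2 pi m)/16 pi^2) T^{3/2} exp(-sqrt(mu^2+delta^2)/T)."""
    thermal = (3.0 * np.sqrt(2.0 * np.pi * m) / (16.0 * np.pi ** 2)) \
        * T ** 1.5 * np.exp(-np.sqrt(mu ** 2 + delta ** 2) / T)
    return n / (4.0 * m) - thermal
```

For example, with $n=m=1$, $\mu=-2$, $\Delta=1$ and $T=0.1$ the stiffness is indistinguishable from $J_{BE}=n/4m$, while at higher $T$ it is visibly reduced.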
In principle, knowledge of the lowest gradient term
governing Gaussian fluctuations is not sufficient
to locate the phase-decoherence transition
in a system with preformed pairs.
In the continuum, the 3D XY model is a free field theory and there
is no phase transition. The phase transition of the 3D XY model
has been studied in great detail on a lattice, so we can consider
a lattice model of the BCS-BEC crossover and
verify a posteriori to what extent the lattice model
reproduces the features of this crossover in the continuum model.
Many aspects of the relation of the 3D XY model
to Bose condensation, in particular the derivation
of the Gross-Pitaevskii equation near $T_c$,
can be found in \cite{GFCM}.
So, let us consider the theory (\ref{ek8}) on a lattice
with spacing $a=1/n_{pair}^{1/3}$, where $n_{pair}$ is the
concentration of Cooper pairs. This model
describes the condensation of hard-core bosons on the
lattice and is a special case of the Bose-Hubbard model
\footnote{In the ordinary attractive Hubbard model considered
in \cite{N} the critical temperature is a nonmonotonic function
of the coupling strength: $T_c$ decreases in
the strong-coupling limit because in that model the composite
bosons move via virtual ionization.}.
The critical temperature of the phase transition of the
3D XY model can be obtained with a simple mean-field
estimate \cite{GFCM} \footnote{We should
stress that estimating the critical
temperature of the phase transition of the effective 3D XY model
with mean-field methods has nothing to do with the BCS mean-field
approximation, since the derivation of the phase-stiffness
coefficient (\ref{@stiff@})
required studying the Gaussian fluctuations in the BCS model
\cite{sc}; thus
this may be regarded as an approximation of the same level
as that considered in the previous section.}:
\begin{equation} \label{tc3d}
T_c^{3D XY} \approx 3 J a
\end{equation}
where $a=n_{pair}^{-1/3}$ is the lattice spacing.
In contrast to the ordinary $3D XY$-model, in order to find the
temperature of the phase transition we should solve
the system of equations (\ref{1.2}), (\ref{1.5}), (\ref{@stiff@}) and
(\ref{tc3d}). This system can, however, be solved
analytically
in the strong-coupling limit; the result is \cite{sc}:
\begin{equation}
T_c=
\frac{3}{2m} \left[ \left(\frac{n}{2}\right)^{2/3}-
\frac{1}{n^{1/3} } \frac{1}{2^{ 7/6}\pi^{3/2}} T_c^{3/2} m^{3/2}
\exp\left( -\frac{\sqrt{\mu^2+\Delta^2}}{T_c}
\right)
\right]
\label{chaoslabs}
\end{equation}
With increasing
coupling strength,
this quickly tends from below to the value (compare with (\ref{bose})):
\begin{equation}
T_c = \frac{3 n^{2/3}}{2^{5/3} m} =
\frac{3}{(6 \pi^2)^{2/3}}\epsilon_F \approx 0.2 \epsilon_F.
\label{tcsig}
\end{equation}
where the constant $\epsilon_F$
is the Fermi energy of a free Fermi gas of density $n$ and fermion mass $m$.
We observe that in the nonperturbative NLSM
approach $T_c$ approaches the plateau value (\ref{tcsig}),
which depends only on the mass $2m$ and density $n/2$
of the composite bosons,
from below. This is in agreement
with numerical studies in higher approximations
\cite{H},
whereas in the approach presented in the previous
subsection $T_c$ has an artificial maximum at intermediate
coupling strength, thus approaching the limiting value from above.
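The plateau value (\ref{tcsig}) follows directly from $T_c \approx 3 J a$ with $J = n/4m$ and $a = (n/2)^{-1/3}$. The sketch below (function name ours, $\hbar = k_B = 1$) checks that this product indeed equals $3/(6\pi^2)^{2/3}\,\epsilon_F \approx 0.2\,\epsilon_F$:

```python
import numpy as np

def nlsm_tc_over_ef():
    """Plateau T_c/eps_F from T_c ~ 3 J a, Eqs. (tc3d) and (tcsig)."""
    n, m = 1.0, 1.0                      # arbitrary units; the ratio is dimensionless
    J = n / (4.0 * m)                    # strong-coupling stiffness J_BE
    a = (n / 2.0) ** (-1.0 / 3.0)        # lattice spacing, n_pair = n/2
    ef = (3.0 * np.pi ** 2 * n) ** (2.0 / 3.0) / (2.0 * m)
    return 3.0 * J * a / ef
```

Note that the result, $\approx 0.197$, is close to, and slightly below, the value $0.218$ of Eq.~(\ref{bose}).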
Another circumstance, discussed below, is that the
NLSM approach also gives qualitatively correct results in the
opposite limit of weak coupling strength \cite{sc}.
In the weak-coupling limit near $T_c$, the stiffness coefficient
may be derived with the help of Gorkov's well-known
method:
\begin{equation}
J_{\rm BCS}=\frac{7}{48 \pi^4} \zeta(3) \frac{p_F^3}{m} \frac{\Delta^2}{T^*{}^2}.
\label{@stiff}\end{equation}
This is precisely the coefficient
of the gradient term in the Ginzburg-Landau expansion.
In the weak-coupling limit the two temperatures of
the onset of pairing correlations and the
onset of phase coherence $T^*$ and $T_c$
merge according to the formula \cite{sc}:
\begin{equation}
{T_c }= {T^* } - \frac{(2 \pi^2)^{2/3}}{2}
\frac{T^*{}^{5/2}}{\epsilon_F^{3/2}} \rightarrow T^*,
\end{equation}
from which one can see \cite{sc} that in the weak-coupling
limit the temperature of the phase transition of the
effective XY-model tends from below to
the characteristic temperature of the disappearance of the
effective potential and merges with it
for infinitesimally weak coupling strength.
Thus we arrive at a kind of a posteriori verification
of BCS behavior in this limit in the model
of hard-core composite bosons on the lattice (i.e., in this limit,
if a nonzero modulus of the complex gap function
$ \Delta e^{i \varphi (x)}$
appears at some temperature, phase coherence is established
at the same temperature and the
continuous symmetry is broken). At weak and moderate
coupling strength the
disappearance of superconductivity is a competition between two
processes: pair breaking, which is the thermal excitation
of individual particles, and decoherence, which
is the thermal excitation
of collective modes.
Let us now summarize the results that follow from the NLSM
consideration. In this model, in the strong-coupling or
low-carrier-density regimes, the system possesses three
phases:
\begin{enumerate}
\item Superconductive phase ($T < T_c^{3D XY}$).
\item Pseudogap phase ($T_c^{3D XY} < T < T^*$) -
the phase with a local gap modulus $\Delta$,
which signals the existence of tightly bound (but noncondensed)
fermion pairs
whose phase is random, so that the average of the complex gap is
zero ($\langle|\Delta| \exp( i \phi)\rangle = 0$). In this phase
there is thus no superconductive gap and hence no
symmetry breakdown.
\item Normal phase ($T>T^*$) the phase with
thermally decomposed Cooper pairs.
\end{enumerate}
\subsubsection{ XY-model approach to 2D superconductors}
In two dimensions there is no proper long-range order,
and the superconductive phase transition is associated
with a Kosterlitz-Thouless transition. In order to study
this transition it is sufficient to extract the lowest gradient term,
which determines the temperature of the phase transition according to the formula
\cite{Dr,Em,sh,sc} \footnote{In principle there is no
KT phase transition in a charged system due to the Meissner effect; however,
coupling to the electromagnetic field
is always neglected in discussions of 2D superconductors, since
experimentally the in-plane penetration depth in high-$T_c$
materials is much larger
than the coherence length.}:
\begin{equation}
T_{\rm KT}=\frac{\pi}{2} J(\mu, T_{\rm KT}, \Delta(\mu, T_{\rm KT})).
\label{e1}
\end{equation}
Just as in the 3D case discussed above, this equation should
be solved self-consistently with the equations for the gap modulus
and the chemical potential, (\ref{1.2}) and (\ref{1.5}).
The result in the strong-coupling limit is \cite{sc}:
\begin{eqnarray}
T_{\rm KT} \simeq
\frac{\pi}{8} \frac{n}{m}
\left\{
1 - \frac{1}{8} \exp\left[
\frac{2\mu}{\epsilon_F} -4
\right]
\right\}.
\label{e32}
\end{eqnarray}
Thus, with increasing coupling strength,
the phase-decoherence temperature $T_{\rm KT}$
tends very quickly towards a constant value \cite{Dr,Em,sh,sc}
corresponding
to the KT transition in a system of bosons with density $n/2$
and mass $2m$ (whereas the characteristic temperature
of thermal pair decomposition continues to grow monotonically
with the coupling strength):
\begin{equation}
T_{KT} = \frac{\pi}{8} \frac{n}{m}.
\label{e302}
\end{equation}
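As a quick consistency check (not in the original text): using the standard 2D relation $n = k_F^2/2\pi$ for two spin projections, so that $\epsilon_F = \pi n/m$, the plateau (\ref{e302}) corresponds to $T_{KT} = \epsilon_F/8$. A minimal sketch (function name ours, $\hbar = k_B = 1$):

```python
import numpy as np

def tkt_over_ef_2d():
    """Strong-coupling plateau T_KT/eps_F in 2D, Eq. (e302)."""
    n, m = 1.0, 1.0                # arbitrary units; the ratio is dimensionless
    t_kt = np.pi * n / (8.0 * m)   # T_KT = (pi/8) n/m
    ef = np.pi * n / m             # 2D: n = k_F^2/(2 pi)  =>  eps_F = pi n/m
    return t_kt / ef
```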
So, this phenomenon in 2D is qualitatively similar
to the 3D case discussed above.
\vskip 1cm
There is no superconductivity in the pseudogap phase;
however, it
exhibits rich exotic non-Fermi-liquid behavior due to local
pairing correlations, which makes it as
interesting an object of theoretical and
experimental study as the superconductive phase itself.
In particular,
along with specific heat, optical conductivity
and tunneling experiments, the following
observations have been made in the pseudogap phase:
In experiments on YBCO a significant suppression of the
in-plane conductivity $\sigma_{ab}(\omega)$
was observed at frequencies below 500 ${\rm cm}^{-1}$, beginning
at temperatures well above $T_c$.
Experiments on underdoped samples revealed
deviations from the linear resistivity law; in particular,
$\sigma_{ab}(\omega=0;T)$
increases slightly with decreasing $T$
below a certain temperature.
NMR and neutron observations
show that below a temperature $T^*$ much higher than $T_c$
the spin susceptibility starts to decrease.
In conclusion,
let us once more emphasize the essential features of this phenomenon
in superconductivity:
\begin{itemize}
\item Away from the very special limit of infinitesimally
weak coupling strength and high carrier density, superconductors
are characterized by two temperatures, $T_c$ and $T^*$ ($\gg T_c$).
$T^*$ is the characteristic
temperature below which
pair correlations become important (or, in the regime
of strong interaction, the characteristic
temperature of the formation of
real bound pairs). $T_c$ corresponds
to the onset of phase coherence
in a system of preformed fermion pairs. The region
of non-Fermi-liquid behavior
between $T_c$ and $T^*$ is called {\it the pseudogap phase};
however, the term ``pseudogap'', which originated in early experimental
papers, may seem somewhat misleading since, even though
a substantial depletion of low-frequency spectral weight is
observed in this region experimentally, there is
{\it no superconductive gap in the spectrum}.
\item One should note that there is {\it no
proper phase transition at $T^*$}, which
is simply a characteristic temperature of the thermal
decomposition of a certain fraction of noncondensed Cooper
pairs. Even though the position of this temperature
may be reasonably estimated with mean-field methods,
a second-order phase transition at $T^*$ is certainly an
artifact of the approximation discussed above. Specific-heat
experiments do, however, indicate certain features at this characteristic
temperature.
\end{itemize}
In what follows we discuss possible implications of these
results for QCD, which may possess
a phase
analogous to the pseudogap phase in strong-coupling superconductors.
The simplest model related to particle physics
that displays pseudogap behavior of dynamical
origin is the chiral Gross-Neveu model at low $N$, which is discussed in the next
section in $2+ \epsilon$ dimensions.
\section{Pseudogap phase in Chiral Gross-Neveu model
in $2 + \epsilon$ dimensions at low N}
Let us now discuss a phenomenon similar
to the pseudogap in a simple
field-theoretic model -
the chiral version of the Gross-Neveu model \cite{GNM},
whose Lagrange density is
\begin{eqnarray} \label{8.67b}
{\cal L} = \bar\psi_a i\sla{\partial}
\psi _a + \frac{g_0}{2N}
\left[
\left( \bar\psi _a \psi _a\right) ^2
+\left( \bar\psi _a i \gamma_5 \psi _a\right) ^2
\right] .
\end{eqnarray}
where the index $a$ runs from $1$ to $N$.
The appearance of the pseudogap phase in this model
has roots quite similar to the phenomenon
in strong-coupling superconductors discussed above.
In superconductors, the pseudogap phase appears
on the phase diagram in the region
away from the limits of
infinitesimally weak coupling strength or extremely high
carrier density - i.e., in the regime where the
BCS mean-field treatment is no longer valid.
The chiral Gross-Neveu model can be treated,
in the limit of an infinite number of field components $N$,
in a mean-field framework quite similar to BCS theory. Transparently, in
the mean-field approximation one finds only one phase transition,
at a certain value of the coupling strength, similar to the BCS
phase transition. However, at low $N$ the system starts
to perform dynamic chiral fluctuations which,
as we have shown in \cite{gn1}, give rise to
a second, phase-disordering, transition.
So at low $N$ the model possesses two
transitions, at two characteristic values of the
renormalized coupling constant.
Let us reproduce this result.
One can write the collective field action for this model as:
\begin{eqnarray} \label{8.74b}
{\cal A}_{\rm coll} [\sigma ] = {N} \left\{ -
\frac{1}{2g_0} (\sigma ^2+\pi^2) - i \mbox{Tr\,} \log
\left[ i \sla{\partial} - \sigma (x)-i \gamma_5\pi\right]
\right\}.
\end{eqnarray}
This expression is invariant under the continuous set of chiral O(2)
transformations which rotate $ \sigma$ and $ \pi$ fields into each other.
This model is equivalent to
another one:
\begin{eqnarray}
{\cal L} = \bar{\psi}_a i \sla\partial
\psi _a + \frac{g_0}{2N} \left( \bar \psi _a C
\bar\psi _a^T\right) \left( \psi _b^TC\psi _b\right).
%
\label{8.143}\end{eqnarray}
Here $C$ is the
matrix of charge conjugation which is defined by
%
\begin{eqnarray}
C\gamma ^\mu C^{-1} = -\gamma ^{\mu T}.
\label{8.144}\end{eqnarray}
In two dimensions, we choose the $ \gamma$-matrices
as
%
$\gamma ^0 =
\sigma^1,
~\gamma ^1 =
-i\sigma ^2$,
and $C=\gamma ^1.$
%
The second model goes over into the first by replacing
$
\psi \rightarrow \frac{1}{2}(1-\gamma _5) \psi + \frac{1}{2}(1+\gamma _5)
C\bar\psi ^T,
$ where superscript T denotes transposition.
In the Lagrange density (\ref{8.143}) we introduce a complex
collective field by adding
a term $
(\lfrac{{N}}{2g_0} )\left| \Delta - \frac{g_0}{{N}}
\psi ^T_b C \psi _b\right| ^2,$ leading to the partition function
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!
\!\!\!\!\! Z[\eta,\bar\eta] = \int {\cal D} \psi {\cal D} \bar\psi
{\cal D} \Delta {\cal D} \Delta^{\scriptsize \dagger}
\exp\left\{ i \int d^Dx \left[ \bar\psi _a i\sla{\partial }
\psi _a + \frac{1}{2} \left( \Delta ^{\scriptsize \dagger} \psi _a^T
C \psi _a + {\rm c.c.}\right) + \bar\psi \eta + \bar\eta
\psi - \frac{N}{2g_0} | \Delta |^2\right] \right\} .
\label{8.48}\end{eqnarray}
The relation with the previous collective fields
$ \sigma$ and $\pi$ is $ \Delta= \sigma+i \pi$.
\comment{
In order to integrate out the Fermi fields we rewrite the free part of
Lagrange density in the matrix form
\begin{eqnarray}
\frac{1}{2} \left( \psi ^T C,\bar\psi \right)
\left( %
\begin{array}{cc}
0 & i\sla\partial \\{}
i\sla\partial & 0
\end{array} \right)
\left(
%
\begin{array}{c}
\psi \\{}
C \bar\psi ^T
\end{array}\right)
\label{8.149}\end{eqnarray}
which is the same as $\bar \psi i \sla\partial
\psi $, since
%
$ \psi ^T CC\bar\psi ^T =
\bar\psi \psi
,~
\psi ^T C\!\! \stackrel{\leftrightarrow }{\sla\partial }\!\! C \bar\psi ^T
= \bar\psi\!\! \stackrel{\leftrightarrow }{\sla\partial }\!\! \psi$.
But then the interaction with $\Delta $ can be combined with
(\ref{8.149}) in the form
$\frac{1}{2} \phi ^T_i G_\Delta ^{-1} \phi$,
where
%
\begin{eqnarray}
\phi = \left( %
\begin{array}{c}
\psi \\{}
C\bar\psi ^T
\end{array}\right)
,~~ \phi ^T = \left( \psi ^T, \bar\psi C^{-1}\right)
\label{8.152}\end{eqnarray}
are doubled fermion fields, and
\begin{eqnarray}
iG_\Delta ^{-1} = \left(
%
\begin{array}{cc}
C & 0 \\{}
0 & C
\end{array} \right)
\left( %
\begin{array}{cc}
\Delta & i \sla\partial \\{}
i\sla\partial & \Delta ^{\scriptsize \dagger}
\end{array} \right) = -\left( iG_\Delta ^{-1}\right) ^T
\label{8.153}\end{eqnarray}
is the inverse propagator in the presence of the external field $\Delta $.
Now we perform the functional integral over the fermion fields,
and obtain
\begin{eqnarray}
Z[j] = \int {\cal D}\Delta {\cal D} \Delta ^{\scriptsize \dagger}
e^{{iN}{\cal A} [\Delta ] + \frac{1}{2} j_a^T
G_\Delta j_a},
\label{8.155}\end{eqnarray}
where ${\cal A}[\Delta ]$ is the collective action
\begin{eqnarray}
{\cal A}[\Delta ] = - \frac{1}{2} |\Delta |^2 - \frac{i}{2}
\mbox{Tr\,} \log i G_\Delta ^{-1}
\label{8.156}\end{eqnarray}
and $j_a$
is the doubled version of the external source
\begin{eqnarray}
j = \left(
%
\begin{array}{c}
\bar\eta^T \\{}
C^{-1}\eta
\end{array} \right) .
\label{8.157}\end{eqnarray}
This is chosen so that
$
\bar\psi \eta + \bar\eta \psi = \frac{1}{2}
\left(j^T \phi - \phi ^T j\right)$.
In the limit $N \rightarrow {\infty} $, we obtain
from (\ref{8.155})
the effective
action
%
\begin{eqnarray}
{\frac{1}{{N}} \Gamma [\Delta , \Psi ] =}
\frac{1}{2g_0} |\Delta| ^2 - \frac{i}{2} \mbox{Tr\,} \log
i G_\Delta ^{-1} + \frac{1}{{N}} \bar\Psi_a
i G_\Delta ^{-1} \Psi _a
\label{8.158}\end{eqnarray}
in the same way as in the last chapter
for the simpler model with a real $ \sigma $-field.
The ground state has $\Psi = 0$, so that the minimum
of the effective action
implies for $\Delta_ 0$ either $ \Delta_0=0$ or
the gap equation
\begin{eqnarray}
1 = \frac{{g_0}}{2} \mbox{Tr\,} G_{\Delta_0},
\label{8.159}\end{eqnarray}
where we may assume $\Delta_0$ to be real.
With the Green function
\begin{eqnarray}
G_{\Delta_0 } (x,y) = \int
\frac{d^Dp}{(2\pi )^D} e^{-ip(x-y)} \frac{i}{p^2- \Delta _0}
\left(
%
\begin{array}{cc}
\Delta _{0} & \sla p \\{}
\sla p & -\Delta _0
\end{array} \right)
\left(
%
\begin{array}{cc}
C^{-1} & 0 \\{}
0 & C^{-1}
\end{array}
\right) ,
\label{8.160}\end{eqnarray}
the gap equation (\ref{8.159}) takes the
same form
as
(\ref{@gapeq}):
%
}
Following the BCS procedure, we can fix the phase of the
order parameter; then, for a constant $\Delta$, the effective
action gives rise
to an effective potential that in $2+\epsilon$ dimensions
reads:
\comment{
\begin{eqnarray} \label{8.81}
&&{ \frac{1}{{N}} v(\Delta )
= - \frac{1}{{N}} \Gamma [\Delta ]
= }
\frac{1}{2g_0} \Delta^2 - \mbox{tr\,} (1) \frac{1}{2}
\int \frac{d^D p_E}{(2\pi )^D} \log
\left[ p^2_E + \Delta^2\right].
\end{eqnarray}
Performing the integral yields in $D=2+ \epsilon$ dimensions
with $ \epsilon>0$}
\begin{eqnarray} \label{8.84}
\frac{1}{N} v(\Delta )= \frac{\mu ^\epsilon }{2 }
\left[ \frac{\Delta^2}{g_0\mu ^\epsilon }
- b_\epsilon \left( \frac{\Delta }{\mu }
\right) ^{2 + \epsilon } \mu ^2\right],
\end{eqnarray}
where
$\mu $
is an arbitrary mass scale, and
the constant $b_\epsilon $ stands for
\begin{eqnarray} \label{8.85}
b_\epsilon = \frac{2}{D} 2^{\epsilon /2} ~\bar{}\!\!S_D
\Gamma (D/2) \Gamma (1 - D/2) = \frac{2}{D} \frac{1}{(2\pi )^{D/2}}
\Gamma (1- D/2) ,
\end{eqnarray}
which has an $ \epsilon$-expansion
$b_\epsilon \sim -
\left[ 1 - (\lfrac{\epsilon }{2}) \log \left( 2\pi e^{-\gamma}
\right) \right]/\pi \epsilon + {\cal O}(\epsilon ).$
A renormalized coupling constant $g$ may be introduced
by the equation
\begin{eqnarray} \label{8.87}
\frac{1}{g_0 \mu ^\epsilon } - b_\epsilon \equiv \frac{1}{g},
\end{eqnarray}
so that
\begin{eqnarray} \label{8.89}
\frac{1}{{N}} v(\Delta ) = \frac{\mu ^\epsilon }{2 }
\left\{ \frac{\Delta^2}{g} + b_\epsilon \Delta^2
\left[ 1 - \left( \frac{\Delta }{\mu }\right) ^\epsilon
\right] \right\}.
\end{eqnarray}
Extremizing this we obtain
either $ \Delta_0=0$ or
a nonzero $ \Delta_0$ that
solves the gap equation
\begin{eqnarray}
{1} =g_0 \,\mbox{tr\,} (1) \int
\frac{d^Dp}{(2\pi )^2} \frac{1}{p^2+\Delta_0^2},
\label{8.161}\end{eqnarray}
in the following form:
\begin{eqnarray} \label{8.90}
1-\frac{g^*}{g} = \frac{D}{2} \left(
\frac{\Delta_0 }{\mu } \right) ^\epsilon,
\end{eqnarray}
where $g^*=-1/b_ \epsilon\approx \pi \epsilon$.
In the limit $N \rightarrow \infty$ this result is exact.
In the opposite limit of low $N$, however, the system
starts to fluctuate around the saddle-point solution,
and in order to describe it properly one should
go beyond the mean-field approximation and study the
propagator of the $\theta$-field, where $\theta$ is
the phase of the order parameter.
Let us consider first the case $ \epsilon=0$,
where the collective field theory
consists of a complex field $ \Delta$
with O($2$)-symmetry, $ \Delta = |\Delta| e^{i\theta}$.
Such a system possesses macroscopic excitations
in the form of vortices and antivortices that
attract each other through a logarithmic
Coulomb potential.
It is known \cite{kt} that in such a
field theory involving a pure phase field $\theta(x)$,
with a Lagrange density
\begin{equation}
{\cal L}=\frac{ \beta}{2}[\partial \theta(x)]^2,
\label{@modelld}\end{equation}
where $ \beta$ is the stiffness of the $\theta$-fluctuations,
there is a Kosterlitz-Thouless transition when the stiffness falls below $
\beta_{\rm KT}={2/\pi}$ \cite{kt}.
Let us return to the Gross-Neveu model. Expanding
around the saddle-point solution, one finds the
propagator of the $\theta$-field at $\epsilon =0$ \cite{gn1,W}:
\begin{eqnarray}
G_{\theta\theta}
& \approx &
\frac{i}{N}\frac{4\pi}{q^2}+{\rm regular~ terms}.
\label{8.190xx}\end{eqnarray}
Comparing this
with the
propagator
for the model Lagrange density (\ref{@modelld})
\begin{equation}
G_{\theta\theta}
=
\frac{1}{ \beta} \frac{i}{q^2}
\label{@propcpomp}\end{equation}
we identify the stiffness $ \beta=N/4\pi$.
The pair version of the chiral Gross-Neveu model
therefore has a vortex-antivortex pair-breaking transition
if $N$ falls below the critical value
$ N_c=8$ \cite{gn1}.
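The critical value follows from matching the stiffness $\beta = N/4\pi$ to the universal KT threshold $\beta_{\rm KT}=2/\pi$; a minimal numerical sketch of this matching:

```python
import math

beta_KT = 2.0 / math.pi        # universal KT critical stiffness in 2D

def gn_stiffness(N):
    # phase stiffness of the chiral GN model, beta = N/(4*pi), as quoted
    return N / (4.0 * math.pi)

# Solve gn_stiffness(N_c) = beta_KT for the critical flavor number:
N_c = 4.0 * math.pi * beta_KT  # equals 8
```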
Consider now the model
in $2+ \epsilon $
dimensions,
where pairs form
when the renormalized coupling constant exceeds
the critical value $g=g^*\approx \pi \epsilon$.
In this case the expression
for the stiffness of phase fluctuations reads \cite{gn1}
\begin{equation}
\beta
=\frac{N}{4\pi} \left(1 -\frac{g^*}{g}\right).
\label{@stiffn}\end{equation}
This implies a KT transition in the neighborhood of two
dimensions\footnote{There is a misleading statement about the 3D
case in \cite{gn1}.} at:
\begin{equation}
N_c\approx 8
\left(1 -\frac{g^*}{g}\right)^{-1}.
\label{@}\end{equation}
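A brief numerical illustration (values chosen arbitrarily) of how the critical line interpolates between a divergence at $g=g^*$ and the two-dimensional value $N_c=8$ at strong coupling:

```python
import math

# N_c(g) = 8 / (1 - g*/g) with g* = pi*eps, as quoted in the text.
def N_c(g, eps):
    g_star = math.pi * eps
    return 8.0 / (1.0 - g_star / g)
```

At $g = 2g^*$ the critical flavor number is doubled to 16, while for $g \gg g^*$ it approaches 8 from above.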
The resulting phase diagram is shown in
Fig.~\ref{ncofg.tps}.
\begin{figure}[tb]
~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~
\input Ncofg.tps
\caption[]{The two transition lines in the $N$--$g$ plane
of the chiral Gross-Neveu model in $2+ \epsilon$ dimensions. In order
to stress the difference between the local gap (i.e. the ``pseudogap")
and the order parameter analogous to the ``superconductive" gap, we
denote by $M$ the {\it modulus} of the order parameter ($M=|\Delta_0|$).
In this model $M$ plays the role of the ``quark" mass.
For $ \epsilon=0$, the vertical transition line coincides with the $N$-axis,
and the solid hyperbola degenerates into a horizontal line at $N_c=8$.
In the limit $N \rightarrow \infty$ the generation of the quark mass
occurs simultaneously with the ``phase ordering" transition.}
\label{ncofg.tps}\end{figure}
In the chiral formulation of the same model in $2+\epsilon$ dimensions,
the ``pseudogap" phase has
chiral
symmetry
in spite of a nonzero spontaneously generated ``quark mass" $M=|\Delta_0| \neq 0$.
This phase is directly related to the pseudogap
phase of a strong-coupling superconductor, where
Cooper pairs exist but there is no symmetry breakdown
due to violent phase fluctuations.
The reason why this is possible is that
the ``quark mass"
depends only on $| \Delta_0|$,
thus allowing for arbitrary phase fluctuations
preserving chiral symmetry.
It is very easy to see that the solid hyperbola in
Fig.~\ref{ncofg.tps}
is {\it not} simply the proper (albeit approximate)
continuation of the
vertical line for smaller $N$.
There are two simple arguments.
One is formal: For infinitesimal $ \epsilon$
the first transition lies precisely at $g=g^*=\pi \epsilon$
for {\em all\/} $N$,
so that the horizontal transition line is clearly distinguished
from it (the stiffness of the phase fluctuations in the regime
${g^*}/{g}\rightarrow 0$,
just as in a superconductor, reaches a plateau
value that depends neither on the coupling
strength nor on $\epsilon$). The other argument is physical
and also has a clear analogy in the corresponding phenomena
in superconductivity.
If $N$ is lowered at some very large $g$, the binding energy of the
pairs {\em increases with $1/N$\/}
(in two dimensions,
the binding energy
is
$4M\sin^2[\pi/2(N-1)]$).
It is then
impossible for the
phase fluctuations on the horizontal branch of the transition line,
which are low-energy excitations,
to unbind the strongly bound pairs.
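Reading the bracketing of the quoted binding energy as $\pi/[2(N-1)]$ (an assumption, chosen because it is the reading consistent with the stated growth with $1/N$), one can check the monotonicity numerically:

```python
import math

# Binding energy 4*M*sin^2(pi/(2*(N-1))) in two dimensions;
# bracketing of the argument is an assumed reading of the text.
def binding_energy(N, M=1.0):
    return 4.0 * M * math.sin(math.pi / (2.0 * (N - 1))) ** 2
```

For $M=1$ this gives exactly 2 at $N=3$ and decreases monotonically as $N$ grows, i.e. the pairs are most strongly bound at small $N$.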
The accuracy of the ``BCS" scenario in the limit $N \rightarrow
\infty$ is clearly seen from the form of the phase stiffness: its
factor of $N$ ``freezes" the phase fluctuations in this limit,
so that all the physics is essentially governed by the size of the
gap modulus.
The $2+1$-dimensional Chiral Gross-Neveu model \cite{park}
also exhibits an analogous behavior at finite temperature \cite{gn2}
where a similar effect is governed by thermal fluctuations.
At finite $N$ the temperature of the KT transition deviates from
the mean-field temperature of gap-modulus formation;
however, in $D=2+1$ the phase diagram is substantially
different from the phase diagram of the same model
in $D=2+\epsilon$ at $T=0$ (see the detailed
discussion in \cite{gn2})\footnote{It should be emphasized that the
existence of the
pseudogap phase in 2+1 dimensions due to thermal
fluctuations in the chiral Gross-Neveu model cannot be rigorously proven, in contrast
to the $2+\epsilon$-dimensional case discussed in this section
(where there are two small parameters, $\epsilon$ and $1/N$).
The 2+1-dimensional problem lacks a small parameter
that would allow one to estimate accurately the
position of $T^*$ at very small $N$;
however, it can be argued that rough low-$N$ estimates
show the appearance of a thermal-fluctuation-induced pseudogap phase
in this model, especially pronounced at $N\leq4$. }.
\section{Chiral fluctuations in the NJL model at zero temperature}
Recently an attempt was made
\cite{kb} to generalize to the NJL model
the nonlinear sigma-model (NLSM) approach to the description of chiral
fluctuations proposed in \cite{gn1,sc}.
The authors of \cite{kb}
claimed that at $N_c=3$ the NJL
model does not display spontaneous symmetry breakdown
due to chiral fluctuations.
We show below that the
NLSM approach does not allow
one to prove that chiral symmetry is
always restored
by fluctuations in the NJL model at $N_c=3$.
Below we also discuss differences from the chiral GN model,
where the NLSM approach
allows one to reach a similar conclusion at low $N$.
The Lagrangian of the NJL model reads \cite{NJLM}
\begin{equation}
\mbox{${\cal L}$}=\bar{\psi}
i\partial\!\!\!/
\psi+\f{g_0}{2N_c}\left[
\left(
\bar{\psi}\psi
\right)^2+\left(
\bar{\psi}\ld{a}i\gd{5}\psi
\right)^2
\right].
\label{NJLModel}
\end{equation}
The three $2\times2$
matrices $ \ld{a}/2$ generate the fundamental representation
of flavor $SU(2)$ and are normalized by
$\mbox{tr\,} (\ld{a}\ld{b})=2\delta_{ab}$.
One can introduce the Hubbard-Stratonovich fields
$ \sigma$ and $\pi_a$:
\begin{equation}
\mbox{${\cal L}$}=\bar{\psi}\left(
i\partial\!\!\!/-\sigma-i\gd{5}\ld{a}\pi_a
\right)
\psi-\f{N_c}{2g_0}\left(
\sigma^2+\pi_a^2
\right).
\label{hsnjl}
\end{equation}
After integrating out the quark fields and following
the standard mean-field variational procedure,
one finds that the pseudoscalar solution $\pi_a$ vanishes
while the scalar solution $\sigma\equiv M$ is given by the gap equation:
\begin{equation}
\f{1}{g_0}=i(\mbox{tr\,}_f 1)(\mbox{tr\,}_{\gamma} 1)\int
\f{d^Dp}{(2\pi)^D}\f{1}{p^2-M^2}
\label{gap1}
\end{equation}
The momentum integral is regularized by means of a cutoff $\Lambda$.
\comment{
Mean-field $N_c\rightarrow \infty$
treatment gives an effective action
\begin{equation}
\Gamma (\rho)=- \Omega [\Delta v( \rho )+v_0]
\label{EffectivePotential}
\end{equation}
where $\Omega$ is the spacetime volume, and $v_0$
is energy density of the symmetric state,
whereas
\begin{eqnarray}
&&\Delta v(\rho)=\f{N_c}{2}\Bigg\{
\f{1}{g_0}\rho^2-\f{2}{(2\pi)^2}\Bigg[
\f{\rho^2\Lambda^2}{2}
+\f{\Lambda^4}{2}\ln\left(
1+\f{\rho^2}{\Lambda^2}
\right)\nonumber\\
&&\mbox{}-\f{\rho^4}{2}\ln\left(
1+\f{\Lambda^2}{\rho^2}
\right)
\Bigg]
\Bigg\}
\label{lambdapotential}
\end{eqnarray}
the mean-field condensation energy at {\it constant}
$ \sigma ^2+\pi_a^2\equiv \rho ^2$.
The momentum integral is regularized by means of a cutoff
$\Lambda$.
The condensation energy is extremal
at
$ \rho = {M}$ which solves the {\em gap equation\/}
\begin{eqnarray}
\f{1}{g_0}&=&\f{2}{(2\pi)^2}
\left[
\Lambda^2- {M}^2\ln\left(
1+\f{\Lambda^2}{{M}^2}
\right)
\right].
\label{lambdagap}
\end{eqnarray} }
The constituent quark mass ${M}$
in the limit $N_c\rightarrow \infty$
is analogous to the superconductive gap
in the BCS limit of the theory of superconductivity.
\comment{
In the paper \cite{kb} following to our
previous considerations of sigma-model approach
for description of the symmetry breakdown in 3D
superconductors and Gross-Neveu model \cite{sc} \cite{gn1},
it was suggested that in order to account for dynamic
chiral fluctuation in NJL model at zero temperature one should
set up 4D O(4) sigma-model.
Authors of \cite{kb} came to conclusion
that resulting stiffness of the effective 4D O(4) sigma model
is too small and
thus effective sigma model is
always in disordered phase due to strong dynamic
chiral fluctuations in the regime when $N_c=3$.
Let consider a regime of finite number of $N_c$.
Then fields start to perform fluctuations
around the extremal value $( \sigma ,\pi_a)=( M,0)$.
We can expand action in small
deviations from mean-field solution.}
At finite $N_c$ one can study fluctuations
around the saddle-point solution.
The quadratic terms of the expansion
around the saddle point are:
\begin{eqnarray}
{\cal A}_0[\sigma',\pi'] = \f{1}{2}\!\int\!\!d^4q\!\left[
\left(
{ \pi'_a(q) \atop
\sigma'(q)}
\right)^T
\left(
{G_{\pi}^{-1} \ \ \ 0 \atop \
\ 0 \ \ \ \ \ G_{\sigma}^{-1}}
\right)\left(
{\pi'_a(-q)\atop
\sigma'(-q)}
\right)
\right],
\label{ao}\end{eqnarray}
where
$(\sigma',\pi'_a)\equiv(\sigma- {M},\pi_a)$
and $G_{\sigma,\pi}^{-1}$
are the inverse bosonic propagators.
\comment{
\begin{equation}
G_{\sigma}^{-1} =
N_c\left[ 2\times2^{D/2} \int
\f{d^4p_E}{(2\pi)^4}\f{(p_E^2+p_Eq_E - M^2)}
{(p_E^2 + M^2)[(p_E+q_E)^2 +M^2]}
- \f{1}{g_0}\right];
G_{\pi}^{-1} =
N_c\left[ 2\times2^{D/2} \int
\f{d^4p_E}{(2\pi)^4}\f{(p_E^2+p_Eq_E + M^2)}
{(p_E^2 + M^2)[(p_E+q_E)^2 +M^2]}
- \f{1}{g_0}\right].
\end{equation}
In the above expression one should introduce
a momentum cutoff $\Lambda_2$.} Implementing a momentum
cutoff $\Lambda$, we can write
$G_{\pi,\sigma}^{-1}$ for small $q_E$ as:
\begin{eqnarray}
\!\!\!\!\!\!G_{\pi}^{-1}\!\approx\!\!-\f{N_c}{(2\pi)^2}
\!\left[
\ln\left(
1\!+\!\f{\Lambda^2}{{M}^2}
\right)
\!-\!\f{\Lambda^2}{\Lambda^2\!+\!{M}^2}\right]
\!q_E^2\!\equiv\!\!-Z({M}/\Lambda)q_E^2; \ \ \ \ \ \ \ \ \
G_{ \sigma }^{-1}\!\approx\!\!
-Z({M}/\Lambda)(q_E^2+4{M}^2).
\label{SigPropStiffLambda}
\end{eqnarray}
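As a quick numerical check (added here, not part of the original text), the prefactor $Z(M/\Lambda)$ read off from the pion propagator above is positive and grows with the cutoff:

```python
import math

# Z(M/Lambda) = (N_c/(2*pi)^2) [ln(1 + Lam^2/M^2) - Lam^2/(Lam^2 + M^2)],
# as read off from the small-q pion propagator in the text.
def Z(M, Lam, N_c=3):
    return N_c / (2.0 * math.pi) ** 2 * (
        math.log(1.0 + Lam**2 / M**2) - Lam**2 / (Lam**2 + M**2))
```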
In analogy to the $3D \ XY$-model approach to
strong-coupling superconductivity \cite{sc},
the authors of \cite{kb} introduced
a unit vector field
$n_i\equiv (n_0,n_a)\equiv(\sigma,\pi_a)/ \rho $
and set up an effective nonlinear sigma-model
\begin{equation}
{\cal A}_0[n_i]= \f{\beta}{2}\int d^4x
[\partial n_i(x)]^2.
\label{@prop}
\end{equation}
The prefactor $\beta=M^2 Z(M/\Lambda)$,
which follows from Eqs. (\ref{ao}) and
(\ref{SigPropStiffLambda}),
plays the role of the stiffness
of the unit-field fluctuations.
Let us now observe
that the arguments given in
\cite{kb} do not imply that the NJL model necessarily
remains in a chirally symmetric phase
at $N_c=3$.
First, in contrast to the
$(2+\epsilon)$-dimensional case discussed in \cite{gn1},
one unfortunately cannot perform any similar calculation in
closed form in $3+1$ dimensions because the
theory is not renormalizable.
It was already observed in \cite{cut}-\cite{bub} that
the cutoff of meson loops cannot be set equal to the cutoff
for quark loops and thus the $1/N_c$ corrected theory
\cite{bub} possesses two
independent parameters that may be adjusted at will.
We present another argument,
of a different nature,
rooted in the nonuniversality of the
critical stiffness of an NLSM in four dimensions,
which does not allow one
to reach the conclusion of \cite{kb}
in the framework
of the NLSM approach.
Our observation
also applies to the NLSM
description of precritical fluctuations in general systems.
It also allows us to show that the additional
cutoff discussed below cannot be related to the inverse coherence length
of the radial fluctuations in the effective potential, as suggested in
\cite{prl,kb}.
By deriving
$G_{\pi, \sigma}$, the authors of \cite{kb} essentially extracted two
characteristics of the initial system:
the stiffness of the phase fluctuations in the degenerate minimum
of the effective potential
and the mass of the radial fluctuation. However, knowledge of these
characteristics does not, in principle, allow one to judge
whether directional fluctuations will destroy
long-range order or whether the system will possess a BCS-like
phase transition.
The reason is that the critical stiffness of the nonlinear sigma
model is not a universal quantity in $3+1$ dimensions.
Hence, in principle, knowledge of the stiffness of the NJL model
is not sufficient for finding the
{\it position} of the phase transition in the effective
nonlinear sigma model.
The situation is just like that in a Heisenberg
magnet, where the critical temperature depends
on the stiffness together with the lattice spacing and lattice
structure. Thus, given only a
stiffness coefficient, one cannot determine the
temperature of the phase transition\comment{Due to this
reason one can not refer to lattice simulation for
finding the value of the critical stiffiness as it was done in \cite{kb,prl}
since implicitly these numerical values contain information
of lattice structure and are not universal.}.
This is
in contrast to the 2D case, where the position of the
KT transition can be deduced from the stiffness coefficient \cite{kt}.
\comment{
In two dimensions the critical stiffness of the O(2) nonlinear
sigma model is a universal quantity and is given by
$\beta_{KT}=2/\pi$ \cite{kt}, so by comparing
it with the stiffness coefficient
derived from the initial theory
(the phase stiffness of the chiral GN model in $D=2$ is
$\beta = N/4\pi$ ),
one can judge if the
system has enough phase stiffness to
preserve quasi-long range order
as we have shown in \cite{gn1}
That is, one can determine
the number of field components N
that is needed to remain below the
position of the Kosterlitz-Thouless transition.
This is
in contrast to the $D=3+1$ case.}
Let us recall a procedure for
expressing the critical stiffness
of the O(4) nonlinear sigma model via
an additional parameter:
one can relax the constraint $n_i^2 =1$
and introduce an extra integration over the
Lagrange multiplier $\lambda$, rewriting Eq. (\ref{@prop}) as:
$(\beta/2) \int d^4x
\left\{ [\partial n_i(x)]^2+ \lambda \left[ n_i^2(x)-1\right] \right\}$.
Integrating out the $n_i(x)$-fields yields:
\begin{equation}
{\cal A}_0[\lambda]=-\beta\int d^4x \f{\lambda(x)}{2}+\f{N_n}{2}\mbox{Tr\,}\ln\left[
-\partial^2+\lambda(x)
\right],
\label{newaction}
\end{equation}
where $N_n$ is the number of components of $n_i(x)$ and $\mbox{Tr\,}$ denotes the
functional trace.
This yields a gap equation:
\begin{equation}
\beta=N_n\int \f{d^4k}{(2\pi)^4}\f{1}{k^2+\lambda} .
\label{@secge}\end{equation}
The model has a phase transition at a critical stiffness
that depends on an unspecified additional cutoff parameter that
should be applied to the gap equation:
\begin{equation}
\beta^{\rm cr}=N_n\int \f{d^4k}{(2\pi)^4}\f{1}{k^2}.
\label{CriticalStiff}
\end{equation}
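With a sharp cutoff $\Lambda$ on the momentum integral (the choice of this cutoff is precisely the unspecified parameter discussed here), the critical stiffness evaluates to $N_n\Lambda^2/16\pi^2$; a numerical sketch:

```python
import math

# Closed form of beta_cr = N_n * Integral d^4k/(2 pi)^4 1/k^2 up to Lam:
# the 3-sphere surface 2*pi^2 reduces it to N_n * Lam^2 / (16 pi^2).
def beta_cr_analytic(N_n, Lam):
    return N_n * Lam**2 / (16.0 * math.pi**2)

# Midpoint-rule check of the same radial integral (k^3/k^2 = k).
def beta_cr_numeric(N_n, Lam, steps=100000):
    pref = 2.0 * math.pi**2 / (2.0 * math.pi) ** 4  # angular factor / (2 pi)^4
    dk = Lam / steps
    acc = sum(((i + 0.5) * dk) * dk for i in range(steps))  # = Lam^2/2
    return N_n * pref * acc
```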
For example, in the case of magnets the additional cutoff
needed in Eq. (\ref{CriticalStiff})
is naturally related to the lattice spacing.
In \cite{prl} a
criterion was proposed stating that one can relate the inverse
coherence length, extracted from the radial fluctuations in the
effective potential of an initial theory, to the cutoff
in the integral (\ref{CriticalStiff}), so that all the parameters
of the theory are expressed through quantities
derived from the initial model, and thus this
modified model possesses a universal
critical stiffness.
However, there is unfortunately no reason
to relate the cutoff needed in Eq. (\ref{CriticalStiff})
to the coherence length of the modulus fluctuations;
moreover, we show that
this procedure in general leads to unphysical consequences.
It should also be observed that this
does not make the theory consistent
anyway, because of the following
circumstance.
The authors of \cite{kb} suggested that, since pions in the
symmetry-broken
phase are composite, they are not ``defined" over
length scales much shorter than the inverse
binding energy of the pair wave function, which is
equal to $2 M$. Thus the authors of \cite{kb} performed the integral
in (\ref{CriticalStiff}) up to the cutoff $2M$.
However, since,
as suggested in \cite{kb},
the pion fields are not ``defined" over
length scales much {\it shorter} than the inverse
binding energy, this can only serve as an
estimate for the {\it upper} boundary of what would
be the universal critical stiffness value.
So, unfortunately,
one cannot conclude that symmetry breakdown is absent
in such a modified theory by observing that the
stiffness derived from the initial model is
smaller than the maximal
possible value of the would-be universal critical stiffness.
It was also supposed in \cite{prl}
that relating the coherence length to the cutoff
in Eq. (\ref{CriticalStiff})
yields a universal criterion
for judging the
nature of symmetry breakdown in
general physical systems.
There is a simple counterexample:
in the case of a
strong-coupling superconductor, the
effective nonlinear sigma model
that describes
fluctuations in a degenerate valley
of the effective potential is a 3D XY-model. In
the continuum case it is a free field
theory and has no phase transition at all.
The phase transition appears only in the lattice theory
and, of course, its temperature
depends on the lattice spacing.
\comment{
In the case of NJL model there are however no length scales
that can be used to estimate position of the
phase transition of the effective nonlinear sigma model.
There was made an attempt
of finding such a scale in the paper \cite{kb},
namely it was suggested that
since the pion fields are composite, "they are not
defined over length scales much shorter
than the inverse binding energy of the pair wave function
which is equal to $2M$".
Following to this assumption the authors of \cite{kb}
performed the integral in Eq.~(\ref{CriticalStiff})
up to the cutoff $2M$ proposing it as an estimate
for the critical stiffness of the effective sigma model.
However, unfortunately, there is no reason to use this scale for
such estimate. In fact even if to consider following to \cite{kb}
that "pion fields are not defined over the
scales {\it shorter} than that"
it would be an estimate for upper boundary of the
value of the critical
stiffness and thus one could not find from it
if the directional fluctuations in NJL can restore
chiral symmetry at low $N_c$.
Moreover if
do not take it into the account and proceed
exactly along the same
lines as in \cite{kb} this construction
would lead to incorrect result of absence of superconductivity
in a strong-coupling superconductor too:
}
As discussed above, with increasing coupling strength the
low-temperature phase stiffness of the effective 3D XY model tends
to the plateau value
$J=n/4m$, where $n$ and $m$ are the density and the mass
of the fermions \cite{sc}. Thus the temperature of the phase transition
of the effective 3D XY-model is
\begin{equation}
T_c^{3D XY} \propto \frac{n}{m} a,
\label{xy}
\end{equation}
where
$a$ is the lattice spacing.
One should remark that a careful analysis shows
that
a strong-coupling superconductor possesses two characteristic length
scales: the size of the Cooper pairs, which tends to zero with increasing
coupling strength,
and a coherence length, which tends to infinity with increasing coupling
strength as the system evolves towards a weakly nonideal gas
of true composite bosons \cite{R,pist}.
First, if one relates the constant $a$ in (\ref{xy}) to the size of
the Cooper pairs,
following the arguments of \cite{kb},
one arrives at the incorrect conclusion that
superconductivity is absent in strong-coupling superconductors,
in the same way as the authors
of \cite{kb} came to the conclusion
of the nonexistence of symmetry
breakdown
in the NJL model.
This is in direct
contradiction with the
behavior of strong-coupling superconductors discussed above.
Second, if one attempts to relate $a$ in (\ref{xy})
to the second length scale
of the theory, namely the true coherence length, which
tends to infinity with increasing coupling strength,
then one also arrives at a qualitatively incorrect conclusion
\cite{rem}.
Thus, in general, the nonlinear sigma-model approach
to precritical fluctuations possesses an additional
fitting parameter, namely the
cutoff in the gap equation (\ref{CriticalStiff}),
which cannot be related to the inverse coherence length
extracted
from the radial fluctuations in an effective potential.
Hence, within the NLSM approach one
cannot prove that the NJL model
necessarily displays a directional-fluctuation-driven
restoration of chiral
symmetry at low $N_c$.
\section{Chiral fluctuations at finite temperature and a modified NJL
model with a pseudogap}
This section is based on the paper \cite{mprd}.
The authors of \cite{kb} employed NLSM
arguments in an attempt to show that the NJL model cannot serve
for the study of chiral symmetry breakdown. We have shown
above that this conclusion appears to be incorrect,
since the critical stiffness
in $3+1$ dimensions is not a universal quantity and one
has an additional fitting parameter.
This is an inherent feature of the discussed NLSM approach
in 3+1 dimensions
(compare with the
cutoffs
discussions in nonrenormalizable models in a different approach
\cite{cut}-\cite{bub}, and also \cite{bl}).
The above circumstance allows one to fix the critical
stiffness from phenomenological considerations.
However, we argue below that what is missed
in \cite{kb} is that, in principle, the low-$N_c$
fluctuation instabilities, when properly treated,
have a clear physical meaning.
Moreover, we argue that
one can employ a NLSM
for describing the chiral fluctuations
(e.g. at finite temperature),
provided that special care is taken of
the additional cutoff parameter.
Indeed, it was already discussed in the literature that at finite temperatures
the chiral phase transition should be accompanied by
developed fluctuations (\cite{ht,ht2} and references therein).
We argue that this
process at low $N_c$ should give rise to a
phase analogous to the pseudogap phase that may be conveniently
described within a nonlinear sigma model approach. There are
indeed other ways to describe these phenomena,
however the NLSM
approach seems to be especially convenient. The description of
the two-step chiral phase transition and appearance of the
intermediate phase requires one to study the system at the next-to-mean-field
level. Unfortunately, the NJL model is not renormalizable
and does not allow one to draw any conclusions about the
importance of fluctuations in closed form \cite{bub}.
On the other hand, a pseudogap phase
is a general feature of Fermi systems with composite bosons.
The NLSM construction discussed below, because of
its nonperturbative nature, cannot be
regarded as a regular approximation, but it may be
considered as a tractable modification of the NJL model that
has a pseudogap.
One can also find an additional
motivation for employing these arguments in the fact that
the NLSM approach allows one
to prove the existence of the phase analogous to pseudogap phase
in the chiral GN model \cite{gn1,gn2}
which is the closest relative of the NJL model.
Also the NLSM approach works well for the description of
precritical fluctuations in superconductors \cite{sc} - where
essentially the same results have been obtained
with different methods and in different models.
We stress that
these phenomena are a general feature
of any Fermi system with attraction.
Also, to a certain extent similar crossovers
are known in a large variety of condensed matter systems.
In particular, besides superconductors
we might mention the
excitonic condensate in
semiconductors,
Josephson junction arrays,
itinerant and local-moment theories
of magnetism, and ferroelectrics.
Let us now consider the chiral fluctuations in the NJL model
at finite temperature.
Then,
following standard dimensional reduction
arguments [see e.g. \cite{Wil}], the chiral fluctuations should be
described by a $3D \ O(4)$-sigma model.
Thus one has
the following gap equation for the effective NLSM
(i.e. the finite temperature analog of (\ref{@secge})):
\begin{equation}
\frac{J_T}{T} = N_n \int \frac{d^3 k}{(2\pi)^3} \frac{1}{k^2+\lambda}
\label{gt}
\end{equation}
The temperature of the phase transition of the three-dimensional
classical $O(4)$ sigma
model with stiffness $J_T$ is expressed via the additional parameter
${\tilde \Lambda}_T$ needed in (\ref{gt}) as:
\begin{equation}
T_c = \frac{\pi^2}{2}\frac{J_T}{{\tilde \Lambda}_T}
\label{tc}
\end{equation}
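Assuming $N_n=4$ for the $O(4)$ model and a sharp cutoff ${\tilde \Lambda}_T$ (both assumptions for this sketch), the relation above follows from the massless limit of the gap equation, since $\int^{\tilde\Lambda_T} d^3k/(2\pi)^3\,k^{-2} = \tilde\Lambda_T/2\pi^2$; a minimal consistency check:

```python
import math

# At criticality (lambda -> 0) the 3D gap equation gives
# J_T / T_c = N_n * Lam / (2 pi^2), hence T_c = (pi^2/2) J_T / Lam for N_n = 4.
def Tc(J_T, Lam, N_n=4):
    return J_T / (N_n * Lam / (2.0 * math.pi**2))
```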
The stiffness of thermal fluctuations
$J_T$ can be readily extracted from the NJL model.
At finite temperature the inverse bosonic propagator of the collective
field $\pi$ for small $q$ can be written as:
\begin{eqnarray}
G^{-1}_\pi &= & -2^{D/2} N_c \int \frac{d^3p}{(2\pi)^3} \sum_n
\left[ \frac{T}{(p^2+M^2+\omega_n^2)^2}\right] q^2 =
\nonumber \\
& - & 2^{D/2} N_c \int \frac{d^3 p}{ (2 \pi)^3}
\left[ \frac{1}{8}
\frac{1}{(p^2+M^2)^{3/2}}
\tanh
\left( \frac{\sqrt{p^2+M^2}}{2 T}\right)
-\frac{1}{16 T}\frac{1}{p^2+M^2} \cosh^{-2}
\left(
\frac{\sqrt{p^2+M^2}}{2T}
\right)
\right] q^2 = \nonumber \\
& - & K (T,\Lambda_T, M, N_c) q^2,
\label{st0}
\end{eqnarray}
where $\Lambda_T$ is a momentum cutoff.
The propagator (\ref{st0}) gives the
gradient term that allows one to set up an effective
classical $3D ~ O(4)$-nonlinear sigma model:
\begin{equation}
E=\frac{J_T (T, \Lambda_T, M, N_c)}{2} \int d^3 x [\partial n_i (x)]^2,
\label{Hei}
\end{equation}
where
\begin{equation}
J_T(T, \Lambda_T, M, N_c) = K(T, \Lambda_T, M, N_c) ~M^2(T,\Lambda_T)
\label{st1}
\end{equation}
is the stiffness of the thermal fluctuations in the degenerate
valley of the effective potential. The temperature-dependent quark
mass $M$ that enters this expression
is given by the standard mean-field gap equation, which
should also be regularized with the cutoff $\Lambda_T$:
\begin{equation}
\f{1}{g_0}=2\times2^{D/2}\sum_n
\int \f{d^3p}{(2\pi)^3}\f{T}{p^2+ M^2 +\omega_n^2}.
\label{gap}
\end{equation}
It can be easily seen that,
when we approach the temperature $T^*$
where the mass $M(T)$ becomes zero, the stiffness
$J(T,\Lambda_T, M, N_c)$ also tends to zero.
Formula (\ref{Hei}) defines a generalized Heisenberg model
with a {\it temperature-dependent stiffness coefficient}.
The position of the phase-disorder transition in such a
system should be determined self-consistently by
solving the system of equations for $T_c$ and $M(T_c)$.
Apparently just as
in a superconductor
with a pseudogap,
the phase transition in such a system is a competition
between the thermal depletion of the gap modulus (this
roughly corresponds to thermal pair breaking in a superconductor)
and the process of thermal excitations of
the directional fluctuations in the
degenerate minimum of the effective potential. The
``BCS" limit corresponds to the situation where $T^*$ merges with
$T_c$ and it is easily seen that this scenario always holds true at
$N_c \rightarrow \infty$. That is, at infinite $N_c$ the mean-field
theory is always accurate, just as BCS theory works well in
weak-coupling superconductors.
of this NLSM construction, at low $N_c$ the scenario
of the phase transition
depends on the choice of $M(0), \Lambda_T $,
and ${\tilde \Lambda}_T$, which should be fixed from
phenomenological considerations.
\section{Conclusion}
Precursor pairing fluctuations
are a general feature of any Fermi system
with composite bosons,
and they dominate the phase
diagram of strong-coupling and low-carrier-density superconductors.
At the moment it is a subject of increasing interest in
different branches of physics.
In the first part of this paper we briefly
outlined the nonlinear sigma model
approach to this phenomenon in superconductors
in two and three dimensions.
In the second part we discussed similar phenomena
in the chiral Gross-Neveu and Nambu--Jona-Lasinio
models. This discussion should be relevant for
hot QCD and color superconductors.
We also note that
in some sense similar phenomena are known
in a large variety of condensed matter systems,
in particular, besides superconductors
we can mention itinerant and local-moment theories
of magnetism, the excitonic condensate in
semiconductors, ferroelectrics, and Josephson junction arrays \cite{knot}.
We would like to stress that the main purpose of this paper is to
summarize present discussions of precursor
fluctuations in the Gross-Neveu and Nambu--Jona-Lasinio
models. We illustrated the discussion
with a few examples from superconductivity, outlining
the occurrence of similar phenomena in several arbitrarily chosen
models of superconductors with pseudogaps. This paper therefore
cannot be regarded as a review of this phenomenon in superconductivity,
which has recently evolved into a very large branch of condensed-matter
physics. Accordingly, our references to the papers on superconductivity
are by definition incomplete; for a more complete set of references the reader
may consult the corresponding reviews on superconductivity [e.g. the review
\cite{ranrev}].
\comment{
Indication of possible importance of the pseudogap
concept in particle physics is the mentioned above existence of this
phenomenon in the chiral Gross-Neveu model at low $N$.
Even though these results can not be directly generalized to
NJL model, one can guess that in analogy
to 3D XY-model approach to strong-coupling
and low carrier density superconductivity, one
can set up a nonlinear 3D O(4)-sigma
model with temperature depended stiffness
coefficient as a toy model for QCD at finite temperatures
that would possess two characteristic temperatures
corresponding to discussed in this paper $T_c$ and $T^*$.
Speaking about the BCS-BEC crossover,
precritical fluctuations and the pseudogap phase,
it should be noted as well that
in some sense similar phenomena are known
in a large variety of condensed matter systems,
in particular, except for superconductors
we can mention itinerant and local-momentum theories
of magnetism, exitonic condensate in
semiconductors, ferroelectrics and Josephson junction arrays.}
\begin{acknowledgments}
The author is grateful to Prof. T. Appelquist, Dr. V. Cheianov,
Prof. S. Hands, Prof. H. Kleinert, Prof. A.J. Niemi and Prof.
L.C.R. Wijewardhana
for discussions and/or
useful remarks, and to Prof. T. Hatsuda and Prof. D. Blaschke
for communicating references.
\end{acknowledgments}
Q: URL is showing MVC controller name twice
I am working with .NET MVC and AngularJS.
HomeController
public ActionResult Index()
{
return View();
}
In the Angular controller there is a service call to get all the item details:
[HttpGet]
public JsonResult GetItemDetails()
{
// return item list
}
If I run the solution without opening HomeController in Visual Studio, I get a 404 error saying that the resource is not found.
One more thing I noticed in the network tab: the called URL is "/Home/Home/GetItemDetails".
But when I open HomeController in Visual Studio, everything works fine.
Why is this happening?
A: Make sure your routing values and the URL you called are one and the same.
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
new { controller = "Home", action = "Index", id = "" } // Parameter defaults
);
For this, the URL looks like /Home/Index
Verify the URL you are using in Angular and you may be adding the controller name twice somewhere.
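To see why the missing slash duplicates the controller name, here is a small Node sketch (the `localhost` base URL is hypothetical, standing in for the page `/Home/Index`) using standard WHATWG URL resolution: a relative URL is resolved against the current page's path, while a root-relative URL (leading slash) is resolved against the site root.

```javascript
// Why "Home/GetItemDetails" becomes "/Home/Home/GetItemDetails":
// relative URLs replace only the last path segment of the base page.
const base = "http://localhost/Home/Index"; // hypothetical current page

// Relative: resolved against /Home/ -> controller name appears twice
console.log(new URL("Home/GetItemDetails", base).pathname);
// -> "/Home/Home/GetItemDetails"

// Root-relative: resolved against the site root -> correct
console.log(new URL("/Home/GetItemDetails", base).pathname);
// -> "/Home/GetItemDetails"
```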
A: $http.get("Home/GetItemDetails") is the URL I had used to call the controller method.
I missed the leading slash.
So the solution is $http.get("/Home/GetItemDetails")
Thank you.
Q: Adapt Visual Studio solution to Linux

I have a .NET project (previously developed in Visual Studio 2022 on Windows) that is based on an .sln solution that encompasses several subprojects. The structure is like:
App.PluginAdmin.sln
App.PluginAdmin/
    App.PluginAdmin.csproj
    Program.cs
    ...
PluginBase/
    PluginBase.csproj
    Plugin.cs
    ...
Plugins/
    PluginOne/
        PluginOne.csproj
        Plugin.cs
        ...
    PluginTwo/
        PluginTwo.csproj
        Plugin.cs
        ...
    ...
While the App.PluginAdmin.sln solution is like:
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.2.32516.85
MinimumVisualStudioVersion = 10.0.40219.1
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "App.PluginAdmin", "App.PluginAdmin\App.PluginAdmin.csproj", "{E7294DF7-4D11-4927-BA1B-8D1DE18643E1}"
ProjectSection(ProjectDependencies) = postProject
{0C027B86-6972-46DD-85B0-...} = {0C027B86-6972-46DD-85B0-...}
{10C01D7F-1269-425F-ABE7-...} = {10C01D7F-1269-425F-ABE7-...}
{15224486-DDDB-4A24-9BB5-...} = {15224486-DDDB-4A24-9BB5-...}
{1A2A2546-A7E0-47D8-B3E7-...} = {1A2A2546-A7E0-47D8-B3E7-...}
EndProjectSection
EndProject
Project("{9A19103F-16F7-4668-...}") = "PluginBase", "PluginBase\PluginBase.csproj", "{DBFFB8B1-E30C-43A2-9B30-...}"
EndProject
Project("{2150E333-8FDC-42A3-...}") = "Plugins", "Plugins", "{FC7A31E0-F80B-4406-94A5-B4732A303C10}"
EndProject
Project("{9A19103F-16F7-4668-...}") = "PluginOne", "Plugins\PluginOne\PluginOne.csproj", "{B7C0DF85-D76E-42C0-8EBC-...}"
EndProject
Project("{9A19103F-16F7-4668-...}") = "PluginTwo", "Plugins\PluginTwo\PluginTwo.csproj", "{CC49BC06-0E8D-40DD-9222-...}"
EndProject
...
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{E7294DF7-4D11-...}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{E7294DF7-4D11-...}.Debug|Any CPU.Build.0 = Debug|Any CPU
{E7294DF7-4D11-...}.Release|Any CPU.ActiveCfg = Release|Any CPU
{E7294DF7-4D11-...}.Release|Any CPU.Build.0 = Release|Any CPU
{DBFFB8B1-E30C-...}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{DBFFB8B1-E30C-...}.Debug|Any CPU.Build.0 = Debug|Any CPU
{DBFFB8B1-E30C-...}.Release|Any CPU.ActiveCfg = Release|Any CPU
{DBFFB8B1-E30C-...}.Release|Any CPU.Build.0 = Release|Any CPU
{0C027B86-6972-...}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{0C027B86-6972-...}.Debug|Any CPU.Build.0 = Debug|Any CPU
{0C027B86-6972-...}.Release|Any CPU.ActiveCfg = Release|Any CPU
{0C027B86-6972-...}.Release|Any CPU.Build.0 = Release|Any CPU
...
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(NestedProjects) = preSolution
{0C027B86-6972-...} = {64426B8E-3B99-...}
{B7C0DF85-D76E-...} = {FC7A31E0-F80B-...}
.....
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {5E4FC05C-...}
EndGlobalSection
EndGlobal
Currently, I need to get the project running on GNU/Linux in order to develop it there. So I need to be able to launch the application from the terminal using commands. However, I don't know how to do this using the solution, since in Visual Studio it is very simple (choosing the solution and hitting play).
I have read here that I can use make to automate the compilation of all the elements, but **how do I do this**? What would be the process? Doing `dotnet build` for each project and then running it with `dotnet run`?
I have tried to use cmake-converter like stated here:
cmake-converter -s ProjectingPlus.Monitor.sln
But I always get the output:
0.000037 processes count = 8
0.000062 warnings level = 2
0.044046 1> ERR : Unknown project type at /home/user/git/App/ProjectingPlus.Monitor/ProjectingPlus.Monitor.csproj
0.044841 7> ERR : Unknown project type at /home/user/git/App/Plugins/PluginOne/PluginOne.csproj
0.045149 8> ERR : Unknown project type at /home/user/git/App/Plugins/PluginTwo/PluginTwo.csproj
.....
0.054685 Conversion of /home/user/git/App/App.PluginAdmin.sln finished
In Sacramentum Caritatis, Pope Benedict XVI calls us to find relief for our hunger in 'the food of truth', inviting us into the sacrificial meal, from where we draw our very life. Pope Benedict asks all people to draw near to God's love, because it holds the deepest desire of the human heart. In this book, Anna Burke ponders some of the images and metaphors from Sacramentum Caritatis and offers resources for personal and communal prayer and for group reflection.
Part One: Prayers At Table leads us on a journey through the Mass. The prayers focus on the various liturgical moments of the sacred rite and help to heighten our awareness of the communion of all creation in the Sacred Mystery. Part Two: Stories At Table explores some key texts from Scripture which direct us to the table of Communion.
The 2018 VFA Futsal Jamboree will take place on the weekend of December 22-23rd. Here are the details on how to get involved.
With over 150 teams participating in last year's edition, the Futsal Jamboree is one of the larger youth events in the country and is sure to be a really fun event for all clubs and academies across Lower Mainland BC.
It is sanctioned by BC Soccer, costs only $240 per team, and will guarantee three games for every participating team and will take place at the Richmond Olympic Oval, the venue that was used for the 2010 Vancouver Winter Games.
Clubs affiliated with Capital Football in the Australian Capital Territory (ACT) – and surrounding areas of New South Wales – competed in 2014 for the Capital Football Federation Cup. Teams from the same Club playing in multiple divisions were allowed to compete. This knockout competition was won by Tuggeranong United, their 4th title.
Winning the 2013 Federation Cup also entitled Tuggeranong United to become the ACT's sole qualifier for the 2014 FFA Cup, entering at the Round of 32. The original intention from Capital Football was that the Federation Cup would be the qualifying tournament to determine the ACT qualifier, but match scheduling issues meant the 2014 winner would not be decided until after the qualifier needed to be named. To overcome this, Capital Football announced that the winner of the ACT's 2014 pre-season competition would be the ACT's qualifier, but Tuggeranong United successfully appealed and was named the ACT's FFA Cup entrant for 2014.
Schedule
First round
22 teams from various divisions of the ACT State Leagues, as well as 4 Masters teams, entered into the competition at this stage. Matches in this round were played on 6 April.
Byes: UC Pumas (3), Weston Creek (3), ANU FC (SL2) (4), Narrabundah (8), Gungahlin Juventus (5), and Gungahlin United Masters 2 (-).
Second round
Matches in this round were played on 13 April.
Third round
8 Clubs from the ACT National Premier League (Tier 2) entered into the competition at this stage. Matches in this round were played between 2–23 May.
Quarter-finals
All matches in this round were completed by 3 July.
Semi-finals
Matches in this round were played between 25 July and 1 August.
Final
The winner also qualified for the 2014 FFA Cup Round of 32.
Ansembourg (Luxembourgish: Aansebuerg, German: Ansemburg) is a village in the commune of Helperknapp and the canton of Mersch in Luxembourg. Ansembourg has 40 inhabitants (2001). The village lies on the Eisch. Despite its small population, Ansembourg has two castles: Ansembourg Castle ("the old castle") on a hill and the Château d'Ansembourg ("the new castle") in the river valley.
Ansembourg was part of the commune of Tuntange until it merged with the commune of Boevange-sur-Attert on 1 January 2018 to form the present commune of Helperknapp.
\section{Introduction}
A discrete-time dynamical system is specified by a function $f$ from a space $X$ to itself. One of the most important problems in the study of dynamical systems is to understand the limiting or asymptotic behavior of such systems; in particular, the limiting distribution of the sequence of iterates $x, f(x), f(f(x)), \dots$. Combinations of such distributions give rise to {\em invariant measures} of the system, which describe the asymptotic behavior in statistical terms. The invariant measures are supported on {\em invariant sets}, which provide a topological description instead. Together, these invariant objects completely characterize the asymptotic behavior of the system.
Ideally, given a dynamical system, we would like to be able to decide properties of its asymptotic behavior or to compute (to within some approximation) the invariant objects describing it. Unfortunately, in many cases, simple questions regarding this behavior are undecidable \cite{Mo91,AsMalAm95, Wol02,Ka09} and computing the relevant invariant objects is impossible \cite{BY,BraYam07, GalHoyRoj07c,BBRY}. The general phenomenon behind these results is that, for many classes of dynamical systems, it is possible to `embed' a Turing machine $M$ in the dynamical system so that achieving the algorithmic task we are concerned with is equivalent to deciding whether $M$ halts.
In \cite{BGR}, Braverman, Grigo, and Rojas showed that under the introduction of noise to a dynamical system (for almost all `natural' noise functions), the set of invariant measures becomes computable. Moreover, in many cases, this set is computable efficiently. Specifically, they show (Theorem C in \cite{BGR}) that if the noise is Gaussian then there is a unique invariant measure $\mu$; moreover, if $f$ is polynomial-time integrable (convolutions of polynomials in $f$ with polynomial functions can be integrated in polynomial-time), then computing this invariant measure to within precision $\delta$ can be done in time $O({\mathrm{poly}}(\log 1/\delta))$.
The purpose of this paper is to investigate the space complexity of computing the invariant measure of a noisy dynamical system. The algorithm given in Theorem C of \cite{BGR} for computing the invariant measure requires space $O({\mathrm{poly}}(\epsilon^{-1}\log\delta^{-1}))$. By applying (and developing) techniques for space-bounded computation, we prove (in Section \ref{sect:ubnd}) the following refinement of Theorem C that runs in space polylogarithmic of that of the original algorithm (albeit at a cost of a quasi-polynomial increase in the running time).
An additional assumption that we need to make to obtain tight results is that the function $f$ itself is not a source of additional space complexity. We say that $f$ is $S+$log-space integrable, if it is possible to integrate the convolution of powers of $f$ with polynomial functions with precision $\zeta$ in space $O(S+\log\log 1/\zeta)$ (see Section \ref{sec:prelim} for a precise definition)\footnote{In fact, the conclusion of Theorem~\ref{ubnd} follows even if these convolutions
can be computed in space ${\mathrm{poly}}(S+\log\log 1/\zeta)$.}.
\begin{theorem}\label{ubnd}
Let $X=[0,1]$.
If the noise $p_{f(x)}^{\epsilon}(\cdot)$ is Gaussian, and $f$ is $(\log \frac{1}{\epsilon})+$log-space integrable, then the computation of the invariant measure $\mu$ at precision $\delta$ can be done in space $O\left({\mathrm{poly}}\left(\log\frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)\right)$.
\end{theorem}
We can also replace the assumption that $f$ is $(\log \frac{1}{\epsilon})+$log-space integrable with the assumption that $f$ is both log-space computable (i.e. that its values can be computed to within error $\zeta$ in space $O(\log \log 1/\zeta)$) and analytic with bounded Taylor series coefficients. In particular, we show that
\begin{theorem}\label{ubndalt} Let $X=[0,1]$.
If the noise $p_{f(x)}^{\epsilon}(\cdot)$ is Gaussian, and $f$ is log-space computable, smooth, and (for some $\eta>0$) satisfies $|\partial^{k}f(x)| \leq k!\eta^{k}$ for all $x$, then the computation of the invariant measure $\mu$ at precision $\delta$ can be done in space $O\left({\mathrm{poly}}\left(\log \eta + \log\frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)\right)$.
\end{theorem}
For the sake of simplicity, in this note we focus on the case where $X = [0,1]$ (as in \cite{BGR}). Both Theorems \ref{ubnd} and \ref{ubndalt}, however, can be generalized to the case where $X=[0,1]^{d}$. For fixed $d$, the space bounds in Theorems \ref{ubnd} and \ref{ubndalt} remain the same; for variable $d$, the space bounds gain an extra factor of ${\mathrm{poly}}(d)$. We explain this in further detail in Remark \ref{rem:dimension}.
In order to generalize Theorem C of \cite{BGR} and prove Theorems \ref{ubnd} and \ref{ubndalt}, we require a method to exponentiate $n$ by $n$ matrices up to powers potentially as large as $2^{{\mathrm{poly}}(n)}$
in space polylogarithmic in $n$ (the traditional method of iterative squaring only works for powers up to ${\mathrm{poly}}(n)$). To the best of our knowledge, there is no existing solution to this problem that operates in polylogarithmic space. In Section \ref{sect:matpow}, we present such a solution based on approximating $M^{E}$ via $p(M)$ for some low degree polynomial $p$ (Theorem \ref{matpowers}). This theorem is arguably the main technical innovation of this paper:
\smallskip
\noindent
{\bf Theorem~\ref{matpowers}.}
{\em Given an $n$ by $n$ matrix $M$ whose entries are given up to precision $2^{-{\mathrm{poly}}(n)}$ and an integer exponent $E = O(2^{{\mathrm{poly}}(n)})$, there exists an algorithm that computes $M^{E}$ in space $O({\mathrm{poly}}(\log n))$ to within precision $2^{-{\mathrm{poly}}(n)}$ if $||M^{E}|| \leq 2^{n}$ (and otherwise reports that $||M^{E}|| > 2^{n}$).
}
\smallskip
Finally, in Section \ref{sect:lbnd}, we prove a corresponding lower bound, showing that this upper bound is tight; the space complexity of computing the invariant measure of such a system cannot be further reduced.
\begin{theorem}\label{lbnd}
Any algorithm that can compute the invariant measure $\mu$ to within precision $\delta$ of a dynamical system with Gaussian noise kernel $p_{f(x)}^{\epsilon}(\cdot)$ and analytic transition function $f(x)$ (that uniformly satisfies $|\partial^{k}f(x)| \leq k!\eta^k$ for some $\eta = {\mathrm{poly}}(\epsilon^{-1})$) requires space at least $\Omega\left(\log\frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)$.
\end{theorem}
These theorems provide evidence for the Space-Bounded Church-Turing thesis (SBCT), introduced by the authors in \cite{BRS}. The SBCT roughly states that a physical system with ``memory'' $M$ is only capable of performing computation in the complexity class $\mathbf{SPACE}(M^{O(1)})$, where memory is a measure of the amount of information the system can preserve from one timestep to the next. For dynamical systems with Gaussian noise of variance $\epsilon$, one can show that $M = O(\log \frac{1}{\epsilon})$; the SBCT thus suggests that such dynamical systems are limited to computations in $\mathbf{SPACE}({\mathrm{poly}}\log\frac{1}{\epsilon})$, which is implied by Theorem \ref{lbnd}. See Appendix \ref{sec:sbct} for more details.
\subsection{Open Problems}
In this paper we focus exclusively on the case where the noise is Gaussian. It is straightforward to adapt the proofs in this paper to other choices of noise functions. It remains unclear, however, how the space complexity of computing the invariant measures of $f$ depends precisely on the noise function. More specifically, we would like to be able to answer the following problem.
\begin{problem}
Can we associate with every random perturbation a value $M$ so that computing the invariant measure of a dynamical system with this noise can be done in space $O({\mathrm{poly}}(\log M + \log\log 1/\delta))$, and moreover that this is tight: given a random perturbation with value $M$, there is some function $f$ whose invariant measures subject to this random perturbation take space $\Omega({\mathrm{poly}}(\log M + \log\log 1/\delta))$?
\end{problem}
For the case when the random perturbation is Gaussian with variance $\epsilon^2$, this note shows that it suffices to take $M = \epsilon^{-1}$ (or, in the $d$-dimensional case, $M = \epsilon^{-d}$).
\subsubsection*{Acknowledgments}
We would like to thank Eric Allender for his advice on space-bounded computation.
\section{Preliminaries} \label{sec:prelim}
\subsection{Discrete-time dynamical systems}
We begin by giving a brief description of the relevant aspects of the theory of discrete time dynamical systems, largely following the notation of \cite{BGR}. For a complete treatment see for instance \cite{Wal82,Pet83,Man87}.
A \textit{dynamical system} is a metric space $X$ representing the set of possible states along with a map $f: X\rightarrow X$ representing the transitions between states. Given an initial state $x \in X$ of the system, the \textit{trajectory} of $x$ is the sequence $\{x, f(x), f(f(x)), \dots\}$. To avoid certain technical pathologies that can arise, throughout the course of this paper we will assume that $X$ is a compact Lebesgue-measurable subset of $\mathbb{R}^d$ and the function $f$ is continuous.
Given a probability measure $\mu$ over $X$, we can define the pushforward of $\mu$ under $f$ via $(f\mu)(A) = \mu(f^{-1}(A))$ for all events $A \subset X$. A probability measure $\mu$ is \textit{invariant} for the dynamical system if $f\mu = \mu$.
In this note, we focus on the case of dynamical systems with noise. Denote by $P(X)$ the set of Borel probability measures over $X$ under the weak convergence topology. A \textit{random perturbation $\mathcal{S}$ of $f$} is given by a family $\{Q_{x}\}_{x\in X}$ of probability measures $Q_{x} \in P(X)$, one for each point $x \in X$, each representing the `noise' at that point. Then, instead of a deterministic trajectory, $\mathcal{S}$ induces a Markov chain over $X$, where $\mathrm{Pr}[x_{t+1} \in A] = Q_{f(x_{t})}(A)$ for all Borel sets $A \subset X$. Likewise, the pushforward of a probability measure $\mu \in P(X)$ under $\mathcal{S}$ is defined by $(\mathcal{S}\mu)(A) = \int_{X}Q_{f(x)}(A)d\mu$. As before, $\mu$ is an \textit{invariant measure} of the random perturbation $\mathcal{S}$ of $f$ if $\mathcal{S}\mu = \mu$.
For simplicity, throughout this paper we will assume that the domain $X$ is the $d$-dimensional cube $[0,1]^d$ (and for the majority of the discussion, we will focus on the case where $d$ equals $1$). Moreover, in all of our examples we will be concerned with the case of Gaussian noise with variance $\epsilon^2$, where the measure $Q_{x}$ is defined (in the case $d=1$) by the probability density function
\begin{equation*}
K_{\epsilon}(y, x) = C_{\epsilon}(x)\frac{1}{\epsilon\sqrt{2\pi}}\exp(-(y-x)^2/2\epsilon^2)
\end{equation*}
\noindent
where $C_{\epsilon}(x)$ is a normalization factor so that $K_{\epsilon}(y,x)$ has measure 1 over $[0,1]$; specifically, $C_{\epsilon}(x)$ is given by
\begin{equation*}
C_{\epsilon}(x) = \left(\int_{0}^{1}\frac{1}{\epsilon\sqrt{2\pi}}\exp(-(y-x)^2/2\epsilon^2)dy\right)^{-1}
\end{equation*}
\noindent
Note that if $\mu(x)$ is the density function of a probability measure on $[0,1]$, then the density $\rho = \mathcal{S}\mu$ of the pushforward measure under $\mathcal{S}$ is given by
\begin{equation*}
\rho(x) = \int_{0}^{1} \mu(y) K_{\epsilon}(f(y), x) dy
\end{equation*}
For this reason (following the notation of \cite{BGR}), we will write $K_{f}(y, x)$ as shorthand for $K_{\epsilon}(f(y), x)$. We will also write $p_{f(x)}^{\epsilon}$ to denote the family $Q_{f(x)}$ of probability measures for this dynamical system with noise (i.e. the probability measure induced by $K_{\epsilon}(y,f(x))$).
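As a concrete illustration, the pushforward $\rho = \mathcal{S}\mu$ can be approximated numerically by discretizing $[0,1]$ into grid cells and replacing the integrals with Riemann sums. The following Python sketch is purely illustrative (the function names and grid sizes are our own choices, not part of any algorithm in this paper); the per-center renormalization plays the role of the factor $C_{\epsilon}$:

```python
import math

def gauss(t, c, eps):
    """Unnormalized Gaussian density of variance eps^2 centered at c."""
    return math.exp(-(t - c) ** 2 / (2 * eps ** 2)) / (eps * math.sqrt(2 * math.pi))

def pushforward(mu, f, eps, n):
    """One step of the noisy dynamics on [0,1]:
        rho(x) = int_0^1 mu(y) K_eps(x, f(y)) dy,
    where K_eps(., c) is the Gaussian of variance eps^2 truncated to [0,1]
    and renormalized (the factor C_eps of the text).  mu holds density
    values at the n grid midpoints; integrals become midpoint sums."""
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    rho = [0.0] * n
    for y, m in zip(xs, mu):
        c = f(y)
        Z = sum(gauss(x, c, eps) for x in xs) * h  # approximates 1 / C_eps(c)
        for j, x in enumerate(xs):
            rho[j] += m * gauss(x, c, eps) * h / Z
    return rho
```

Because each truncated kernel is renormalized to total mass $1$, the pushforward conserves total probability mass exactly (up to floating-point error).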
\subsection{Space-bounded computation}\label{sect:spacebound}
The space complexity classes we consider in this paper are very small; they are (poly)logarithmic in the size of the output. To this end, we review some classic results from space-bounded computation.
A function $f$ is \textit{log-space computable} if it can be computed by a Turing machine with a read-only input tape, a one-way write-only output tape, and a read-write work tape of size $O(\log n)$. The following functions are known to be log-space computable:
\begin{enumerate}[(a)]
\item The composition of a constant number of log-space functions. The composition of two functions $f(g(x))$ can be performed by dividing the work tape into two tapes of size $O(\log n)$, and using the second tape to compute the desired bit of $g(x)$ whenever it is required for $f(g(x))$. By induction, this can be extended to any constant-depth composition of log-space functions. \label{comp}
\item Addition of ${\mathrm{poly}}(n)$ $n$-bit integers. This can be done via the standard grade-school addition algorithm (with some attention paid to how to represent carries). \label{add}
\item Multiplication of two $n$-bit integers. This follows from \ref{add} via the standard algorithm for long multiplication. \label{mult}
\item Multiplication of two $n$ by $n$ matrices, each of whose entries is an $n$-bit integer. This follows from \ref{add} and \ref{mult} (each entry is the sum of $n$ products of two $n$-bit integers). \label{matrixmult}
\item Division of two $n$-bit integers. This result is due to Chiu, Davida, and Litow \cite{CDL95}. The main idea of their proof is to represent both numbers in terms of their values modulo various small primes, perform the arithmetic operations modulo these small primes, and reconstruct the result via the Chinese Remainder Theorem. \label{div}
\item Multiplication of ${\mathrm{poly}}(n)$ $n$-bit integers. This can be done via the same technique of Chinese Remainder representation described in \ref{div} and is also described in \cite{CDL95}.\label{itmult}
\item Arithmetic operations on real numbers up to precision $2^{-{\mathrm{poly}}(n)}$. This follows from the preceding results (we need only additionally keep track of the location of the decimal/binary point, which requires at most a logarithmic amount of extra space). \label{realarith}
\item Computation of factorials and binomial coefficients. This follows from \ref{div} and \ref{itmult}. \label{facts}
\item Taking products, powers (with exponents of size ${\mathrm{poly}}(n)$), and compositions of polynomials with degree ${\mathrm{poly}}(n)$ and coefficients of size ${\mathrm{poly}}(n)$. This follows from \ref{facts}, \ref{itmult}, and \ref{add}. \label{polys}
\item Computing $\exp$, $\log$, and $\arctan$ of numbers to within precision $2^{-{\mathrm{poly}}(n)}$. This was originally shown by Alt in \cite{Alt84} (in all cases it suffices to approximate these functions via some sufficiently long prefix of their Taylor series). \label{funcs}
\item Computing $x^{E}$ to within precision $2^{-{\mathrm{poly}}(n)}$, where $x$ is a real number provided to precision $2^{-{\mathrm{poly}}(n)}$ and $E$ is a ${\mathrm{poly}}(n)$-bit integer. Again, this was shown by Alt in \cite{Alt84} and essentially follows from \ref{funcs} by writing $x^{E} = \exp(E\log x)$. For completeness, we provide a derivation of this fact in Appendix \ref{sect:numpow}.
\xdef\@savedenum{\the\c@enumi\relax}
\end{enumerate}
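To illustrate the Chinese Remainder technique behind \ref{div} and \ref{itmult}, the following Python sketch multiplies a list of integers by working modulo small primes and reconstructing the product. It is illustrative only: a true log-space implementation would stream the residues one prime at a time rather than store full lists as we do here.

```python
def crt_product(nums):
    """Product of a list of nonnegative integers computed the 'small-space'
    way: take residues modulo enough small primes, multiply the residues,
    and reconstruct via the Chinese Remainder Theorem."""
    # Enough primes that their product exceeds any possible answer.
    bits = sum(x.bit_length() for x in nums) + 1
    primes, p_prod, candidate = [], 1, 2
    while p_prod.bit_length() <= bits:
        if all(candidate % q for q in primes):  # trial division suffices here
            primes.append(candidate)
            p_prod *= candidate
        candidate += 1
    # Residues of the product modulo each small prime.
    residues = []
    for p in primes:
        r = 1
        for x in nums:
            r = (r * x) % p
        residues.append(r)
    # Chinese Remainder reconstruction.
    result = 0
    for p, r in zip(primes, residues):
        m = p_prod // p
        result = (result + r * m * pow(m, -1, p)) % p_prod
    return result
```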
There are some operations which, while we do not know how to perform in a logarithmic amount of space, we do know how to perform in a polylogarithmic amount of space. These include:
\begin{enumerate}[(a)]
\global\c@enumi\@savedenum
\item
Computing the composition of logarithmically many log-space functions. By similar logic as \ref{comp} above, this can be done in space $O(\log^2 n)$.
\item \label{matpow}
Computing $M^{{\mathrm{poly}}(n)}$ to within precision $2^{-{\mathrm{poly}}(n)}$, where $M$ is an $n$-by-$n$ matrix of ${\mathrm{poly}}(n)$-bit entries. This can be done in space $O(\log^2 n)$ via repeated squaring (this is essentially the logic behind Savitch's theorem, see \cite{Sav70}).
\item \label{det}
Computing the determinant (and more generally, the coefficients of the characteristic polynomial) of an $n$-by-$n$ matrix $M$ with ${\mathrm{poly}}(n)$-bit integer entries. This can be done in space $O(\log^2 n)$ via a result of Buntrock, Damm, Hertrampf, and Meinel (see \cite{BDMH92}).
\item
Inverting an $n$-by-$n$ matrix $M$ with ${\mathrm{poly}}(n)$-bit integer entries. This follows from \ref{det} by expressing the inverse of $M$ in terms of the determinant of $M$ and cofactor matrix of $M$.
\item
Computing all roots of a polynomial of degree $n$ with ${\mathrm{poly}}(n)$-bit integer coefficients to within precision $2^{-{\mathrm{poly}}(n)}$. This follows from a result of Neff and Reif; their algorithm uses space $O(\log^7 n)$ (see \cite{NR96}). \label{roots}
\item
Computing the eigenvalues of an $n$-by-$n$ matrix $M$ with ${\mathrm{poly}}(n)$-bit integer entries to within precision $2^{-{\mathrm{poly}}(n)}$. This follows from \ref{roots} and \ref{det} by computing the roots of the characteristic polynomial of $M$. \label{eigenvals}
\end{enumerate}
It should be noted that many of these operations, when restricted to polylogarithmic space, require (to the best of our knowledge) superpolynomial running times. In particular, the above algorithm for matrix exponentiation (and more generally, Savitch's algorithm for STCONN) requires time $O(2^{\log^2 n})$. It is open whether every function computable in polylogarithmic space can be computed simultaneously in polylogarithmic space and polynomial time. We therefore cannot ensure the same time bound as in the original statement of Theorem C in \cite{BGR}.
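For concreteness, the repeated-squaring scheme underlying \ref{matpow} can be sketched as follows (a plain Python sketch of the arithmetic only, ignoring the space-bounded bookkeeping):

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, e):
    """M^e by repeated squaring: O(log e) matrix multiplications.  Done
    recursively, the chain of multiplications has depth O(log e), which
    is what yields the O(log^2 n) space bound discussed in the text."""
    n = len(M)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        e >>= 1
    return result
```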
\subsection{Real computation}
Throughout the rest of the paper (and particularly in the next section) we will often have to work with binary representations of real numbers. We summarize in this section some common notation we use in the remainder of the paper.
A real number $x$ is \textit{given up to precision $2^{-n^c}$} if $x$ is given as an integer multiple of $2^{-n^c}$. We say $x$ is \textit{given up to precision $2^{-{\mathrm{poly}}(n)}$} if it is given up to precision $2^{-n^c}$ for some $c$. We further assume that all numbers given this way are also bounded above in magnitude by $2^{{\mathrm{poly}}(n)}$.
We say we can compute a function $f(x)$ up to precision $\delta$ if there is an algorithm which, when provided with $x$ up to a sufficiently high precision, computes a dyadic number $x'$ such that $|x'-f(x)| \leq \delta$. We say we can compute a function $f(x)$ up to precision $2^{-{\mathrm{poly}}(n)}$ in polylogarithmic (alternatively, logarithmic) space if, for each positive integer $c$, we can compute $f(x)$ up to precision $2^{-n^{c}}$ in space $O({\mathrm{poly}}(\log n))$ (alternatively, $O(\log n)$), where the degree of the polynomial is independent of $c$.
In the statement of Theorem \ref{ubnd}, we require that the function $f(x)$ is $(\log \epsilon^{-1})+\log$-space integrable. Formally, a function $f:\mathbb{R}\rightarrow\mathbb{R}$ is \textit{$S+\log$-space integrable}, if, given an interval $[a,b]$ (with $a$ and $b$ both given up to precision $2^{-{\mathrm{poly}}(n)}$) and a polynomial $p(x)$ of degree ${\mathrm{poly}}(n)$ whose coefficients are all given to precision $2^{-{\mathrm{poly}}(n)}$, it is possible to compute the integral $\int_{a}^{b} f(x)p(x)dx$ to within precision $2^{-{\mathrm{poly}}(n)}$ in space $O(S + \log n)$. In the higher dimensional case where $f$ is a function from $\mathbb{R}^{d}$ to $\mathbb{R}^{d}$, the interval $[a,b]$ is replaced by the box $[a_1, b_1]\times \dots \times [a_d, b_d]$.
Finally, we define what we mean by the computation of an invariant measure of a dynamical system. We say a measure $\mu'$ agrees with a measure $\mu$ up to precision $\delta$ if the total variation distance between $\mu$ and $\mu'$ is at most $\delta$. If measures $\mu$ and $\mu'$ are given by density functions, we will write $||\mu - \mu'||_{\infty}$ to denote the $L_{\infty}$ distance between the two density functions; note that since the size of our domain is normalized to $1$, if $||\mu - \mu'||_{\infty} \leq \delta$, then the total variation distance between $\mu$ and $\mu'$ is also at most $\delta$. We say we can compute a measure $\mu$ in space $O(S)$ if, for any interval $[a, b]$, we can approximate the weight of $\mu$ over $[a,b]$ to within precision $2^{-n}$ in space $O(S + \log n)$ (again, in the $d$-dimensional case, we replace the interval $[a,b]$ with the box $[a_1, b_1] \times \dots \times [a_d, b_d]$).
\section{Exponentiating matrices to large powers} \label{sect:matpow}
Repeated squaring allows us to compute, in polylogarithmic space, powers of matrices with $n$-bit entries up to exponents that are polynomial in $n$. Proving Theorem \ref{ubnd}, however, requires us to be able to exponentiate numbers and matrices up to exponents of size potentially exponential in $n$.
In this section we demonstrate how to raise matrices to exponentially large exponents using a polylogarithmic amount of space. In particular, we prove the following theorem.
(Throughout this section, we take the norm $||M||$ of a matrix $M$ to be the maximum norm, i.e. the maximum absolute value of an entry of $M$).
\begin{theorem}\label{matpowers}
Given an $n$ by $n$ matrix $M$ whose entries are given up to precision $2^{-{\mathrm{poly}}(n)}$ and an integer exponent $E = O(2^{{\mathrm{poly}}(n)})$, there exists an algorithm that computes $M^{E}$ in space $O({\mathrm{poly}}(\log n))$ to within precision $2^{-{\mathrm{poly}}(n)}$ if $||M^{E}|| \leq 2^{n}$ (and otherwise reports that $||M^{E}|| > 2^{n}$).
\end{theorem}
Our general approach will be to construct a polynomial $p(x)$ of degree at most $n$ such that, for each eigenvalue $\lambda$ of $M$, $p(\lambda) \approx \lambda^{E}$. It will then follow that $p(M) \approx M^{E}$.
We first show that we can reduce Theorem \ref{matpowers} to the case where $M$ is diagonalizable with $n$ distinct eigenvalues.
\begin{theorem}\label{nonsingdiag}
Given any $n$ by $n$ matrix $M$ whose entries are given up to precision $2^{-{\mathrm{poly}}(n)}$, an integer exponent $E \leq 2^{{\mathrm{poly}}(n)}$ (that satisfies $||M^{E}|| \leq 2^n$) and a precision $\delta = \Omega(2^{-{\mathrm{poly}}(n)})$, there exists an algorithm that computes in space $O({\mathrm{poly}}(\log n))$ a matrix $M_0$ with entries provided to precision $2^{-{\mathrm{poly}}(n)}$ such that $M_0$ has $n$ distinct eigenvalues and $||M^{E} - M_0^{E}|| \leq \delta$.
\end{theorem}
\begin{proof}
Let $D$ be the diagonal matrix $\mathrm{diag}(1, 2, 3, \dots, n)$, and set
\begin{equation*}
M(t) = M(1-t) + Dt
\end{equation*}
Let $p(t)$ be the discriminant of the characteristic polynomial of the matrix $M(t)$; that is, if $\lambda_i(t)$ are the roots of the characteristic polynomial of $M(t)$, then
\begin{equation}\label{eq:disc}
p(t) = \prod_{i<j} (\lambda_i(t)-\lambda_j(t))^2
\end{equation}
It is known that the discriminant of a polynomial $P(x)$ of degree $d$ can be computed as the determinant of a $(2d-1)$ by $(2d-1)$ matrix whose entries are coefficients of $P(x)$ (see for instance \cite{GKZ94}). Since the coefficients of the characteristic polynomial of $M(t)$ are in turn polynomials in the entries of $M(t)$, it follows that $p(t)$ is a polynomial in $t$. Moreover, by equation \ref{eq:disc}, scaling a matrix by some multiplicative factor $c$ multiplies the discriminant of the characteristic polynomial of this matrix by a factor of $c^{n(n-1)}$; it follows that the discriminant of the characteristic polynomial of a matrix is a homogeneous polynomial of degree $n(n-1)$ in the entries of the matrix, and therefore $p(t)$ has degree at most $n(n-1)$. Finally, since we can compute determinants and characteristic polynomials of matrices in polylogarithmic space (by remark \ref{det} in Section \ref{sect:spacebound}), we can compute $p(t)$ in polylogarithmic space.
Note that since $M(1) = D$, it follows that $p(1) = \prod_{i<j} (i-j)^2 \neq 0$, and therefore that $p(t)$ is not identically $0$. Now, let $t_0$ be the largest power of $2$ satisfying
\begin{equation*}
t_{0} = 2^{-e_0} \leq \frac{\delta}{100 n(n-1) 2^n E^2 ||D - M||}
\end{equation*}
\noindent
and consider the $n(n-1)+1$ values $t = kt_0$ where $k$ ranges from $0$ to $n(n-1)$ inclusive. Since $p(t)$ is a polynomial of degree $n(n-1)$ that is not identically $0$, it can have at most $n(n-1)$ roots, so for at least one of these choices of $k$, $p(t) \neq 0$. Since $p(t)$ is non-zero, no two eigenvalues of $M(t)$ are equal. On the other hand, for this value of $t$, note that
\begin{eqnarray*}
||M^{E} - M(t)^{E}|| &\leq & \left|\left| M^{E} - \left(M + (D-M)\frac{\delta k}{100 n(n-1) 2^{n} E^2 ||D - M||} \right)^{E}\right|\right| \\
&\approx & \left|\left| \frac{\delta k (D-M)}{100 n(n-1)2^{n} ||D-M||} \right| \right| ||M^{E-1}|| \\
& \leq & \dfrac{\delta}{100}
\end{eqnarray*}
\noindent
It therefore suffices to take $M_0 = M(t)$. Since $t_0 = 2^{-{\mathrm{poly}}(n)}$, the entries of $M_0$ are all given to precision $2^{-{\mathrm{poly}}(n)}$, as desired.
\end{proof}
We next cite the following technical lemma about the minimum distance between distinct eigenvalues of $M$.
\begin{lemma} \label{rootdist}
Let $p(x)$ be a degree $n$ polynomial whose coefficients are integers all with absolute value at most $A$. Then for any two distinct roots $r_i\neq r_j$ of $p(x)$,
\begin{equation}
|r_i - r_j| \geq 2nA^{-n^2}
\end{equation}
\end{lemma}
\begin{proof}
See \cite{Col01}.
\end{proof}
\begin{corollary}\label{eigenvaldist}
Let $M$ be an $n$ by $n$ matrix whose entries are provided to precision $2^{-{\mathrm{poly}}(n)}$ and are at most $2^{{\mathrm{poly}}(n)}$ in absolute value. Then for any two distinct eigenvalues $\lambda_i \neq \lambda_j$ of $M$, $|\lambda_i - \lambda_j| \geq 2^{-{\mathrm{poly}}(n)}$.
\end{corollary}
\begin{proof}
If the entries of $M$ are provided to within precision $2^{-a(n)}$, consider $2^{a(n)}M$. This is an integer matrix whose entries are all of size at most $2^{{\mathrm{poly}}(n)}$. It follows that the coefficients of the characteristic polynomial of this matrix have absolute value at most $2^{{\mathrm{poly}}(n)}$, and hence (by Lemma \ref{rootdist}),
\begin{equation*}
|2^{a(n)}\lambda_i - 2^{a(n)}\lambda_j| \geq 2n\left(2^{{\mathrm{poly}}(n)}\right)^{-n^2} = 2^{-{\mathrm{poly}}(n)}
\end{equation*}
\noindent
and hence
\begin{equation*}
|\lambda_i - \lambda_j| \geq 2^{-{\mathrm{poly}}(n)}
\end{equation*}
\end{proof}
Finally, we prove the following lemma bounding the size of the matrices related to the eigendecomposition of a matrix $M$.
\begin{lemma}\label{eigdecompbnd}
Let $M$ be an $n$ by $n$ non-singular matrix with distinct eigenvalues whose entries are provided to precision $2^{-{\mathrm{poly}}(n)}$, and let $D'$ be a diagonal matrix all of whose diagonal entries have absolute value at most $1$. Then if we write $M = U^{-1}DU$, the matrix $M' = U^{-1}D'U$ has entries at most $2^{{\mathrm{poly}}(n)}$.
\end{lemma}
\begin{proof}
Let $\lambda_1, \lambda_2, \dots, \lambda_n$ be the eigenvalues of $M$ (i.e. the diagonal entries of $D$), and let $\mu_1, \mu_2, \dots \mu_n$ be the diagonal entries of $D'$. Consider the polynomial $p(x)$ of degree at most $n-1$ which maps $\lambda_i$ to $\mu_i$ for each $i$. By the Lagrange interpolation theorem, we can write $p(x)$ as
\begin{equation*}
p(x) = \sum_{i=1}^{n}\mu_i\prod_{j\neq i}\dfrac{x-\lambda_j}{\lambda_i - \lambda_j}
\end{equation*}
By Corollary \ref{eigenvaldist}, for all $i\neq j$, $|\lambda_i - \lambda_j| \geq 2^{-{\mathrm{poly}}(n)}$. Combining this with the fact that $|\mu_i| \leq 1$ implies that all coefficients of $p(x)$ are at most $2^{{\mathrm{poly}}(n)}$ in absolute value.
Consider now the matrix $p(M)$. Note that since $p(D) = D'$, $p(M) = M'$. But since all the entries of $M$ are at most $2^{{\mathrm{poly}}(n)}$, the entries of $p(M)$ will be at most $2^{{\mathrm{poly}}(n)}$, and hence the entries of $M'$ are at most $2^{{\mathrm{poly}}(n)}$.
\end{proof}
We now proceed to prove Theorem \ref{matpowers}.
\begin{proof}[Proof of Theorem~\ref{matpowers}]
By Theorem \ref{nonsingdiag} we can assume without loss of generality that $M$ is diagonalizable with distinct eigenvalues. We begin by finding the eigenvalues of $M$. By remark \ref{eigenvals} of Section \ref{sect:spacebound}, it is possible in polylogarithmic space to compute the eigenvalues of $M$ to within any precision $2^{-{\mathrm{poly}}(n)}$.
Let $\lambda_1, \lambda_2, \dots, \lambda_n$ be the eigenvalues of $M$. For each $\lambda_{i}$, let $\tilde{\lambda}_i$ be our approximation to $\lambda_i$ (so that $|\tilde{\lambda}_i - \lambda_i| \leq 2^{-{\mathrm{poly}}(n)}$ for some choice of ${\mathrm{poly}}(n)$). We now construct via Lagrange interpolation the polynomial $p(x)$ such that for each $i$, $p(\tilde{\lambda}_i) = \tilde{\lambda_{i}}^{E}$ (note that by Theorem \ref{cpxpowers}, we can compute $\tilde{\lambda_{i}}^{E}$ to within any precision $2^{-{\mathrm{poly}}(n)}$ in logarithmic space). We wish to show that we can ensure (via approximating the roots with fine enough precision) that $|p(\lambda_{i}) - \lambda_{i}^{E}| \leq 2^{-{\mathrm{poly}}(n)}$ for any given choice of precision $2^{-{\mathrm{poly}}(n)}$.
To show this, first note that the Lagrange interpolation formula says that we can write $p(x)$ as
\begin{equation*}
p(x) = \sum_{i=1}^{n}\tilde{\lambda}_i^{E}\prod_{j\neq i}\dfrac{x-\tilde{\lambda}_j}{\tilde{\lambda}_i - \tilde{\lambda}_j}
\end{equation*}
Recall that, by Corollary \ref{eigenvaldist}, for all $i\neq j$, $|\tilde{\lambda}_i - \tilde{\lambda_j}| \geq 2^{-n^{a}}$, for some constant $a$. In addition, $\tilde{\lambda}_i^{E}$ is at most $||M^{E}||$ which by our assumption is at most $2^{n}$. Hence, all the coefficients of $p(x)$ have magnitude at most $2^{n}\left(2^{-n^{a}}\right)^{-n} \leq 2^{n^{a+2}}$.
Next, note that if $p(x)$ is a polynomial of degree $d$ all of whose coefficients are at most $A$ in absolute value, then
\begin{eqnarray}
|p(x) - p(y)| &\leq & A \sum_{i=0}^{d} |x^{i} - y^{i}| \\
&=& A|x-y| \sum_{i=1}^{d}\left|\sum_{j=0}^{i-1}x^{j}y^{i-1-j}\right| \\
&\leq & d^2 A \max(|x|,|y|)^{d-1} |x-y|
\end{eqnarray}
Since $|\tilde{\lambda}_i - \lambda_i| \leq 2^{-n^{b}}$ for some constant $b$ and $|\lambda_i|^{d-1} \leq |\lambda_i|^{E} \leq 2^{n}$, it follows that
\begin{equation*}
|p(\lambda_{i}) - p(\tilde{\lambda}_i)| \leq n^2 2^{n^{a+2}-n^{b}+n}
\end{equation*}
Therefore, as long as we choose $b > a+2$, $|p(\lambda_{i}) - p(\tilde{\lambda}_i)|$ will be at most $2^{-O(n^{b})}$. Since $p(\tilde{\lambda}_i) = \tilde{\lambda}_i^{E}$, and since $|\tilde{\lambda}_i^{E} - \lambda_{i}^{E}| \approx E|\tilde{\lambda}_i - \lambda_i| \leq E2^{-n^{b}}$, it follows that
\begin{equation*}
|p(\lambda_{i}) - \lambda_{i}^{E}| \leq 2^{-O(n^{b})} + E 2^{-n^{b}}
\end{equation*}
Since $E \leq 2^{{\mathrm{poly}}(n)}$, $E \leq 2^{n^{c}}$ for some $c$. For any $c'$, choosing $b = c+c'$ ensures that $|p(\lambda_i) - \lambda_{i}^{E}| \leq 2^{-n^{c'}}$, as desired.
Finally, consider the matrix $p(M)$. We claim that each entry of $p(M) - M^{E}$ has absolute value at most $2^{-{\mathrm{poly}}(n)}$. To see this, note that if we diagonalize $M$ as $M = U^{-1}DU$, where $D$ is a diagonal matrix containing the eigenvalues of $M$, then $p(M) - M^{E} = U^{-1}(p(D) - D^{E})U$. Each diagonal entry of $p(D) - D^{E}$ is of the form $p(\lambda_i) - \lambda_i^{E}$ and therefore by the above discussion has magnitude at most $2^{-n^{c'}}$, for any $c'$ of our choosing. Rewriting $p(M) - M^{E}$ in the form $2^{-n^{c'}}U^{-1}2^{n^{c'}}(p(D)-D^{E})U$ and applying Lemma \ref{eigdecompbnd}, it follows that (for sufficiently large $c'$) each entry of $p(M) - M^{E}$ also has magnitude $2^{-{\mathrm{poly}}(n)}$.
It therefore suffices to compute $p(M)$. Since we can compute the coefficients of the polynomial $p$ in polylogarithmic space and since we can compute $M^{k}$ for any $k \leq n$ in polylogarithmic space via repeated squaring, we can compute $p(M)$ in polylogarithmic space, as desired.
\end{proof}
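The core of the proof above (interpolate $x \mapsto x^{E}$ at the eigenvalues, then evaluate the interpolant at $M$) can be checked numerically on a small example. The following Python sketch uses floating point and the exact eigenvalues of a symmetric $2\times 2$ matrix, so it illustrates the identity $p(M) = M^{E}$ rather than the precision bookkeeping of the proof:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mul_linear(p, a):
    """Multiply a polynomial p (coefficients, constant term first) by (x - a)."""
    q = [0.0] * (len(p) + 1)
    for k, c in enumerate(p):
        q[k + 1] += c
        q[k] -= a * c
    return q

def lagrange_coeffs(xs, ys):
    """Coefficients of the unique polynomial of degree < len(xs) with
    p(xs[i]) = ys[i], via the Lagrange interpolation formula."""
    n = len(xs)
    coeffs = [0.0] * n
    for i in range(n):
        num, denom = [1.0], 1.0
        for j in range(n):
            if j != i:
                num = mul_linear(num, xs[j])
                denom *= xs[i] - xs[j]
        for k in range(n):
            coeffs[k] += ys[i] * num[k] / denom
    return coeffs

def poly_at_matrix(coeffs, M):
    """Evaluate sum_k coeffs[k] * M^k by Horner's rule."""
    n = len(M)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    R = [[coeffs[-1] * I[i][j] for j in range(n)] for i in range(n)]
    for c in reversed(coeffs[:-1]):
        R = mat_mul(R, M)
        R = [[R[i][j] + c * I[i][j] for j in range(n)] for i in range(n)]
    return R
```

For $M = \left(\begin{smallmatrix}0.5 & 0.1\\ 0.1 & 0.5\end{smallmatrix}\right)$, whose eigenvalues are $0.6$ and $0.4$, interpolating $x \mapsto x^{40}$ at the two eigenvalues and evaluating the interpolant at $M$ reproduces $M^{40}$ up to floating-point error, since a polynomial that agrees with $x^{E}$ on all eigenvalues of a diagonalizable matrix agrees with $M^{E}$ itself.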
\section{Computing invariant measures in small space}\label{sect:ubnd}
In this section we prove Theorem \ref{ubnd}.
This theorem can be seen as a refinement of Theorem C in \cite{BGR}. Our strategy, therefore, will be mainly to adapt the algorithm described in the proof of Theorem C, taking care to implement each step in polylogarithmic space.
For completeness, we will first describe the algorithm presented in \cite{BGR}. We defer the analysis of this algorithm to the original paper.
Recall that Theorem C states
\begin{theorem}
Let $S_{\epsilon}$ be a computable dynamical system defined by a continuous function $f$ from a compact space $M$ to itself and a Gaussian noise kernel $p_{f(x)}^{\epsilon}(\cdot)$. Assume also that $f$ is polynomial-time integrable (i.e. it is possible to integrate the convolution of powers of $f$ with polynomial functions in polynomial time). Then $\mu$ can be computed to within precision $\delta < O(\epsilon)$ in time and space $O_{S,\epsilon}({\mathrm{poly}}(\log(1/\delta)))$.
\end{theorem}
The algorithm used in the proof of Theorem C proceeds as follows.
\begin{enumerate}
\item
Begin by partitioning $M$ into $A$ regions $\mathfrak{a}_i$ each with diameter at most $\epsilon$. Assign each region a center $x_i \in \mathfrak{a}_i$.
\item
Let $\mu^{(t)}(x)$ be the probability density function of the system at time $t$ (given some arbitrary initial distribution $\mu^{(0)}(x)$). Then on each of the regions $\mathfrak{a}_{i}$, $\mu^{(t)}(x)$ can be written as a Taylor series in $(x-x_i)$. In particular, we have that
\begin{equation*}
\mu^{(t)}(x) = \sum_{i=1}^{A}\mathbf{1}\{x \in \mathfrak{a}_i\} \sum_{k=0}^{\infty} \rho_{i,k}^{(t)}(x-x_i)^k
\end{equation*}
\noindent
where $\rho_{i,k} \in \mathbb{R}$ are the coefficients of these Taylor series. The coefficients at time $t+1$ are related to the coefficients at time $t$ via the following linear map.
\begin{equation*}
\rho_{i,l}^{(t+1)} = \sum_{j, m}\rho_{j, m}^{(t)}\int_{\mathfrak{a}_j}(y-x_j)^{m}\frac{\partial^{l}_2 K_{f}(y, x_i)}{l!}dy
\end{equation*}
Call this linear map $P$. The coefficients of $P$ can then be computed to arbitrary precision by computing convolutions of derivatives of the noise kernel with certain polynomials (which is possible in polynomial time by our assumption).
\item
For any positive integer $N$, ignoring all terms in the Taylor expansion of degree larger than $N$ truncates the transition map $P$ to form a finite linear map $P_{N}$ (representable as an $AN$ by $AN$ matrix). The analysis in \cite{BGR} proves the following lemma.
\begin{lemma}
There exist log-space computable functions $t(\delta)$ and $N(\delta)$ such that
\begin{equation*}
|| \pi - P^{t(\delta)}_{N(\delta)}\rho||_{\infty} \leq \delta
\end{equation*}
\noindent
for all $\delta > 0$, uniformly in $\rho$, where
\begin{eqnarray*}
t(\delta) &=& O\left(\log \delta^{-1}\exp\left(\epsilon^{-2}\right)\right) \\
N(\delta) &=& O\left(\log \delta^{-1}\,{\mathrm{poly}}\left(\epsilon^{-1}\right) \right)
\end{eqnarray*}
\end{lemma}
\begin{proof}
See Theorem 36 in \cite{BGR}. Explicit expressions for $t(\delta)$ and $N(\delta)$ can be found in the proof of Theorem 36.
\end{proof}
By repeated squaring, we can compute $P_{N(\delta)}^{t(\delta)}$ in time $O({\mathrm{poly}}(N(\delta))\log t(\delta)) = O_{\epsilon}({\mathrm{poly}}(\log \delta^{-1}))$. The above lemma implies that the measure given by $P_{N(\delta)}^{t(\delta)}$ is within $\delta$ of the invariant measure $\mu$, as desired.
\end{enumerate}
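A zeroth-order version of this algorithm (truncating the Taylor expansions at $N = 0$, i.e. using piecewise-constant densities) is easy to simulate numerically. The Python sketch below builds the $A \times A$ truncated transition matrix for Gaussian noise and iterates the pushforward; the choice of map, noise level, and grid size are our own illustrative choices:

```python
import math

def transfer_matrix(f, eps, A):
    """N = 0 (piecewise-constant) truncation of the transfer operator:
    row j holds the distribution of the next cell when the current state
    is the center of cell j and the noise is a Gaussian of variance eps^2
    truncated to [0,1]."""
    h = 1.0 / A
    centers = [(i + 0.5) * h for i in range(A)]
    P = []
    for j in range(A):
        c = f(centers[j])
        w = [math.exp(-(x - c) ** 2 / (2 * eps ** 2)) for x in centers]
        Z = sum(w)
        P.append([wi / Z for wi in w])  # row-stochastic
    return P

def invariant_measure(f, eps, A, iters=200):
    """Approximate the invariant measure by iterating the pushforward from
    the uniform distribution.  (The paper instead raises the matrix to a
    large power directly, which is the step requiring the matrix-power
    machinery of the previous section.)"""
    P = transfer_matrix(f, eps, A)
    mu = [1.0 / A] * A
    for _ in range(iters):
        mu = [sum(mu[j] * P[j][i] for j in range(A)) for i in range(A)]
    return mu
```

With a noise level like $\epsilon = 0.1$ the chain mixes quickly, so a few hundred iterations already land very close to the fixed point of the truncated operator.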
We now proceed to prove Theorem \ref{ubnd}. As in Theorem C in \cite{BGR}, we initially restrict ourselves to the one-dimensional case for clarity. We later describe the changes necessary for the $d$-dimensional case.
\newtheorem*{thm:ubnd}{\bf Theorem \ref{ubnd}}
\begin{thm:ubnd} {\em
Let $X=[0,1]$.
If the noise $p_{f(x)}^{\epsilon}(\cdot)$ is Gaussian, and $f$ is $(\log \frac{1}{\epsilon})+$log-space integrable, then the computation of the invariant measure $\mu$ at precision $\delta$ can be done in space $O\left({\mathrm{poly}}\left(\log\frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)\right)$.}
\end{thm:ubnd}
\begin{proof}
We describe how to adapt the algorithm presented above so that it can be performed in poly-logarithmic space.
In order to show we can execute the above approach in polylogarithmic space, we must show we can both compute the coefficients of the matrix $P$ to within ${\mathrm{poly}}(\delta)$ accuracy and that we can then subsequently exponentiate the truncated matrix $P_{N(\delta)}$ to the power $t(\delta)$. Note that the coefficients of $P$ are given by the expression
\begin{equation*}
P^{(i,j)}(l, m) = \int_{\mathfrak{a}_j}(y-x_j)^m\frac{\partial^{l}_2 K_{f}(y, x_i)}{l!}dy
\end{equation*}
\noindent
In the case of a Gaussian kernel,
\begin{equation*}
K_{f}(y, x_i) = C_{\epsilon}(x_i)\frac{1}{\epsilon\sqrt{2\pi}}\exp\left(-(f(y)-x_i)^2/2\epsilon^2\right)
\end{equation*}
We can expand this expression out via the Taylor series for $\exp(x)$. Since $(f(y) - x_i)$ is bounded (by the diameter of $M$, for example), to approximate this integral to within $\delta$, it suffices to take the first ${\mathrm{poly}}\left(\frac{1}{\epsilon}+\log\frac{1}{\delta}\right)$ terms of this expansion. We can therefore approximate $P^{(i,j)}(l,m)$ as a linear combination of ${\mathrm{poly}}\left(\frac{1}{\epsilon}+\log\frac{1}{\delta}\right)$ terms of the form
\begin{equation*}
\int_{\mathfrak{a}_j}(y-x_j)^mf(y)^k dy
\end{equation*}
By our assumption, we can evaluate each of these integrals (to within precision ${\mathrm{poly}}(\delta)$) in space $O\left(\log\frac{1}{\epsilon} + \log\log \frac{1}{\delta}\right)$. The coefficients of the linear combination can also each be computed in space complexity $O\left({\mathrm{poly}}\left(\log \frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)\right)$ via the comments in Section \ref{sect:spacebound} (in particular, \ref{facts} and \ref{polys}), and hence the entire linear combination can be computed in this space complexity. The normalization constant $C_{\epsilon}(x_i)$ can similarly be computed in this space complexity by expanding $\exp\left(-(y - x_i)^2/2\epsilon^2\right)$ as a Taylor series in $y$ and integrating over $[0,1]$.
Finally, we must compute $P_{N(\delta)}^{t(\delta)}$. Note that since $t(\delta)$ is exponential in $\epsilon^{-2}$, we cannot compute $P_{N(\delta)}^{t(\delta)}$ in space $O({\mathrm{poly}}(\log \log \delta^{-1} + \log\epsilon^{-1}))$ via repeated squaring, as in the proof of Theorem C. Instead, we apply the algorithm presented in Section \ref{sect:matpow}; by Theorem \ref{matpowers}, this allows us to compute $P_{N(\delta)}^{t(\delta)}$ to within precision $\delta$ in polylogarithmic space.
Since $P_{N(\delta)}^{t(\delta)}$ is an $AN(\delta)$ by $AN(\delta)$ matrix, where $A = O({\mathrm{poly}}(1/\epsilon))$ and $N(\delta) = O({\mathrm{poly}}(\log(1/\delta) + 1/\epsilon))$, it follows that computing $P_{N(\delta)}^{t(\delta)}$ can be done in total space complexity
$$O({\mathrm{poly}}(\log AN(\delta))) = O\left({\mathrm{poly}}\left(\log \frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)\right).$$
\end{proof}
\begin{remark}\label{rem:dimension}
To extend this result to the case of $d$ dimensions, we can follow essentially the same procedure; the only change is that we now must write the density functions $\mu^{(t)}(x)$ as multivariate Taylor series in $(\mathbf{x} - \mathbf{x_i})$ (and each of the components of $f$ must be $(\log \epsilon^{-1}) + \log$-space integrable). Since the number of terms in the multivariate Taylor expansion of degree at most $N$ is $O(N^{d})$, the truncated matrix $P_{N}$ still has size polynomial in $\frac{1}{\epsilon}$ and $\log\frac{1}{\delta}$, so the invariant measure can be computed in space complexity
\begin{equation}
{\mathrm{poly}}\left(d + \log \frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right).
\end{equation}
\end{remark}
\subsection{Computing Taylor coefficients of $f$}
Theorem \ref{ubnd} relies on the assumption that convolutions of powers of $f$ with polynomials are log-space integrable. While this assumption holds for many natural choices of $f$, it is perhaps not the easiest condition to work with, and one might hope for a more natural constraint on $f$. In this section, we show an alternative constraint which implies our previous assumption; namely, that $f$ is log-space computable, smooth, and has bounded Taylor coefficients. Recall that $f$ is log-space computable if, given $x$ on the input tape, $f(x)$ can be computed to within precision $2^{-n}$ using space $O(\log n)$.
We prove the following theorem\footnote{A similar theorem holds under the assumption that $f$ is computable in polylogarithmic space. The conclusion is then that the integrals are also computable in polylogarithmic space --- which suffices to obtain the conclusion of Theorem~\ref{ubnd}.}.
\begin{theorem}\label{analytic}
Let $f$ be a function that is log-space computable, smooth, and for some constant $\eta$, satisfies (for all $x$)
\begin{equation*}
|\partial^{k}f(x)| \leq k!\eta^{k}
\end{equation*}
\noindent
Then it is possible to compute integrals of the form
\begin{equation*}
\int_{\mathfrak{a}_j} (y-x_{j})^mf(y)^k dy
\end{equation*}
\noindent
where ${\mathrm{diam}\,} \mathfrak{a}_j < \frac{1}{2\eta}$ to within precision $\delta$ in space logarithmic in $m$, $k$, $\log \eta$ and $\log 1/\delta$.
\end{theorem}
\begin{remark}
We note that if $f$ is analytic in $[a,b]$, then such a constant $\eta$ always exists. In fact, if we let $\rho$ be a strict lower bound on the set of all radii of convergence of the Taylor series of $f$ with centers in $[a,b]$ (note that $\rho>0$ by compactness), then Cauchy's integral formula implies that, for any $x\in[a,b]$
\begin{equation*}
|\partial^{k}f(x)| \leq \frac{Mk!}{\rho^{k}}
\end{equation*}
where $M$ is any upper bound of $|f|$ over $[a,b]^{\rho}=\{z\in\mathbb{C}: |z-x|\leq\rho \text{ for some }x\in [a,b]\}$.
\end{remark}
To prove the above theorem, we first show that if $f$ satisfies the above constraints, then it is possible to compute its Taylor coefficients in logarithmic space.
\begin{lemma} \label{taylor}
Assume $f$ is log-space computable, smooth, and satisfies $|\partial^{k}f(x)| \leq k!\eta^{k}$ for all $x$ in the domain. Then for any $x_c$, we can write
\begin{equation*}
f(x) = \sum_{k} a_{k}(x-x_c)^k
\end{equation*}
\noindent
The value of $a_{k}$ is then computable to within precision $\delta$ in space logarithmic in $k$, $\log \eta$, and $\log 1/\delta$.
\end{lemma}
\begin{proof}
We claim that if we choose $\tau = \delta \eta^{-(k+1)}k^{-(k+2)}2^{-k}$, then
\begin{equation}\label{coeffbound}
\left| \dfrac{\sum_{i=0}^{k}f(x_c+i\tau)(-1)^{k-i}\binom{k}{i}}{k!\tau^k} - a_k\right| \leq \delta
\end{equation}
Note that since $f$ is computable in log-space, the quantity on the LHS of equation \ref{coeffbound} is computable in space $O(\log k + \log\log 1/\tau) = O(\log k + \log\log \eta + \log\log 1/\delta)$, as desired.
To prove equation \ref{coeffbound}, recall that the Lagrange remainder theorem for Taylor series says that for any $x$ (within the radius of convergence of the Taylor series about $x_c$), we can write
\begin{equation*}
f(x) = \left(\sum_{i=0}^{k} a_{i}(x-x_c)^i\right) + \dfrac{f^{(k+1)}(\xi)}{(k+1)!}(x-x_c)^{k+1}
\end{equation*}
\noindent
for some $\xi$ between $x_c$ and $x$. Write $f(x) = \left(\sum_{i=0}^{k} a_{i}(x-x_c)^i\right) + R_{k+1}(x)$. By our constraint on $f$, we know that $\left|\frac{f^{(k+1)}(\xi)}{(k+1)!}\right| \leq \eta^{k+1}$, so we can rewrite this as
\begin{equation*}
\left| R_{k+1}(x) \right| \leq \eta^{k+1}|x-x_c|^{k+1}
\end{equation*}
Next, recall the following binomial identities. For all $r < k$, we have that
\begin{equation*}
\sum_{i=0}^{k}i^{r}(-1)^{k-i}\binom{k}{i} = 0
\end{equation*}
\noindent
On the other hand, when $r=k$, we have that
\begin{equation*}
\sum_{i=0}^{k}i^{k}(-1)^{k-i}\binom{k}{i} = k!
\end{equation*}
Substituting the Taylor expansion of $f$ about $x_c$ (evaluated at the points $x_c+i\tau$, so that $|R_{k+1}(x_c+i\tau)| \leq \eta^{k+1}(i\tau)^{k+1}$) and applying the above binomial identities, we see that
\begin{eqnarray*}
\left|\dfrac{\sum_{i=0}^{k}f(x_c+i\tau)(-1)^{k-i}\binom{k}{i}}{k!\tau^k} - a_k\right| &=& \left|\dfrac{\sum_{i=0}^{k}R_{k+1}(x_c+i\tau)(-1)^{k-i}\binom{k}{i}}{k!\tau^k} \right| \\
&\leq & \dfrac{1}{k!\tau^k}\sum_{i=0}^{k}\left|\eta^{k+1}i^{k+1}\tau^{k+1}\binom{k}{i}\right| \\
&\leq & \dfrac{\tau \eta^{k+1}}{k!}\sum_{i=0}^{k}k^{k+1}2^{k} \\
&\leq & \dfrac{\tau \eta^{k+1}k^{k+2}2^{k}}{k!} \\
&\leq & \delta
\end{eqnarray*}
\noindent
as desired.
\end{proof}
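As a quick numerical sanity check (not part of the proof), both the alternating binomial identities and the finite-difference estimate of the lemma can be exercised for a concrete analytic function. The sketch below uses $f = \exp$ (so that $a_k = 1/k!$ about $x_c = 0$) and a moderate step $\tau$; the far smaller $\tau$ prescribed in the proof is impractical in floating point, so this illustrates the formula rather than the stated precision.

```python
# Sanity checks (illustration only) for the lemma's proof:
#  (1) the alternating binomial identities used to isolate a_k, and
#  (2) the finite-difference estimate of the Taylor coefficient a_k.
from math import comb, factorial, exp

def alternating_moment(k: int, r: int) -> int:
    """sum_{i=0}^{k} i^r (-1)^(k-i) binom(k,i): 0 for r < k, and k! for r = k."""
    return sum(i**r * (-1) ** (k - i) * comb(k, i) for i in range(k + 1))

for k in range(1, 8):
    assert all(alternating_moment(k, r) == 0 for r in range(k))
    assert alternating_moment(k, k) == factorial(k)

def taylor_coeff_estimate(f, x_c: float, k: int, tau: float) -> float:
    """Finite-difference quotient approximating the k-th Taylor coefficient at x_c."""
    num = sum(f(x_c + i * tau) * (-1) ** (k - i) * comb(k, i)
              for i in range(k + 1))
    return num / (factorial(k) * tau**k)

# exp has a_k = 1/k! about x_c = 0; the error of the forward difference is O(tau).
est = taylor_coeff_estimate(exp, 0.0, 3, 1e-2)
assert abs(est - 1 / factorial(3)) < 1e-2
```

The exact integer check of the identities mirrors how the finite difference annihilates all Taylor terms of degree below $k$ and picks out $a_k$ exactly.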
We can now prove Theorem \ref{analytic}.
\begin{proof}[Proof of Theorem~\ref{analytic}]
We wish to compute the integral
\begin{equation*}
\int_{\mathfrak{a}_j}(y-x_j)^{m}f(y)^k dy
\end{equation*}
\noindent where we know that ${\mathrm{diam}\,} \mathfrak{a}_j < \frac{1}{2\eta}$. Write $f(y) = \sum a_i(y-x_j)^i$; by our assumption $|a_i| \leq \eta^{i}$ for all $i$.
Let $f_{M}(y) = \sum_{i=0}^{M} a_i(y-x_j)^i$. Then we have that
\begin{eqnarray*}
|f(y) - f_M(y)| &=& \left |\sum_{i=M+1}^{\infty} a_i(y-x_j)^{i}\right|\\
&\leq & \sum_{i=M+1}^{\infty}|a_i|\cdot |y-x_j|^{i} \\
&\leq & \sum_{i=M+1}^{\infty} \eta^{i}(2\eta)^{-i} \\
&=& \sum_{i=M+1}^{\infty} 2^{-i} \\
&=& 2^{-M}
\end{eqnarray*}
Since $f(y) \in [0,1]$, this further implies that $|f(y)^k - f_{M}(y)^k| \leq k2^{-M}$; it follows that if we take $M = \log(k/\delta)$, then $|f(y)^{k} -f_{M}(y)^{k}| \leq \delta$, and in particular
\begin{equation*}
\left| \int_{\mathfrak{a}_j}(y-x_j)^{m}f(y)^k dy - \int_{\mathfrak{a}_j}(y-x_j)^{m}f_M(y)^k dy\right| \leq \delta
\end{equation*}
But note that by Lemma \ref{taylor}, we can compute each of the coefficients of $f_{M}(y)$ (to within precision ${\mathrm{poly}}(\delta)$) in space logarithmic in $M$, $\log \eta$, and $\log 1/\delta$. We can then compute the coefficients of $(y-x_j)^mf_{M}(y)^k$ via Remark \ref{polys} of Section \ref{sect:spacebound}, and hence compute the integral over $\mathfrak{a}_j$ in space $O(\log k + \log m + \log \log \eta + \log\log 1/\delta)$, as desired.
\end{proof}
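The integration scheme in this proof is easy to exercise numerically. The sketch below is an illustration only, with $f(y) = e^{-y}$ and concrete parameters chosen for the demo: it truncates the Taylor expansion about $x_j$, raises the truncated polynomial to the $k$-th power by convolution, integrates term by term, and compares against a direct midpoint Riemann sum. (The lemma's point is that the coefficient arithmetic can be done in small space; here ordinary floating point suffices to check the values.)

```python
# Illustration (not the low-space algorithm itself): truncated-Taylor
# integration of (y - x_j)^m f(y)^k over [a, b] for f(y) = exp(-y).
from math import exp, factorial

def truncated_integral(a, b, x_j, m, k, M):
    # Taylor coefficients of exp(-y) about x_j: a_i = exp(-x_j) (-1)^i / i!
    coeffs = [exp(-x_j) * (-1) ** i / factorial(i) for i in range(M + 1)]
    # Raise to the k-th power by repeated convolution, truncating at degree M.
    poly = [1.0]
    for _ in range(k):
        new = [0.0] * (M + 1)
        for i, p in enumerate(poly):
            for j, c in enumerate(coeffs):
                if i + j <= M:
                    new[i + j] += p * c
        poly = new
    # Integrate (y - x_j)^m * poly(y - x_j) over [a, b] term by term.
    lo, hi = a - x_j, b - x_j
    return sum(c * (hi ** (m + i + 1) - lo ** (m + i + 1)) / (m + i + 1)
               for i, c in enumerate(poly))

def riemann(a, b, x_j, m, k, n=20000):
    # Midpoint rule as an independent reference value.
    h = (b - a) / n
    return sum((a + (t + 0.5) * h - x_j) ** m * exp(-(a + (t + 0.5) * h)) ** k
               for t in range(n)) * h

val = truncated_integral(0.0, 0.25, 0.125, m=2, k=3, M=12)
assert abs(val - riemann(0.0, 0.25, 0.125, 2, 3)) < 1e-6
```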
Theorem \ref{ubndalt} now follows as a straightforward corollary to Theorem \ref{analytic}.
\newtheorem*{thm:ubndalt}{\bf Theorem \ref{ubndalt}}
\begin{thm:ubndalt} {\em
Let $X=[0,1]$.
If the noise $p_{f(x)}^{\epsilon}(\cdot)$ is Gaussian, and $f$ is log-space computable, smooth, and (for some $\eta>0$) satisfies $|\partial^{k}f(x)| \leq k!\eta^{k}$ for all $x$, then the computation of the invariant measure $\mu$ at precision $\delta$ can be done in space $O\left({\mathrm{poly}}\left(\log \eta + \log\frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)\right)$. }
\end{thm:ubndalt}
\begin{proof}
In the proof of Theorem \ref{ubnd}, we make the slight modification that instead of simply picking the regions $\mathfrak{a}_i$ to satisfy ${\mathrm{diam}\,} \mathfrak{a}_i \leq \epsilon$, we instead impose the stronger requirement that ${\mathrm{diam}\,} \mathfrak{a}_i \leq \min(\epsilon, 1/(2\eta))$. Then, by Theorem \ref{analytic}, we can compute all the necessary integrals in logarithmic space, as before.
Since the number of regions is linear in $\eta$, the resulting space bound is $O({\mathrm{poly}}(\log\eta + \log \frac{1}{\epsilon} + \log\log\frac{1}{\delta}))$, as desired.
\end{proof}
\section{Space lower bound for computing invariant measures}\label{sect:lbnd}
In this section we prove Theorem \ref{lbnd}. We begin by proving a weaker version of Theorem \ref{lbnd} where we don't restrict our constructed function $f$ to be analytic (or even continuous).
\begin{lemma}\label{lbndpre}
Any algorithm that can compute the invariant measure $\mu$ of a dynamical system to within precision $\delta$ with Gaussian noise kernel $p_{f(x)}^{\epsilon}(\cdot)$ requires space at least $\Omega\left(\log\frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)$.
\end{lemma}
\begin{proof}
Since our output is of size $\log\frac{1}{\delta}$, it requires $\Omega\left(\log\log\frac{1}{\delta}\right)$ space to simply keep track of which bit we are currently outputting. This immediately shows the $\Omega\left(\log\log\frac{1}{\delta}\right)$ part of the lower bound.
It remains to show the $\Omega\left(\log\frac{1}{\epsilon}\right)$ portion of the lower bound. We will present a $SPACE(\log M)$-reduction from $SPACE(M)$ to the problem of computing the invariant measure of a noisy dynamical system $S_{\epsilon}$ with $\epsilon = 2^{-\Theta(M)}$, thus showing computing the invariant measure of a noisy dynamical system requires space at least $\Omega(\log\frac{1}{\epsilon})$.
More specifically, we will show how to convert any Turing machine $T$ with tape size $M$ along with an input $s$ into a function $f:X\rightarrow X$ that `embeds' this machine/input pair. We will construct this embedding so that the invariant measure of the corresponding dynamical system will have significant measure on some subset of the domain $X$ if $T$ accepts $s$ and close to zero measure otherwise.
Let $S$ be the total number of states of the Turing machine $T$ (where a ``state'' includes the current contents of the tape, so $S = \Theta(2^{M})$), and let $N = 2S^2$. Choose $X$ to be the unit interval $[0, 1]$, and partition $X$ into the $N$ intervals $X_{k} = [\frac{k}{N}, \frac{k+1}{N}]$ for $0 \leq k < N$. Let $c_k = \frac{2k+1}{2N}$ be the center of interval $X_k$.
Choose $\epsilon$ (the standard deviation of the Gaussian noise) so that $\int_{-1/2N}^{1/2N}p_{\epsilon}(x)dx = 1-N^{-100}$; since the tail of a Gaussian decreases to $0$ exponentially quickly, it suffices to take $\epsilon = \Omega(N^{-2}) = 2^{-O(M)}$ (the complement of this integral is then the probability of being at least $\Omega(N)$ standard deviations away from the mean).
Finally, if $x\in X_{k}$, then we define $f$ so that $f(x) = c_{\suc(k)}$, where $\suc(k):\{0, \dots, N-1\} \rightarrow \{0, \dots, N-1\}$ is defined as follows.
\begin{enumerate}[(i)]
\item
If $k < S^2$, set $(v, t) = \left(\lfloor\frac{k}{S}\rfloor, k - S\lfloor\frac{k}{S}\rfloor\right)$. We will interpret $v$ as the binary representation of some state of $T$, and $t$ as a counter of how many steps we have run machine $T$ for so far.
\begin{enumerate}
\item
If $v$ is an accepting state, set $\suc(k) = S^2$.
\item
If $v$ is a rejecting state, set $\suc(k) = sS$, where $s$ is the initial state of the Turing machine $T$.
\item
If $t<S-1$ and $v$ is neither an accepting nor a rejecting state, find the successor state $v'$ of $v$ according to the Turing machine $T$ (note that since computation is local, this can be done in space $O(\log M)$), and set $\suc(k) = v'S + (t+1)$.
\item
If $t=S-1$, set $\suc(k) = sS$, where $s$ is the initial state of the Turing machine $T$.
\end{enumerate}
\item
If $S^2 \leq k < 2S^2 - 1$, then $\suc(k) = k+1$.
\item
If $k = 2S^2-1$, then $\suc(k) = sS$, where $s$ is the initial state of $T$.
\end{enumerate}
Intuitively, this function $f$ simulates the Turing machine $T$ for up to $S$ time steps (the maximum amount of time a Turing machine with $S$ states can take to reach an accepting state). If, within these $S$ time steps, we encounter an accepting state, we go on a walk for another $S$ time steps through $[1/2, 1]$ and then return to the initial state; otherwise, if we encounter a rejecting state (or run for $S$ steps without accepting or rejecting), we immediately return to the initial state. In this way, if $T$ accepts on $s$, the invariant measure will have approximately half of its weight on the interval $[1/2, 1]$, and if $T$ does not accept on $s$, the invariant measure will have approximately no weight on $[1/2, 1]$. We formalize this intuition below.
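This behavior can be seen in a toy simulation. The sketch below is an illustration only, not the reduction itself: a simple cyclic successor map on $N$ intervals stands in for the embedded machine (an ``accepting'' run that walks through the second half of $[0,1]$), and with noise far smaller than the interval width, roughly half of the empirical mass lands in $[1/2, 1]$.

```python
# Toy simulation (illustration only -- NOT the actual reduction): a cyclic
# successor map on N intervals; the dynamics x_{t+1} = c_{suc(k)} + noise
# spends about half its time in the second half of [0, 1].
import random

random.seed(0)
N = 8
centers = [(2 * k + 1) / (2 * N) for k in range(N)]

def suc(k: int) -> int:
    return (k + 1) % N               # cycle through all N intervals

eps = 1e-3                           # noise std dev << interval width 1/N
x, hits, T = centers[0], 0, 20000
for _ in range(T):
    k = min(max(int(x * N), 0), N - 1)   # index of the interval containing x
    x = centers[suc(k)] + random.gauss(0.0, eps)
    hits += x >= 0.5
frac = hits / T
assert 0.4 < frac < 0.6              # about half the mass is in [1/2, 1]
```

In the actual reduction the successor map encodes the machine's transition function, and a non-accepting computation never enters $[1/2, 1]$ at all.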
Let $\mu$ be the invariant measure of this dynamical system perturbed by Gaussian noise of variance $\epsilon^2$ with $\epsilon$ as chosen above (note that since the noise is Gaussian, there must be a unique invariant measure; this follows from the fact that for any set $U$ of positive measure, the probability $x_{t+1} \in U$ given $x_{t}$ is always strictly positive). We claim that if $T$ eventually accepts on $s$, then $\mu$ will have measure at least $1/3$ on $[1/2, 1]$. Otherwise, $\mu$ will have measure approximately $0$ on $[1/2, 1]$.
Let $\mathcal{S} = \{sS, \suc(sS), \suc(\suc(sS)), \dots \}$ be the set of iterates of the initial state $s$ of our Turing machine under this successor function. Note that if $T$ accepts starting on $s$, then $\{S^2, \dots, 2S^2-1\}$ is a subset of $\mathcal{S}$; otherwise, if it rejects or fails to halt, then $\{S^2, \dots, 2S^2-1\}$ is not a subset of $\mathcal{S}$.
We first claim that the weight of the invariant measure $\mu$ over states in $\mathcal{S}$ is at least $1-N^{-99}$. To see this, let $x_1, x_2, \dots$ be a sequence of iterates of our dynamical system. Call a time $t$ \textit{bad} if $x_{t} \in X_{k}$ but $x_{t+1} \not\in X_{\suc(k)}$. By our choice of $\epsilon$, the probability of any given time $t$ being bad is at most $N^{-100}$ and is independent of all other times being bad. In addition, by our construction, after $N$ noise-free steps we are guaranteed to be in $\mathcal{S}$, since after $N$ steps of $\suc(k)$ we must pass through $sS$. If we let $X_{\mathcal{S}} = \cup_{k\in \mathcal{S}} X_k$, it then follows that the probability that $x_{t} \in X_{\mathcal{S}}$ is at least $(1-N^{-100})^{N} \geq 1-N^{-99}$.
Next, assume that $T$ accepts on $s$, and let $X_{path} = \cup_{k=S^2}^{2S^2-1}X_{k} = [1/2, 1]$; note that $X_{path}$ is a subset of $X_{\mathcal{S}}$. We claim that the weight under the measure $\mu$ of $X_{path}$ is at least $\frac{1}{2}(1-2S^{-9})$ of the weight of $X_{\mathcal{S}}$. To see this, call the sequence $x_{t}, x_{t+1}, \dots, x_{t+|\mathcal{S}|}$ \textit{good} if no time $t+i$ is bad for any $0\leq i < |\mathcal{S}|$ (in other words, no low probability noise events occur for $|\mathcal{S}|$ steps). Note that this occurs with probability at least $(1-N^{-100})^{N} \geq 1-N^{-99}$. But in any good sequence, each element of $\mathcal{S}$ appears exactly once; it follows that, asymptotically, the probability that $x_{t}$ belongs to $X_{path}$ given that $x_{t}$ belongs to $X_{\mathcal{S}}$ is at least
\begin{equation*}
(1-N^{-99})\frac{S^2}{|\mathcal{S}|} \geq (1-N^{-99})\frac{S^2}{2S^2} = \frac{1}{2}(1-N^{-99})
\end{equation*}
Combining these two results, it follows that the weight of the invariant measure over $X_{path}$ is at least
\begin{equation*}
\frac{1}{2}(1-N^{-99})^2 > \frac{1}{3}
\end{equation*}
\noindent
On the other hand, if $T$ does not accept on $s$, then $[1/2, 1] \cap X_{\mathcal{S}} = \emptyset$, and therefore the weight of $\mu$ over $[1/2, 1]$ is at most $N^{-99} \ll 1/3$, as desired.
\end{proof}
Note that, since the function constructed in this reduction is piecewise linear with $O(2^M)$ pieces, it is in fact $(\log \epsilon^{-1}) + \log$-space integrable in the sense of Theorem \ref{ubnd}. On the other hand, this function is not continuous (let alone analytic), and hence does not satisfy the conditions of Theorem \ref{ubndalt}.
To prove Theorem \ref{lbnd}, we transform the above example into a uniformly analytic function by replacing each of the intervals in the construction in Lemma \ref{lbndpre} with an analytic approximation to a step function. We describe this below, starting with the construction of our analytic `step function'.
\begin{lemma}\label{step}
For any $\alpha, \beta > 0$, there exists an analytic function $F(x):\mathbb{R}\rightarrow\mathbb{R}$ that satisfies the following constraints:
\begin{itemize}
\item For all $x < -\alpha$, $|F(x)| < \beta$.
\item For all $x > \alpha$, $|F(x)-1| < \beta$.
\item For all integers $k \geq 0$ and all $x$, $|\partial^{k}F(x)| \leq k!\eta^{k}$ for some $\eta = O(\alpha^{-1}\log\beta^{-1})$.
\item The function $F(x)$ is computable to within precision $\delta$ in space $O(\log\log \delta^{-1})$.
\end{itemize}
\end{lemma}
\begin{proof}
We will consider functions of the form
\begin{equation}
F(x) = \frac{1}{1+e^{-Cx}}
\end{equation}
\noindent
where $C$ is a positive integer. Note that in order for $|F(x)-1|$ to be less than $\beta$ for all $x>\alpha$, we must have
\begin{equation*}
\left|\frac{1}{1+e^{-C\alpha}} -1\right| < \beta
\end{equation*}
\noindent
which is satisfied when
\begin{equation*}
C > \alpha^{-1}\log\frac{1-\beta}{\beta}
\end{equation*}
Likewise, in order for $|F(x)|$ to be less than $\beta$ when $x<-\alpha$, we must have
\begin{equation*}
\left|\frac{1}{1 + e^{C\alpha}}\right| < \beta
\end{equation*}
\noindent
which is satisfied when
\begin{equation*}
C > \alpha^{-1}\log\frac{1-\beta}{\beta}
\end{equation*}
\noindent
Therefore to satisfy the first two requirements, we can take
\begin{equation*}
C = \left\lceil \alpha^{-1}\log\frac{1-\beta}{\beta} \right\rceil \approx \alpha^{-1}\log\beta^{-1}
\end{equation*}
To prove the third requirement, note that we can write
\begin{equation*}
F(x) = \frac{1}{2}\left(1 + \tanh\left(\frac{Cx}{2}\right)\right)
\end{equation*}
\noindent
By \cite{AS65}, it is known that (for $x\geq 0$),
\begin{eqnarray*}
\left|\dfrac{d^k\tanh(x)}{dx^k}\right| &=& \frac{2^{k+1}e^{2x}}{(1+e^{2x})^{k+1}}\left|\sum_{j=0}^{k-1}\left\langle {k \atop j} \right\rangle (-1)^{j} e^{2jx}\right| \\
&\leq & \frac{2^{k+1}e^{2(k+1)x}}{(1+e^{2x})^{k+1}}\sum_{j=0}^{k-1}\left\langle {k \atop j} \right\rangle \\
&=& 2^{k+1}\left(\frac{e^{2x}}{1+e^{2x}}\right)^{k+1} k! \\
&\leq & 2^{k+1} k!
\end{eqnarray*}
\noindent
where $\left\langle {n \atop i} \right\rangle$ are the Eulerian numbers (in the third line we use the fact that $\sum_{i} \left\langle {n \atop i} \right\rangle = n!$). Since $\tanh(x)$ is an odd function, the same bound holds for $x \leq 0$. It follows that for all $k > 0$ and all $x$,
\begin{equation*}
|\partial^{k}F(x)| \leq C^{k} k!
\end{equation*}
\noindent
and therefore we can take $\eta = C$ (for $k=0$, it suffices to note that $|F(x)| \leq 1$ for all $x$).
Finally, since we can compute $e^{x}$ to within precision $\delta$ in space $O(\log\log \delta^{-1})$ via Lemma \ref{compexp}, and since we can perform all arithmetic operations to within precision $\delta$ in space $O(\log\log\delta^{-1})$ via the remarks in Section \ref{sect:spacebound}, it is possible to compute $F(x)$ in space $O(\log\log\delta^{-1})$.
\end{proof}
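The first two properties of the lemma are easy to confirm numerically for the constructed logistic function $F(x) = 1/(1+e^{-Cx})$ with $C = \lceil \alpha^{-1}\log\frac{1-\beta}{\beta}\rceil$. The sketch below (an illustration with demo values of $\alpha$ and $\beta$) uses the standard numerically stable evaluation of the logistic function.

```python
# Numerical check (illustration only) of the step-function properties of
# F(x) = 1/(1 + exp(-C x)) with C = ceil(alpha^{-1} log((1-beta)/beta)).
from math import ceil, exp, log

alpha, beta = 0.1, 1e-3
C = ceil(log((1 - beta) / beta) / alpha)

def F(x: float) -> float:
    # Numerically stable logistic: avoid overflow of exp for large |C x|.
    if C * x >= 0:
        return 1.0 / (1.0 + exp(-C * x))
    z = exp(C * x)
    return z / (1.0 + z)

assert abs(F(alpha) - 1.0) < beta                      # within beta of 1 above alpha
assert abs(F(-alpha)) < beta                           # within beta of 0 below -alpha
assert all(abs(F(x) - 1.0) < beta for x in (0.11, 0.5, 2.0))
assert all(abs(F(x)) < beta for x in (-0.11, -0.5, -2.0))
```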
We now proceed to prove Theorem \ref{lbnd}.
\newtheorem*{thm:lbnd}{\bf Theorem \ref{lbnd}}
\begin{thm:lbnd}{\em
Any algorithm that can compute the invariant measure $\mu$ to within precision $\delta$ of a dynamical system with Gaussian noise kernel $p_{f(x)}^{\epsilon}(\cdot)$ and analytic transition function $f(x)$ (that uniformly satisfies $|\partial^{k}f(x)| \leq k!\eta^k$ for some $\eta = {\mathrm{poly}}(\epsilon^{-1})$) requires space at least $\Omega\left(\log\frac{1}{\epsilon} + \log\log\frac{1}{\delta}\right)$.}
\end{thm:lbnd}
\begin{proof}
We will use the function $F(x)$ defined in Lemma \ref{step} to approximate the function $f(x)$ defined in the proof of Lemma \ref{lbndpre} with an analytic function. We will then show that the dynamical system corresponding to this new $f$ still has the property that it has significant measure on the interval $[1/2, 1]$ if and only if the Turing machine $T$ accepts $s$.
As before, let $S = \Theta(2^{M})$ be the number of states of the Turing machine $T$, and let $N = 2S^2$. Partition the interval $[0,1]$ into the $N$ intervals $X_{k} = [\frac{k}{N}, \frac{k+1}{N}]$ for $0 \leq k < N$, and let $c_k = \frac{2k+1}{2N}$ be the center of interval $X_k$. Let $\suc(k)$ be defined as in the proof of Lemma \ref{lbndpre}. Then, in Lemma \ref{step}, set $\alpha = \beta = S^{-100}$, and consider the function
\begin{equation}
f(x) = c_{\suc(0)} + \sum_{i=1}^{N-1} \left(c_{\suc(i)} - c_{\suc(i-1)}\right)F\left(x-\frac{i}{N}\right)
\end{equation}
Note that by Lemma \ref{step}, this function $f$ satisfies the following condition: if $|x - c_k| \leq \frac{1}{2N} - \alpha$, then $|f(x) - c_{\suc(k)}| \leq N\beta = O(S^{-98})$. We will next claim that if we set $\epsilon = S^{-10}$, then we simultaneously have that
\begin{equation}\label{eq:pbdtogd}
\max_{x} p_{\epsilon}(x) \leq \frac{S^{10}}{\sqrt{2\pi}}
\end{equation}
\noindent
and that
\begin{equation}\label{eq:pgdtobd}
\int_{-(\frac{1}{2N} - \alpha - N\beta)}^{\frac{1}{2N} - \alpha - N\beta}p_{\epsilon}(x) dx \geq 1-16 S^{-16}
\end{equation}
To show the first of these inequalities, note simply that $p_{\epsilon}(x) \leq \frac{1}{\epsilon\sqrt{2\pi}}$; inequality \ref{eq:pbdtogd} then follows from substituting $\epsilon = S^{-10}$. To show the second inequality, note first that $\frac{1}{2N} - \alpha - N\beta \geq \frac{1}{4N}$. Hence the integral in inequality \ref{eq:pgdtobd} is at least the probability that the noise is within $\frac{1}{4N\epsilon} = \frac{S^{8}}{8}$ standard deviations of its mean. Since the tail of a Gaussian decays exponentially, this probability is far larger than $1 - 16S^{-16}$, from which the second inequality follows (even Chebyshev's inequality alone already gives $1-64S^{-16}$).
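Both bounds can be checked numerically using the Gaussian error function. The sketch below is an illustration only: $S = 4$ is far too small for the reduction itself, but is enough to exercise the density bound and the concentration bound with the stated parameter choices $\alpha = \beta = S^{-100}$ and $\epsilon = S^{-10}$.

```python
# Numerical check (illustration only, toy value S = 4) of the two Gaussian
# inequalities: the density bound and the concentration bound.
from math import erf, sqrt, pi

S = 4
N = 2 * S**2
alpha = beta = float(S) ** -100
eps = float(S) ** -10

# Density bound: max_x p_eps(x) = 1/(eps*sqrt(2*pi)) <= S^10/sqrt(2*pi).
assert 1 / (eps * sqrt(2 * pi)) <= S**10 / sqrt(2 * pi) + 1e-9

# Concentration bound: Gaussian mass of the shrunken interval.
half_width = 1 / (2 * N) - alpha - N * beta
mass = erf(half_width / (eps * sqrt(2)))     # P(|noise| <= half_width)
assert mass >= 1 - 16 * float(S) ** -16
```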
We can now proceed to analyze the invariant measure $\mu$ of this dynamical system. For each $k$, let $Y_{k} = \left[c_k - \frac{1}{2N} + \alpha, c_k + \frac{1}{2N} - \alpha\right]$, and let $Y = \cup_{k=0}^{N-1}Y_{k}$. We will first show that $\mu$ has measure at least $1-32S^{-16}$ on $Y$.
Let $x_1, x_2, \dots$ be a sequence of iterates of this dynamical system. Call a time $t$ \textit{bad} if $x_{t} \in Y_{k}$ but $x_{t+1} \not\in Y_{\suc(k)}$. By inequality \ref{eq:pgdtobd}, the probability that a time $t$ is bad (given that $x_{t} \in Y_k$ for some $k$) is at most $16S^{-16}$. It follows that $\mathrm{Pr}[x_{t+1} \not\in Y| x_{t}\in Y] \leq 16S^{-16}$. On the other hand, note that if $x_{t} \not\in Y$, then by inequality \ref{eq:pbdtogd}, the probability $x_{t+1}$ is in $Y$ is at least
\begin{eqnarray*}
1 - \left(\max_{x} p_{\epsilon}(x)\right) |X \setminus Y| &\geq & 1 - \frac{S^{10}}{\sqrt{2\pi}}(N\alpha) \\
& \geq & 1 - \sqrt{\frac{2}{\pi}}S^{-88} \\
& \geq & \frac{1}{2}
\end{eqnarray*}
\noindent
It follows that the weight of $\mu$ over $Y$ must be at least $0.5/(0.5+16S^{-16}) \geq 1-32S^{-16}$, as desired.
Next, as before, let $\mathcal{S} = \{sS, \suc(sS), \suc(\suc(sS)), \dots \}$ be the set of iterates of the initial state $s$ of our Turing machine. If $T$ accepts starting on $s$, then $\{S^2, \dots, 2S^2-1\}$ is a subset of $\mathcal{S}$; otherwise, if it rejects or fails to halt, then $\{S^2, \dots, 2S^2-1\}$ is not a subset of $\mathcal{S}$. Let $Y_{\mathcal{S}} = \cup_{k\in \mathcal{S}} Y_k$. We will next show that the weight of $\mu$ over $Y_{\mathcal{S}}$ is at least $1 - 64S^{-14}$.
To prove this, recall that if we start at some $x \in Y$, after $N$ noise-free steps, we are guaranteed to be in $Y_{\mathcal{S}}$. Since the weight of $\mu$ over $Y$ is at least $1-32S^{-16}$ and since the probability a string of $N$ steps are all good is at least $(1-16S^{-16})^{N} \geq 1 - 32S^{-14}$, the weight of $\mu$ over $Y_{\mathcal{S}}$ is at least $(1-32S^{-16})(1-32S^{-14}) \geq 1-64S^{-14}$.
Finally, assume that $T$ accepts on $s$, and let $Y_{path} = \cup_{k=S^2}^{2S^2-1}Y_{k} \subseteq [1/2, 1]$; note that $Y_{path}$ is a subset of $Y_{\mathcal{S}}$. We claim that the weight under the measure $\mu$ of $Y_{path}$ is at least $\frac{1}{2}(1-32S^{-14})$ of the weight of $Y_{\mathcal{S}}$. To see this, call the sequence $x_{t}, x_{t+1}, \dots, x_{t+|\mathcal{S}|}$ \textit{good} if no time $t+i$ is bad for any $0\leq i < |\mathcal{S}|$. Note that this occurs with probability at least $(1-16S^{-16})^{N} \geq 1-32S^{-14}$. But in any good sequence, each element of $\mathcal{S}$ appears exactly once; it follows that, asymptotically, the probability that $x_{t}$ belongs to $Y_{path}$ given that $x_{t}$ belongs to $Y_{\mathcal{S}}$ is at least
\begin{equation*}
(1-32S^{-14})\frac{S^2}{|\mathcal{S}|} \geq (1-32S^{-14})\frac{S^2}{2S^2} = \frac{1}{2}(1-32S^{-14})
\end{equation*}
Combining these two results, it follows that the weight of the invariant measure over $Y_{path}$ (and hence $[1/2, 1]$) is at least
\begin{equation*}
\frac{1}{2}(1-32S^{-14})(1-64S^{-14}) > \frac{1}{3}
\end{equation*}
\noindent
On the other hand, if $T$ does not accept on $s$, then $[1/2, 1] \cap Y_{\mathcal{S}} = \emptyset$, and therefore the weight of $\mu$ over $[1/2, 1]$ is at most $64S^{-14} \ll 1/3$, as desired.
\end{proof}
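The key property of the analytic transition function built from shifted copies of $F$ (that near the center of each interval $X_k$ it is within $N\beta$ of the target value $c_{\suc(k)}$) can also be checked directly. The sketch below is an illustration only, with small demo values of $N$, $\alpha$, $\beta$ and an arbitrary successor map; the sum over the shifted step functions telescopes to $c_{\suc(k)}$ on each interval.

```python
# Sanity check (illustration only) of the analytic transition function:
# f(x) = c_{suc(0)} + sum_i (c_{suc(i)} - c_{suc(i-1)}) F(x - i/N)
# should be within N*beta of c_{suc(k)} at the center of interval X_k.
from math import ceil, exp, log

N = 8
alpha = beta = 1e-4
C = ceil(log((1 - beta) / beta) / alpha)
centers = [(2 * k + 1) / (2 * N) for k in range(N)]

def suc(k: int) -> int:
    return (k + 3) % N               # an arbitrary successor map for the demo

def F(x: float) -> float:
    # Numerically stable logistic step function from the lemma.
    if C * x >= 0:
        return 1.0 / (1.0 + exp(-C * x))
    z = exp(C * x)
    return z / (1.0 + z)

def f(x: float) -> float:
    return centers[suc(0)] + sum(
        (centers[suc(i)] - centers[suc(i - 1)]) * F(x - i / N)
        for i in range(1, N))

for k in range(N):
    assert abs(f(centers[k]) - centers[suc(k)]) <= N * beta
```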
\bibliographystyle{alpha}
Hit the trail
Kookaburra Camping & Caravan Park
Set on 600 acres of bushland, with walking and mountain-bike tracks leading to incredible views, massive rocky outcrops, and abundant wildlife, Kookaburra Camping and Caravan Park is a bushwalker's paradise and a birdwatcher's heaven. Stay in your own tent or caravan in the Campground Experience the luxury of our Eco Cottage Hit the trail to…
Spread out in our spacious campground, set within 600 acres of bushland! Fire pits with wood supplied 5-star amenities (including disabled) Shared shed and camp kitchen with barbecue (wood supplied) Pet-friendly, dogs allowed on-leash COST: Nightly charge of $10/adult (16 years+) $5/child (5 years+). Under 5 years free.BOOKINGS: All booking options for Kookaburra Camping & Caravan Park can be…
Kookaburra Eco Cottage is brand new and ready to be enjoyed. Complete with a standalone solar system, this latest addition to Kookaburra Park can accommodate families or couples for a special weekend retreat. Two bedrooms One queen bed, two single Sofa-bed in lounge room Slow-burning fireplace Self-catering kitchen and dining COST: From $150 for 2 x adults (extra…
With 600 acres of bushland to explore, our property is the place to climb a mountain, go birdwatching, encounter wildlife or experience the thrill of mountain biking. 5.4km (novice) and 9km (more arduous) mountain-biking trails Walking trails Dam and creek for swimming Granite outcrops and bushland views Abundant wildlife including birds
Here at Kookaburra Park we have a variety of camping and accommodation options, from tent sites, caravan and motorhome sites within our campground, to a luxury eco cottage. Bookings can be made in a variety of ways to suit you… Kookaburra Camping and Caravan ParkCOST: Nightly charge of $10/adult (16 years+) $5/child (5 years+). Under…
Located just minutes off the New England Highway on Castlerag Road Kookaburra Camping and Caravan Park is approximately halfway between the northern NSW New England towns of Glen Innes and Tenterfield. Look for the signs about 12 kilometres north of Deepwater. Spacious campground and eco cottage 970 metres above sea level Brisk winters and warm…
Duncan Macdonald is ready to welcome you to Kookaburra Park, but he's not the only one! Complete with four-legged and feathered residents, this holiday destination has elements of a classic farm-stay experience, and plenty of wildlife and birdlife on the property's 600 acres. Get to know the sheep, spot a kookaburra in the treetops, or…
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 4,898
|
\subsection{Shared components}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation}\\ \hline
$x$&Input $(32 \times 32 \times 1)$ & \\
1&Conv2D($4\times4$, 2, 32) & LReLU(0.1)\\
2&Batch Normalization & \\
3&
Conv2D($4\times4$, 1, 64)
& LReLU(0.1) \\
4&Batch Normalization & \\
5&
Conv2D($4\times4$, 1, 128) & LReLU(0.1) \\
6&Batch Normalization & \\
$\mu_y/\mu_x$&FC(8) from layer 6 & \\
$\sigma_y/\sigma_x$&FC(8) from layer 6 & \\
\hline
\end{tabular}
\end{center}
\caption{The encoders, $q_{\phi{_y}}(z_y|x)$ and $q_{\phi{_x}}(z_x|x)$.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation}\\ \hline
$z$&Input (16)& \\
1&FC(32768) & LReLU(0.1) \\
2&Batch Normalization & \\
3&TConv2D($4\times4$, 1, 64) & LReLU(0.1) \\
4&Batch Normalization & \\
5&TConv2D($4\times4$, 1, 32) & LReLU(0.1) \\
6&Batch Normalization & \\
$\widetilde{x}$&TConv2D($4\times4$, 2, 1) & Sigmoid \\
\hline
\end{tabular}
\end{center}
\caption{The decoder, $p_\theta(x|z_y,z_x)$.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation}\\ \hline
$z_y$&Input (8)& \\
$\widetilde{y}$&FC(10) & Softmax \\
\hline
\end{tabular}
\end{center}
\caption{The label classifier, $q_{\psi{_y}}(y|z_y)$.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l} \hline
& \textbf{Layer} & \textbf{Activation}\\ \hline
$z_x$&Input (8)& \\
$1$&FC(50) & LReLU(0.1) \\
$2$&Batch Normalization & \\
$\widetilde{y}$&FC(10) & Softmax \\
\hline
\end{tabular}
\end{center}
\caption{The adverse label classifier, $q_{\psi{_x}}(y|z_x)$.}
\end{table}
\subsection{LVAE}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation} \\ \hline
$z_{y_i}$&Input (1)& \\
$\widetilde{d}_i$&FC(2) & Softmax \\
\hline
\end{tabular}
\end{center}
\caption{The dimension-label classifiers.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation} \\ \hline
$z_{y_{ic}}$&Input (7)&\\
$1$&FC(50) & LReLU(0.1)\\
$2$&Batch Normalization & \\
$\widetilde{d}_i$&FC(2) & Softmax \\
\hline
\end{tabular}
\end{center}
\caption{The complementary dimension-label classifiers.}
\end{table}
\subsection{VAE-CE}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation} \\ \hline
$x$&Input $(32 \times 32 \times 1)$ & \\
1&
Conv2D($4\times4$, 2, 32)
& LReLU(0.1) \\
2&Batch Normalization & \\
3&Dropout(0.3) & \\
4&
Conv2D($4\times4$, 1, 64)
& LReLU(0.1) \\
5&Batch Normalization & \\
6&Dropout(0.3) & \\
7&
Conv2D($4\times4$, 1, 128)
& LReLU(0.1) \\
8&Batch Normalization & \\
9&Dropout(0.3) & \\
$real$&FC(2) & Softmax \\
\hline
\end{tabular}
\end{center}
\caption{The realism discriminator, $D$.}
\end{table}
\subsection{CD}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation} \\ \hline
$x$&Input $(32 \times 32 \times 1)$ & \\
1& Conv2D($4\times4$, 2, 32) & LReLU(0.1) \\
2&Batch Normalization & \\
3&
Conv2D($4\times4$, 1, 128)
& LReLU(0.1) \\
4&Batch Normalization & \\
$\mu_y/\mu_x$&FC(16) from layer 4 & \\
$\sigma_y/\sigma_x$&FC(16) from layer 4 & \\
\hline
\end{tabular}
\end{center}
\caption{$CD$'s encoders.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation}\\ \hline
$z$&Input (32)&\\
1&FC(32768) & LReLU(0.1) \\
2&Batch Normalization & \\
3&
TConv2D($4\times4$, 1, 32)
& LReLU(0.1) \\
4&Batch Normalization &\\
$\widetilde{x}$&
TConv2D($4\times4$, 2, 1)
& Sigmoid\\
\hline
\end{tabular}
\end{center}
\caption{$CD$'s decoder.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation} \\ \hline
$z_y$ or $z_x$&Input (16)&\\
$1$&FC(50) & LReLU(0.1)\\
$2$&Batch Normalization & \\
$\widetilde{y}$&FC(10) & Softmax \\
\hline
\end{tabular}
\end{center}
\caption{$CD$'s label-disentanglement classifiers.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
& \textbf{Layer} & \textbf{Activation}\\ \hline
$|z_{y_a} - z_{y_b}|$&Input (16)& \\
$1$&FC(50) & LReLU(0.1) \\
$2$&Batch Normalization & \\
$3$&Dropout(0.3) & \\
$change$&FC(2) & Softmax \\
\hline
\end{tabular}
\end{center}
\caption{The latent-change discriminator, $DISC$.}
\end{table}
\subsection{Learning a data representation for explanation}\label{ss:m:base} %
To represent the data in a higher-level space, we use a VAE\cite{kingma2013auto,rezende2014stochastic}. A VAE aims to approximate a dataset's distribution under the assumption that its samples $x$ are generated according to some latent variable $z$. In other words, the aim is to model $p(x, z) = p(x|z)p(z)$. This relation is approximated using an encoder $q_\phi(z|x)$ and decoder $p_\theta(x|z)$ distribution, parameterized by deep neural networks, and optimized using a lower bound on the true likelihood of the data, the \textit{ELBO}. The reparametrization trick\cite{kingma2013auto} is used to (back)propagate through %
the latent variables.
Using a VAE we can both infer latent variables $z$ given data $x$, and generate modified samples $\widetilde{x}$ given some modification in $z$. It provides us with the tools to work in concept domain $C$, for both classification and explanation purposes.
However, not all information in $x$, and consequently in $z$, is necessarily class-related. To overcome this issue we build upon work aimed at disentangling class-relevant from irrelevant information in a VAE's latent representation.
The VAE's \textit{ELBO} objective is extended with classification terms,
in line with works such as \cite{cai2019learning,ding2020guided,ilse2020diva,zheng2019disentangling}. Latent variable $z$ is split into subspaces $z_y$ and $z_x$, where the former aims to contain class-relevant information and the latter should contain the remaining information. We use a separate encoder for inferring each latent subspace; the $z_y$ encoder, $q_{{\phi{_y}}}(z_y|x)$, serves as the concept encoder, $f_c$.
We introduce categorical distributions $q_{{\psi{_y}}}(y|z_y)$ and $q_{{\psi{_x}}}(y|z_x)$, parameterized by neural networks and optimized using their log-likelihoods. We refer to these as the latent spaces' classifiers. The former, $q_{{\psi{_y}}}(y|z_y)$, is also used to infer class predictions, serving as $f_y$.
For training, we simultaneously optimize the parameters of both classifiers and both encoders using categorical cross-entropy. However, $z_x$ should contain little information about label $y$. To learn such a label-agnostic subspace we reverse the loss' gradients for $z_x$'s encoder, $q_{{\phi{_x}}}(z_x|x)$, through a Gradient Reversal Layer\cite{ganin2015unsupervised}.
For each loss term, the subscript denotes the parameters it optimizes. The loss terms are as follows:
\begin{alignat}{1}
\mathcal{L}_{\theta,{\phi{_y}},{\phi{_x}},{\psi{_y}}}&(x, y) = \beta_y KL(q_{{\phi{_y}}}(z_y|x)||p_\theta(z))\label{eq:kl0}\\
&+\beta_x KL(q_{{\phi{_x}}}(z_x|x)||p_\theta(z))\label{eq:kl1}\\
&-\mathbb{E}_{q_{{\phi{_y}}}(z_y|x),q_{{\phi{_x}}}(z_x|x)}[\log p_\theta(x|z_y,z_x)]\label{eq:rec}\\
&-\alpha \mathbb{E}_{q_{\phi{_y}}(z_y|x)}[\log(q_{{\psi{_y}}}(y|z_y))]\label{eq:cl0}\\
&+\alpha \mathbb{E}_{q_{\phi{_x}}(z_x|x)}[\log(q_{{\psi{_x}}}(y|z_x))],\label{eq:cl1}\\
&\hspace{-.576cm}\mathcal{L}_{\psi{_x}}(x, y) = -\mathbb{E}_{q_{\phi{_x}}(z_x|x)}[\log(q_{{\psi{_x}}}(y|z_x))],\label{eq:cl1_adv}
\end{alignat}
with hyperparameters $\beta_y$, $\beta_x$ and $\alpha$. We approximate all expectations with single-sample Monte Carlo estimation. Prior distribution $p_\theta(z)$ is set to a standard factorized Gaussian, $\mathcal{N}{(0, I)}$, which allows us to compute (\ref{eq:kl0}) and (\ref{eq:kl1}) analytically\cite{kingma2013auto}. Distribution $p_\theta(x|z_y, z_x)$ is assumed to be a factorized Gaussian with fixed variance, allowing us to approximate (\ref{eq:rec}) by taking the squared error between the input and its reconstruction. (\ref{eq:cl0}), (\ref{eq:cl1}) and (\ref{eq:cl1_adv}) optimize the log-likelihood of the categorical distributions and are computed using categorical cross-entropy. Note that (\ref{eq:cl1}) is a negation of (\ref{eq:cl1_adv}): Both are computed in a single pass. An overview of the model is depicted in Fig. \ref{fig:vaedis}.
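The analytic KL terms (\ref{eq:kl0}) and (\ref{eq:kl1}) reduce to a closed form for a diagonal-Gaussian posterior against the standard-normal prior. A minimal Python sketch of this computation (illustrative only, using plain lists in place of tensors):

```python
import math

def kl_diag_gaussian_vs_standard(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), computed analytically as
    0.5 * sum(mu_i^2 + sigma_i^2 - 1 - log sigma_i^2) per dimension."""
    return 0.5 * sum(m * m + s * s - 1.0 - math.log(s * s)
                     for m, s in zip(mu, sigma))

# The divergence vanishes exactly when the posterior equals the prior.
print(kl_diag_gaussian_vs_standard([0.0] * 8, [1.0] * 8))  # → 0.0
```

The 8-dimensional example mirrors the size of the $z_y$ and $z_x$ subspaces in the shared architecture.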
\begin{figure}[!htb]
\centering
\includegraphics[width=0.35\textwidth]{fig/vaedis_s.pdf}
\caption{The architecture of the disentangled VAE. Datapoint $x$ is encoded by two separate encoders into $z_x$ and $z_y$, which are concatenated to reconstruct $\widetilde{x}$. Disentanglement is encouraged by auxiliary classifiers. We omit the sampling procedure of the latent variables for clarity.}
\label{fig:vaedis}
\end{figure}
\subsection{Pair-based dimension conditioning}\label{ss:m:pair}
To produce explanations that convey differences in class concepts, we must manipulate concepts individually. To exercise such control, we aim to learn a representation where individual $z_y$-dimensions control individual concepts. We introduce a new disentanglement method based on two assumptions: (1) a significant change in a single latent dimension should correspond to changing a single concept and (2) we can train a model to evaluate whether changes fit this criterion. This method acts as additional regularization and is added on top of the previously described objective.
Two auxiliary models are used to aid the regularization procedure: A `Change Discriminator' ($CD$) and a regular `Discriminator' ($D$), both predicting a value in the range [0, 1]. $CD$ is trained beforehand, and infers whether a pair of datapoints exhibits a desirable change. In our implementation, we train $CD$ as a binary classifier with pairs that either indicate a good change (a single concept change) or a bad change (no or multiple concept changes); for details we refer to the supplementary material. %
$D$ is trained to distinguish between generated and real datapoints, as done in a GAN\cite{goodfellow2014generative}.
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth]{fig/cpair_s.pdf}
\caption{Individual dimensions are disentangled in an amortized fashion: Randomly constructed latent spaces differing in a single dimension are optimized to exhibit a desirable change in data space.}
\label{fig:cpair}
\end{figure}
By optimizing latent-dimension changes using $CD$ as a critic, individual dimensions should better represent single concepts. $D$ is used to optimize the quality of the samples to avoid a degenerate solution where non-realistic changes are produced that merely trick $CD$, rather than representing meaningful concept changes (\ie an adversarial attack\cite{szegedy2013intriguing}).
A visualization of the regularization procedure is depicted in Fig.~\ref{fig:cpair}. One step works as follows:
\begin{enumerate}
\item Encode two arbitrary (non-identical) datapoints $x_a$ and $x_b$ to their latent representations in $z_y$-space, giving us $z_{y_a}$ and $z_{y_b}$. For the remaining information only encode the representation of datapoint $x_b$ to $z_{x}$. %
\item Construct two latent variables that share all but one dimension by combining $z_{y_a}$ and $z_{y_b}$ stochastically. We denote these variables as $z_{p_a}$ and $z_{p_b}$. Each individual dimension comes from either $z_{y_a}$ or $z_{y_b}$ (equally likely), and all but one dimension are shared.
\item Map the constructed pair back to data space. That is, synthesize $\widetilde{x}_{p_a}$ and $\widetilde{x}_{p_b}$ by decoding latent representations $(z_{p_a},z_x)$ and $(z_{p_b},z_x)$.
\item Optimize the encoders and the decoder such that $CD$ predicts a high-quality change between $\widetilde{x}_{p_a}$ and $\widetilde{x}_{p_b}$ and $D$ predicts that the samples are real.
\end{enumerate}
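The stochastic pair construction in steps 1--2 can be sketched as follows (a hypothetical helper, not the actual implementation; `make_latent_pair` and its list-based latents are illustrative):

```python
import random

def make_latent_pair(z_ya, z_yb):
    """Combine two z_y encodings into a pair (z_pa, z_pb) that shares
    all but one randomly chosen dimension. Shared dimensions come from
    z_ya or z_yb with equal probability; the remaining dimension keeps
    its value from z_ya in one variable and from z_yb in the other."""
    n = len(z_ya)
    diff = random.randrange(n)  # the single differing dimension
    shared = [random.choice((a, b)) for a, b in zip(z_ya, z_yb)]
    z_pa, z_pb = shared[:], shared[:]
    z_pa[diff], z_pb[diff] = z_ya[diff], z_yb[diff]
    return z_pa, z_pb
```

If the two encodings differ in every dimension, the constructed pair differs in exactly one, which is the property $CD$ is asked to evaluate after decoding.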
The corresponding loss term is as follows:
\begin{alignat}{1}
\hspace{0.1cm}\mathcal{L}_{\theta,{\phi{_y}},{\phi{_x}}}(\widetilde{x}_{p_a}, \widetilde{x}_{p_b}) &= {-}\alpha_r\log(D(\widetilde{x}_{p_a}))\\ &\hspace{0.465cm}{-}\alpha_r\log(D(\widetilde{x}_{p_b}))\\
&\hspace{-1.5cm}+ \alpha_{p} n_y\frac{|z_{p_a}-z_{p_b}|}{|z_{y_a}-z_{y_b}|} \cdot \big({-}\log(CD(\widetilde{x}_{p_a}, \widetilde{x}_{p_b}))\big),
\end{alignat}
with hyperparameters $\alpha_r$ and $\alpha_p$, and $n_y$ denoting the number of dimensions in $z_y$. This term optimizes the VAE such that $CD$ and $D$ predict high-quality changes and realistic datapoints. We scale the loss of $CD$'s prediction according to the difference in the dimension compared to the overall difference, multiplied by the number of dimensions. This extra scalar term ensures that we do not penalize `bad' changes when the differing dimension is insignificant.
Discriminator $D$ is trained in the same manner as a GAN's discriminator, using $\widetilde{x}_{p_a}$ and $\widetilde{x}_{p_b}$ as fake data alongside real data from the training set; it learns to distinguish between them by minimizing the binary cross-entropy between the predicted labels and true/false labels.
\subsection{Explanation generation}\label{ss:m:gen}
To explain a datapoint we focus on two aspects: Identifying a suitable exemplar and producing an explanation that displays the class concepts that differ between the datapoint and this exemplar. The exemplar is chosen from an alternative class, \eg the second most likely class (given $q_{{\psi{_y}}}(y|z_y)$) or user selected. Alternatively, one could select a specific datapoint. An overview of the explanation procedure is provided in Fig.~\ref{fig:mo}. When creating explanations we use mean values, rather than samples, of latent variable $z$. As such, we substitute $z$ for $\mu$ in this subsection.
\textbf{Exemplar identification} rests on two principles: (1) how representative a datapoint is of its class and (2) how similar it is to the datapoint we contrast it with (as more similarity implies fewer concepts to change). To capture the former we only consider datapoints whose class probability is above a given threshold: $q_{{\psi{_y}}}(y_i|\mu_y) > t$. For the latter, we select the datapoint with the minimum squared difference in the class-specific subspace: $\min\limits_{b} \ (\mu_{y_a} - \mu_{y_b})^2$, with $a$ indicating the query datapoint and $b$ the exemplar.%
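The two exemplar-selection principles can be combined in a few lines. A sketch (function name, candidate format, and threshold value are all illustrative assumptions):

```python
def select_exemplar(query_mu_y, candidates, threshold=0.9):
    """Among candidates of the target class, keep only those whose class
    probability exceeds the threshold (representativeness), then return
    the one with minimum squared distance to the query in z_y-space.
    Candidates are (mu_y, class_prob) pairs; returns None if no
    candidate passes the threshold."""
    best, best_dist = None, float("inf")
    for mu_y, prob in candidates:
        if prob <= threshold:
            continue  # not representative enough of its class
        dist = sum((a - b) ** 2 for a, b in zip(query_mu_y, mu_y))
        if dist < best_dist:
            best, best_dist = mu_y, dist
    return best
```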
\textbf{Explanation generation} works by transforming the class-relevant latent embedding from the query (${\mu_{y_a}}$) to the exemplar (${\mu_{y_b}}$) and showcasing the intermediate steps; the class-irrelevant embedding (${\mu_{x_a}}$) is left unchanged. A dimension's value is switched in full in a single step, rather than gradually interpolated, as dimensions represent individual concepts. For each interpolation step, we allow multiple such dimension values to be switched, as there is no guarantee that every dimension difference depicts a concept changing (\ie small differences are likely---but not necessarily---meaningless). We consider all orders of changing (groups of) dimensions; as dimensions can still be entangled, the interpolation path can have a significant effect on the quality of the intermediate states\cite{chen2019homomorphic,yan2020semantics}.
The path we take to interpolate from ${\mu_{y_a}}$ to ${\mu_{y_b}}$ should be of minimum length, in line with the Minimum Description Length (MDL)\cite{grunwald2007minimum} principle. Additionally, it is optimized \wrt two aspects: (1) each step should depict a single concept change and (2) each state should represent the dataset's underlying distribution. These properties are optimized using auxiliary models $CD$ and $D$.%
Not all interpolation paths are explicitly computed, as the quantity of paths changing (groups of) dimensions grows extremely fast\footnote{Equivalent to the number of weak orderings of a set: Given $n$ latent dimensions, the $n^{th}$ Ordered Bell number\cite{mezo2019combinatorics}.}. Rather, we build a graph denoting all paths, where each edge denotes the cost of adding this state to the interpolation: A weighted sum of the probabilities of the change being undesirable ($CD$) and the datapoint being fake ($D$), adjusted by a normalization coefficient. For the change from $\mu_i$ to $\mu_j$ this can be computed as follows:
\begin{alignat}{1}
w_{ij} = {[\alpha \big(1-D(\widetilde{x}_j)\big) + \beta \big(1 - CD(\widetilde{x}_i, \widetilde{x}_j)\big)]} \cdot {k^\gamma},
\end{alignat}
where $\widetilde{x}_i$ and $\widetilde{x}_j$ are the reconstructed datapoints of states $i$ and $j$, $k$ is the number of dimensions changed, and $\alpha$, $\beta$, and $\gamma$ are hyperparameters. The shortest path in this graph represents the interpolation path optimized for our desiderata. An example of an interpolation graph is depicted in Fig.~\ref{fig:graph}.
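The edge weight is a direct translation of the formula above. A minimal sketch, with the default hyperparameter values as placeholders and `d_out`/`cd_out` standing in for the (assumed scalar) outputs of $D$ and $CD$:

```python
def edge_weight(d_out, cd_out, k, alpha=1.0, beta=1.0, gamma=1.0):
    """Cost of appending state j after state i to the interpolation:
    a weighted sum of D's fake-probability (1 - d_out) and CD's
    bad-change probability (1 - cd_out), scaled by k**gamma, where k
    is the number of dimensions changed in this step."""
    return (alpha * (1.0 - d_out) + beta * (1.0 - cd_out)) * k ** gamma

# A step judged perfectly realistic and a perfect single-concept change
# costs nothing regardless of how many dimensions it switches.
print(edge_weight(1.0, 1.0, 3))  # → 0.0
```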
\begin{figure}[!htb]
\centering
\includegraphics[width=0.24\textwidth]{fig/expl_s.pdf}
\caption{The interpolation graph of the transition between two latent variables of size 3 (weights omitted for clarity).}
\label{fig:graph}
\end{figure}
While the shortest path can be found in linear time \wrt the nodes and edges (since the graph is directed and acyclic\cite{cormen2009algo}), the graph itself grows quickly. For $n$ dimensions to change there are $2^n$ nodes and $3^n-2^n$ edges (we refer to the supplementary material for a derivation). As such, this approach is only applicable to problems with a limited number of dimensions.
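The node and edge counts follow from viewing each graph node as the subset of dimensions already switched to the exemplar's values, with each edge switching a non-empty group of the remaining dimensions. A sketch of this counting argument, checked against the closed forms $2^n$ and $3^n - 2^n$:

```python
import math

def interpolation_graph_size(n):
    """Count nodes and edges of the interpolation graph for n differing
    dimensions. There are 2**n subsets; a subset of size s has
    2**(n - s) - 1 non-empty groups of dimensions left to switch."""
    nodes = 2 ** n
    edges = sum(math.comb(n, s) * (2 ** (n - s) - 1) for s in range(n + 1))
    return nodes, edges

print(interpolation_graph_size(3))  # → (8, 19), i.e. 2^3 and 3^3 - 2^3
```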
\subsection{Loss functions}
First, we briefly restate the loss functions. All methods extend DVAE: We sum the DVAE loss and the method-specific loss. Note that $CD$ also extends DVAE.
\textbf{DVAE}'s loss denotes the class-disentangled VAE optimization, as described in \S{3.1} of the main paper:
\begin{alignat}{1}
\mathcal{L}_{\theta,{\phi{_y}},{\phi{_x}},{\psi{_y}}}&(x, y) = \beta_y KL(q_{{\phi{_y}}}(z_y|x)||p_\theta(z))\label{eq:kl0}\\
&+\beta_x KL(q_{{\phi{_x}}}(z_x|x)||p_\theta(z))\label{eq:kl1}\\
&-\mathbb{E}_{q_{{\phi{_y}}}(z_y|x),q_{{\phi{_x}}}(z_x|x)}[\log p_\theta(x|z_y,z_x)]\label{eq:rec}\\
&-\alpha \mathbb{E}_{q_{\phi{_y}}(z_y|x)}[\log(q_{{\psi{_y}}}(y|z_y))]\label{eq:cl0}\\
&+\alpha \mathbb{E}_{q_{\phi{_x}}(z_x|x)}[\log(q_{{\psi{_x}}}(y|z_x))],\label{eq:cl1}\\
&\hspace{-.576cm}\mathcal{L}_{\psi{_x}}(x, y) = -\mathbb{E}_{q_{\phi{_x}}(z_x|x)}[\log(q_{{\psi{_x}}}(y|z_x))],\label{eq:cl1_adv}
\end{alignat}
\textbf{LVAE}'s loss optimizes individual dimensions using auxiliary classifiers. We denote dimension labels as $d_i$, individual dimensions as $z_{y_i}$, and the complementary dimensions (all dimensions but $i$) as $z_{y_{ci}}$. We use a classifier for each label $i$, denoted as categorical distribution $q_{\psi_{di}}(d_i|z_{y_i})$, and an adversarial classifier for the complementing dimensions $ci$, denoted as categorical distribution $q_{\psi_{dci}}(d_i|z_{y_{ci}})$. $n_c$ denotes the number of concepts. The extra loss terms can be denoted as follows:
\begin{alignat}{1}
\mathcal{L}_{\theta,\phi{_y},\psi_{di}}(x, y, d_i)&= -\alpha_d {\log(q_{\psi_{di}}(d_i|z_{y_i}))} \\
&+ \alpha_d \log(q_{\psi_{dci}}(d_i|z_{y_{ci}})), \\
\mathcal{L}_{\psi_{dci}} (x, y, d_i)&= -\log(q_{\psi_{dci}}(d_i|z_{y_{ci}})),\\
\mathcal{L}_{LVAE}(x, y, d) &= \sum_{i}^{n_c} \big(\mathcal{L}_{\theta,\phi{_y},\psi_{di}}(x, y, d_i) \\
&\hspace{.809cm} + \mathcal{L}_{\psi_{dci}} (x, y, d_i)\big).
\end{alignat}
\textbf{VAE-CE}'s loss considers the pair-based dimension conditioning procedure as described in \S3.2 of the main paper. We create samples $\widetilde{x}_{p_a}$ and $\widetilde{x}_{p_b}$ (from datapoints that are used in the DVAE objective) and optimize the main VAE using $CD$ and $D$. Additionally, we train $D$ to distinguish between real/fake datapoints as a binary classification task, using datapoint-label pairs ($x$, $y_{d}$). We either use the synthesized datapoints and a 0-label, or training datapoints and a 1-label. The loss can be denoted as follows:
\begin{alignat}{1}
\hspace{0cm}\mathcal{L}_{\theta,{\phi{_y}},{\phi{_x}}}(\widetilde{x}_{p_a}, \widetilde{x}_{p_b}) &= {-}\alpha_r\log(D(\widetilde{x}_{p_a}))\\ &\hspace{0.465cm}{-}\alpha_r\log(D(\widetilde{x}_{p_b}))\\
&\hspace{-1.6cm}+ \alpha_{p} n_y\frac{|z_{p_a}-z_{p_b}|}{|z_{y_a}-z_{y_b}|} \cdot \big({-}\log(CD(\widetilde{x}_{p_a}, \widetilde{x}_{p_b}))\big),\\
\mathcal{L}_{D}(x, y_d) &= -\log(D({y}_{d}|x)).
\end{alignat}
\textbf{GVAE} and \textbf{ADA-GVAE} are optimized using the $ELBO$, \ie equations (\ref{eq:kl0}), (\ref{eq:kl1}), and (\ref{eq:rec}). We use specific pairings of datapoints and average out dimensions. Since these datapoints might not have class labels, we compute their loss \wrt the $ELBO$ in a separate pass.
\textbf{CD}'s loss optimizes the change-discrimination objective. We denote the change-quality label as $y_{cd}$, and infer a change-quality prediction of a pair $(x_a, x_b)$. The resulting loss can be denoted as follows:
\begin{alignat}{1}
&\hspace{-.16cm}{CD}(x_a, x_b) = DISC(|q_{{\phi_y}}(z_{y_a}|x_a)-q_{{\phi_y}}(z_{y_b}|x_b)|), \\
&\hspace{-.16cm}\mathcal{L}_{\phi{_y},DISC}(x_a, x_b, y_{cd}) = -\alpha_c\log(CD(y_{cd}|x_a, x_b)).
\end{alignat}
Finally, we note that for models using multiple training passes with uneven numbers of datapoints (GVAE, ADA-GVAE, and CD), we scale the respective losses by this ratio in order to balance the different passes.
\subsection{Hyperparameter optimization}
The settings shared between all training procedures are depicted in Table~\ref{ap:hpsh}. We tune the hyperparameters as follows. First, we identify a non-degenerate solution by hand, defined as a solution where none of the objectives are ignored (\ie no collapsed latent spaces, uninformative classifiers, or all-zero outputs). Next, we define a range of parameters around this solution, and explore all configurations in this range. The hyperparameters are provided in Table~\ref{ap:hp}, whereas their explored values are denoted in Table~\ref{ap:hpv}.
We use the $eac$ for model selection, using 90 ($a, b$) pairs (note that these pairs are distinct from the pairs used for the final results). We generate interpolations using all methods ($sm$, $dim$, and $graph$ for VAE-CE) and take the minimum $eac$ out of these. As computing the $eac$ requires access to the ground-truth generating process, we search for hyperparameters using the synthetic data. The identified parameters are also used for the MNIST models.
Each model is trained for $\approx$ \num{2000000} steps: 20 epochs on the synthetic dataset and 33 epochs on MNIST. We found that training models with fewer steps generally resulted in worse $eac$-values, whereas training for significantly longer did not improve (and sometimes regressed) the $eac$-score.
For each model type we consider the Cartesian product of the hyperparameter values denoted in Table~\ref{ap:hpv}. We train each configuration four times, giving us a total of 176 models. For each configuration we compute the average $eac$ and select the hyperparameters corresponding to the lowest cost. These identified hyperparameters are depicted in Table~\ref{ap:hpf}. An overview of all model runs is provided in Fig.~\ref{fig:allplots}.
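As a sanity check on the search budget, the total of 176 runs follows directly from the explored values in Table~\ref{ap:hpv}. A throwaway Python sketch (per-model value lists copied from the table):

```python
from itertools import product

# Explored values per model; each tuple entry is one hyperparameter's list.
grids = {
    "DVAE":     ([2, 4], [5, 10, 15]),
    "LVAE":     ([1, 2], [5, 7], [20, 25, 30]),
    "GVAE":     ([1, 2, 4], [2, 4, 6]),
    "ADA-GVAE": ([1, 2, 4], [1, 2, 4]),
    "VAE-CE":   ([2, 4], [5, 7], [3, 5]),
}

# Cartesian product per model gives the configurations; each is trained
# four times.
configs = sum(len(list(product(*values))) for values in grids.values())
print(configs, configs * 4)  # → 44 176
```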
$CD$ is trained beforehand. For simplicity, we used a single configuration that gave a satisfactory test-set change-pair accuracy ($96.4\%$ on the synthetic data and $87.1\%$ on MNIST-pairs). This configuration is depicted in Table~\ref{ap:hpcd}. $CD$ was trained for \num{5000000} steps (for both datasets).
\begin{table}[H]
\begin{center}
\begin{tabular}{l l}\hline
\textbf{Setting}& \textbf{Value}\\ \hline
Batch size & 128 \\
Optimizer & Adam\cite{kingma2014adam} \\
Adam: learning rate & 0.001\\
Adam: $\beta_1$ & 0.9\\
Adam: $\beta_2$ & 0.999\\
Adam: $\epsilon$ & 0.0001\\
\hline
\end{tabular}
\end{center}
\caption{Shared settings.}\label{ap:hpsh}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
\textbf{}& \textbf{Meaning} & \textbf{Model} \\ \hline
$\beta_y$ & $z_y$ KL divergence weight & $all$ \\
$\beta_x$ & $z_x$ KL divergence weight & $all$ \\
$\alpha$ & class-label classification weight & $all$ \\
$\alpha_d$ & per-dimension classification weight & LVAE \\
$\alpha_r$ & $D$-prediction weight & VAE-CE \\
$\alpha_p$ & $CD$-prediction weight & VAE-CE \\
\hline
\end{tabular}
\end{center}
\caption{All hyperparameters.}\label{ap:hp}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
\textbf{Model} & \textbf{} & \textbf{Values} \\ \hline
DVAE & $\beta_y$ & \{2, 4\} \\
& $\alpha$ & \{5, 10, 15\} \\ \hline
LVAE & $\beta_y$ & \{1, 2\} \\
& $\alpha$ & \{5, 7\} \\
& $\alpha_d$ & \{20, 25, 30\} \\ \hline
GVAE & $\beta_y$ & \{1, 2, 4\} \\
& $\alpha$ & \{2, 4, 6\} \\ \hline
ADA-GVAE & $\beta_y$ & \{1, 2, 4\} \\
& $\alpha$ & \{1, 2, 4\} \\ \hline
VAE-CE & $\beta_y$ & \{2, 4\} \\
& $\alpha$ & \{5, 7\} \\
& $\alpha_p$ & \{3, 5\} \\
\hline
\end{tabular}
\end{center}
\caption{All explored hyperparameter values. Parameters not mentioned are set to 1. The Cartesian product of the per-parameter values denotes all configurations we train.}\label{ap:hpv}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
\textbf{Model}& \textbf{} & \textbf{Value} \\ \hline
DVAE & $\beta_y$ & 2 \\
& $\alpha$ & 10 \\ \hline
LVAE & $\beta_y$ & 1 \\
& $\alpha$ & 7 \\
& $\alpha_d$ & 20 \\ \hline
GVAE & $\beta_y$ & 1 \\
& $\alpha$ & 6 \\ \hline
ADA-GVAE & $\beta_y$ & 1 \\
& $\alpha$ & 4 \\ \hline
VAE-CE & $\beta_y$ & 2 \\
& $\alpha$ & 7 \\
& $\alpha_p$ & 3 \\
\hline
\end{tabular}
\end{center}
\caption{The selected hyperparameter values.}\label{ap:hpf}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{l l l}\hline
\textbf{}& \textbf{Meaning} &\textbf{Value} \\ \hline
$\beta_y$ & $z_y$ KL divergence weight & 1\\
$\beta_x$ & $z_x$ KL divergence weight & 0.5\\
$\alpha$ & class-label classification weight & 16 \\
$\alpha_c$ & $DISC$ classification weight & 50\\
\hline
\end{tabular}
\end{center}
\caption{$CD$ hyperparameters.}\label{ap:hpcd}
\end{table}
\begin{figure*}
\centering
\subfloat[DVAE\label{fig:advae}]{\includegraphics[height=.32\textwidth]{fig/dvae.pdf}}
\subfloat[LVAE\label{fig:alvae}]{\includegraphics[height=.32\textwidth]{fig/lvae.pdf}}
\\
\centering
\subfloat[GVAE\label{fig:agvae}]{\includegraphics[height=.32\textwidth]{fig/gvae.pdf}}
\subfloat[ADA-GVAE\label{fig:adavae}]{\includegraphics[height=.32\textwidth]{fig/adagvae.pdf}}
\\
\centering
\subfloat[VAE-CE\label{fig:avaece}]{\includegraphics[height=.32\textwidth]{fig/vaece.pdf}}
\caption{The minimum $eac$ of each model-run (on validation data). The x-axis depicts the different configurations, whereas the y-axis depicts the $eac$ (lower is better).}%
\vspace{3cm}
\label{fig:allplots}%
\end{figure*}
\subsection{Datasets}\label{ss:s:data}
\begin{figure}[!b]
\centering
\subfloat[The underlying concepts determining a datapoint's class.]{{\label{fig:synb}
\adjincludegraphics[trim={0 {0.05\height} 0 {0.04\height}},clip,width=.85\linewidth]{fig/syn_concept.pdf}
}} \\
\vspace{-.1cm} %
\subfloat[The ten classes in the dataset. The value above depicts the class index, whereas the value below depicts the indices of the lines that determine it.]{\label{fig:sync}
\raisebox{.1cm}[2cm][0cm]{
{\adjincludegraphics[trim={0 {0.022\height} 0 {0.025\height}},clip,width=1\linewidth]{fig/syn_classes.pdf} }}}
\caption{An overview of the synthetic data's structure.}%
\label{fig:synfactor}%
\end{figure}
\textbf{Synthetic data} with a known generating process and set of concepts is used to validate our method in a controlled setting. The class determines the datapoints' concepts, which together with added noise determine the datapoint. Concepts are defined as the occurrence of lines, where each line is defined by its orientation, length, and relative position. We use eight variables determining whether a specific line occurs in the data. %
The dataset consists of ten classes, with each class consisting of some combination(s) of lines. %
These lines and classes are depicted in Fig.~\ref{fig:synfactor}.
Datapoints are generated by taking these `base shapes' and adding non-trivial noise. The noise process seeks to mimic that of handwritten shapes (such as MNIST digits) and consists of shape distortion and line-width variation. We refer to the supplementary material for a detailed description of this generation procedure. Examples of synthetic datapoints are depicted in Fig.~\ref{fig:synd}.
The training and test sets consist of \num{10000} and \num{1000} $32\times32$-pixel images for each class, respectively. Model selection is done according to an explanation-quality metric that samples directly from the generative process (see \S\ref{ss:s:eval}); no validation set is used for tuning the model. Change pairs (for $CD$) are created by taking a class configuration and hiding some line(s) in both images in the pair, such that only 1 (positive) or 0/2+ lines differ (negative). Examples of such pairs are depicted in Fig.~\ref{fig:syncp}. Supervision used by other methods can be created using knowledge of the generative process. For each type of supervision we generate the same number of samples in total, \num{100000}.
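The change-pair construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the binary line-occurrence vectors, and hiding lines in only one image of the pair are simplifying assumptions.

```python
import random

def make_change_pair(config, positive, rng=random.Random(0)):
    """Create a pair of line-occurrence vectors from one class configuration.

    config: list of 8 ints (1 = line present), a class's base shape.
    positive: if True, the pair differs in exactly 1 line; otherwise in 0 or 2.
    """
    a = list(config)
    b = list(config)
    present = [i for i, v in enumerate(a) if v == 1]
    n_diff = 1 if positive else rng.choice([0, 2])
    hidden = rng.sample(present, min(n_diff, len(present)))
    for i in hidden:  # hide the selected line(s) in the second image only
        b[i] = 0
    return a, b

# Example: a class defined by lines 0, 3 and 5; a positive pair differs in 1 line.
a, b = make_change_pair([1, 0, 0, 1, 0, 1, 0, 0], positive=True)
assert sum(x != y for x, y in zip(a, b)) == 1
```

In the actual pipeline these vectors would be rendered to images through the noisy generation procedure; the sketch only shows the pairing logic.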
\begin{figure}
\centering
\subfloat[10 synthetic datapoints.]{\label{fig:synd}
\raisebox{.1cm}{
\adjincludegraphics[width=.4\linewidth]{fig/ex_syn.pdf}
}
}
\hspace{.05\linewidth}
\subfloat[Synthetic change pairs.]{\label{fig:syncp}
\raisebox{.57cm}[0cm][0cm]{
\makebox[.4\linewidth][c]{
\begin{tabular}{l|l}
\adjincludegraphics[width=.15\linewidth,trim={0 {.33\height} 0 0},clip]{fig/ex_pair_p.pdf} & \adjincludegraphics[width=.15\linewidth,trim={0 0 0 {.33\height}},clip]{fig/ex_pair_n.pdf}
\end{tabular}
}
}
}
\\
\vspace{-.15cm} %
\subfloat[10 samples from MNIST.]{\label{fig:mnistd}
\raisebox{.1cm}{
\adjincludegraphics[width=.4\linewidth]{fig/ex_mnist.pdf}
}
}
\hspace{.05\linewidth}
\subfloat[MNIST change pairs.]{\label{fig:mnistcp}
\raisebox{.57cm}[0cm][0cm]{
\makebox[.4\linewidth][c]{
\begin{tabular}{l|l}
\adjincludegraphics[width=.15\linewidth,trim={0 {.33\height} 0 0},clip]{fig/ex_pair_mnist_p.pdf} & \adjincludegraphics[width=.15\linewidth,trim={0 0 0 {.33\height}},clip]{fig/ex_pair_mnist_n.pdf}
\end{tabular}
}
}
}
\caption{Synthetic data and MNIST, as used for training. Change pairs depicted on the left are positive (1 change), whereas those on the right are negative (0/2+ changes).}%
\label{fig:synex}%
\end{figure}
\textbf{MNIST}\cite{lecun1998gradient} is used to evaluate our method in a more realistic setting, \ie with noisy supervision.
For ease of implementation, all images are padded to $32\times32$ pixels.
No ground-truth concepts are available for MNIST. Consequently, we can only evaluate methods for which we can approximate the required supervision, and cannot evaluate metrics requiring ground-truth concept labels. %
To create change pairs, images are augmented according to the notion that the concepts we reason with are continuous lines. Digits are reduced to individual lines and pixels are clustered according to these lines (we refer to the supplementary material for details). Using this line split, pairs are created that exhibit 1 (positive) or 0/2+ (negative) line changes. We create as many augmented pairs as there are training datapoints: \num{60000}. Examples of MNIST datapoints and change pairs are depicted in Figs.~\ref{fig:mnistd} and~\ref{fig:mnistcp}. Creating a labeling of line types is a significantly more challenging task than augmenting individual images to create change pairs. As such, we do not consider methods requiring such supervision when evaluating MNIST.
\subsection{Considered evaluations}\label{ss:s:eval}
\textbf{Explanation alignment cost (\textit{eac})}. To the best of our knowledge there is no method for quantitatively evaluating explanations of our defined structure. As such, we introduce the explanation alignment cost ($eac$). The $eac$ seeks to quantify the quality of a contrastive explanation based on a pair of datapoints $a$ and $b$ as input. The explanation consists of an interpolation starting at datapoint $a$, gradually transitioning to the \textit{class-relevant} concepts of $b$ (\ie the final state of the transition is not necessarily identical to $b$). Each step should indicate a single concept being changed. %
A candidate explanation for ($a, b$) is evaluated according to the cost of aligning it to a ground-truth explanation. We define a ground-truth explanation as a minimum length sequence starting at $a$, with each subsequent state changing a single concept from $a$ to $b$,
with no other changes. The last state depicts a datapoint with all class-relevant concepts from $b$ and the remaining information from $a$.
The alignments we identify must map every state in the candidate explanation to at least one state in the ground-truth explanation, and vice versa. Additionally, we constrain this mapping such that both aligned sequences are increasing. Such an alignment can be computed using Dynamic Time Warping (DTW)\cite{senin2008dynamic} in $O(nm)$ time (with $n$ and $m$ denoting the lengths of the explanations). We compute the cost of each individual state-to-state mapping as the per-pixel squared error plus a small constant that discourages (empty) repetitions in the alignment: $(x_c - x_t)^2 + \epsilon$,
with $x_c$ and $x_t$ as states of the candidate and true explanation, and $\epsilon = .001$.
We compute this cost for all possible ground-truth explanations ($n!$ orders, given $n$ concepts to change) and take the minimum alignment cost as the $eac$. For evaluating the $eac$ on the synthetic data, we compute the $eac$ for 90 generated ($a, b$) pairs and report the average $eac$.%
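A minimal sketch of the computation behind the $eac$: a standard DTW recurrence over candidate and ground-truth state sequences, with the per-state cost $(x_c - x_t)^2 + \epsilon$ averaged per pixel, minimized over all $n!$ concept orders. States are flattened images represented here as plain lists; the function names and the additive `concept_deltas` encoding of concept changes are illustrative assumptions.

```python
from itertools import permutations

EPS = 0.001  # constant discouraging (empty) repetitions in the alignment

def state_cost(xc, xt):
    # per-pixel squared error plus the repetition-discouraging constant
    return sum((c - t) ** 2 for c, t in zip(xc, xt)) / len(xc) + EPS

def dtw_cost(cand, true):
    """O(n*m) DTW alignment cost between two state sequences."""
    n, m = len(cand), len(true)
    D = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = state_cost(cand[i - 1], true[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def eac(candidate, start, concept_deltas):
    """Minimum DTW cost over all n! orders of applying the concept changes."""
    best = float("inf")
    for order in permutations(range(len(concept_deltas))):
        state, truth = list(start), [list(start)]
        for k in order:  # each ground-truth step changes exactly one concept
            state = [s + d for s, d in zip(state, concept_deltas[k])]
            truth.append(list(state))
        best = min(best, dtw_cost(candidate, truth))
    return best

# A candidate identical to one ground-truth order pays only the eps terms.
deltas = [[1, 0, 0, 0], [0, 0, 1, 0]]
perfect = [[0, 0, 0, 0], [1, 0, 0, 0], [1, 0, 1, 0]]
assert abs(eac(perfect, [0, 0, 0, 0], deltas) - 3 * EPS) < 1e-9
```

Since every state-to-state cost is at least $\epsilon$, a perfect candidate still incurs $\epsilon$ per aligned step, which is what penalizes needlessly long explanations.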
\textbf{Representation quality metrics}. Additionally, we explore the (adverse) effects of the conditioning methods on the learned representations. To quantify concept-disentanglement, the mutual information gap ($mig$)\cite{chen2018isolating} is used. We estimate the $mig$ for the class concepts in $z_y$ following the same procedure as \cite{locatello2019challenging}. The \textit{ELBO} metrics are also evaluated, denoted as $rec$ (reconstruction error), $kl_y$, and $kl_x$ (KL divergences of the subspaces). The classification accuracy, using the learned distribution $q_{{\psi{_y}}}(y|z_y)$, is denoted as $acc$. Finally, we evaluate the disentanglement of the latent subspaces \wrt class information, by training logistic regression classifiers on the latent space embeddings. Their accuracies are denoted as $l$-$acc_y$ and $l$-$acc_x$. %
\input{content/figure/only}
\textbf{Other evaluations}. We evaluate the exemplar identification by checking whether datapoints with more common concepts are more likely to be chosen. Class 9 has 2 variations, of which one variant has more concepts in common with classes 7 and 8. The remaining classes have the same number of concepts in common with both variants. We query for exemplars using 2000 test samples and compare the probability of selecting the more common variant using classes 7 and 8 to the probability when using other classes. Also, we qualitatively analyze the explanations, using both single datapoints to explain and input pairs to contrast. %
\subsection{Comparison overview}\label{ss:s:comp}
We compare VAE-CE to methods with similar capabilities, staying within the domain of VAE-based representation methods. The model described in \S\ref{ss:m:base} forms the baseline. We compare a set of alternative approaches to regularizing the $z_y$-space, alongside other interpolation approaches.
\textbf{Concept-disentanglement methods}. For each disentanglement approach we denote how we refer to it, alongside a short summary of its regularization procedure and supervision. Some details differ from the original approaches, as we adapt them to disentangle single dimensions and to allow comparison of different types of supervision.
\textbf{DVAE} denotes the baseline model (\S\ref{ss:m:base}). \textbf{LVAE} denotes an extension of label-based disentanglement as described in \S\ref{ss:m:base}. For each $z_y$-dimension a label is provided indicating whether a concept is present. Each dimension is disentangled by two auxiliary classifiers, one predicting the label from the dimension value and one predicting the label from the remaining $z_y$ dimensions. The latter objective's gradients are reversed for the encoders. \textbf{GVAE} denotes an adaptation of \cite{hosoya2019group} using pairs of datapoints with (at least) one specified matching concept. The inferred values for the $z_y$-dimension corresponding to this concept are averaged out, forcing this information to be shared through optimizing the \textit{ELBO}. \textbf{ADA-GVAE} denotes an adaptation of \cite{locatello2020weak} that uses positive change pairs as supervision, allowing us to compare to a method using similar supervision. Training is done using pairs of datapoints that differ in a single concept. We infer latent dimensions for both datapoints and average all but one dimension between the pair. The independent dimension is selected as the dimension with the highest KL divergence (between the pair). Optimization is again done using the \textit{ELBO}. \textbf{VAE-CE} denotes our method (\S\ref{s:m}).
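The ADA-GVAE-style averaging step can be sketched as follows. The posterior means and variances stand in for encoder outputs, and the diagonal-Gaussian KL used to select the changed dimension follows the description above; everything else (names, averaging of means only) is a simplifying assumption for illustration.

```python
import math

def kl_gauss(m1, v1, m2, v2):
    # KL divergence between univariate Gaussians N(m1, v1) and N(m2, v2)
    return math.log(math.sqrt(v2 / v1)) + (v1 + (m1 - m2) ** 2) / (2 * v2) - 0.5

def ada_average(mu_a, var_a, mu_b, var_b):
    """Average all latent dimensions except the one with the largest KL.

    With a positive change pair (one concept differs), the most-divergent
    dimension is taken to carry the change and is left independent.
    """
    kls = [kl_gauss(ma, va, mb, vb)
           for ma, va, mb, vb in zip(mu_a, var_a, mu_b, var_b)]
    changed = max(range(len(kls)), key=kls.__getitem__)
    out_a, out_b = list(mu_a), list(mu_b)
    for i in range(len(mu_a)):
        if i != changed:  # share information by averaging across the pair
            out_a[i] = out_b[i] = 0.5 * (mu_a[i] + mu_b[i])
    return out_a, out_b, changed

mu_a, mu_b = [0.0, 2.0, 0.1], [0.1, -2.0, 0.0]
var = [1.0, 1.0, 1.0]
a, b, changed = ada_average(mu_a, var, mu_b, var)
assert changed == 1  # dimension 1 diverges most and stays independent
```

The averaged codes are then decoded and trained with the usual \textit{ELBO}, which is what forces the shared dimensions to encode the shared concepts.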
\textbf{Model implementations}. All methods share the same encoder and decoder architecture, and have a dimensionality of 8 for both $z_x$ and $z_y$. Hyperparameters are optimized using the $eac$ on a validation set of explanation pairs using synthetic data. As this cannot be evaluated for MNIST we use the same hyperparameters as chosen for the synthetic data; this approach resulted in reasonable models since the synthetic data was designed to share characteristics with MNIST. For details on architectures, training, and hyperparameters we refer to the supplementary material.
\textbf{Interpolation methods}. To evaluate the graph-based explanation approach, we also consider two naïve approaches to creating explanations. First, a smooth interpolation (denoted as $sm$), where each intermediate state of $z_y$ is a convex combination of $z_{y_a}$ and $z_{y_b}$. All dimensions are adjusted at once according to a predefined number of steps, in equal proportion for each step. We use five interpolation states. Second, a dimension-wise interpolation (denoted as $dim$), where we identify significantly differing dimensions with a simple heuristic: $|z_{y_{a_i}} - z_{y_{b_i}}| > 1$ (the $\sigma$ of the prior). All significantly different dimensions are changed one at a time, in arbitrary order. The non-significant dimensions are changed at once, in the first step. Finally, for the graph-based interpolation (denoted as $graph$), we use explanation parameters $t=.95$ (exemplar threshold), $\alpha=.5$ (realism), $\beta=1$ (change quality), and $\gamma=1$ (normalization).
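The two naïve interpolation baselines can be sketched directly. Latent codes are plain vectors here; the threshold of 1 (the prior's $\sigma$) and the five-step count follow the text, while the function names and the arbitrary (index) order of significant dimensions are assumptions.

```python
def smooth_interp(za, zb, steps=5):
    """sm: convex combinations of z_a and z_b, all dimensions moving at once."""
    out = []
    for s in range(steps + 1):
        t = s / steps
        out.append([(1 - t) * a + t * b for a, b in zip(za, zb)])
    return out

def dim_interp(za, zb, threshold=1.0):
    """dim: change significantly-differing dimensions one at a time."""
    sig = [i for i, (a, b) in enumerate(zip(za, zb)) if abs(a - b) > threshold]
    # first step: all non-significant dimensions are changed at once
    cur = [a if i in sig else b for i, (a, b) in enumerate(zip(za, zb))]
    out = [list(cur)]
    for i in sig:  # then one significant dimension per step, in index order
        cur[i] = zb[i]
        out.append(list(cur))
    return out

za, zb = [0.0, 0.0, 0.2], [3.0, -2.0, 0.0]
states = dim_interp(za, zb)
assert len(states) == 3 and states[-1] == zb
```

The graph-based interpolation replaces these heuristics with a shortest-path search scored by realism and change quality, as described in \S\ref{s:m}.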
\input{content/table/repr}
\section{Introduction}
\input{content/introduction}
\section{Related work}
\input{content/figure/method_fig} %
\input{content/related}
\section{Method: VAE-CE}\label{s:m}
\input{content/method}
\section{Experimental setup}
\input{content/setup}
\section{Results}
\input{content/results}
\section{Conclusions}
\input{content/conclusion}
\newpage
{\small
\bibliographystyle{ieee_fullname}
}
\section{Change Discriminator ($\bf{{CD}}$)}
\input{content/cd}
\section{Interpolation-graph complexity}
\input{content/graph}
\section{Synthetic data generation}
\input{content/syn}
\section{MNIST line-augmentation}
\input{content/mnist}
\section{Model architecture}
\input{content/model}
\section{Hyperparameters and training}
\input{content/params}
\newpage
{\small
\bibliographystyle{ieee_fullname}
}
WhatsApp is a social messaging application that can be used on Android smartphones as well as other devices. It is well designed and programmed by expert software professionals, and its most notable trait is a simple yet effective graphical user interface. WhatsApp has become popular among users across the globe for chatting with their near and dear ones. At its core, it allows you to send and receive pictures, audio, video, contacts, and text messages, using your internet connection. If you have been using WhatsApp for a long time, a history of all your chats will be stored on your Samsung Galaxy S3, and you may have reached a point where the accumulated data slows down both your phone and the WhatsApp application. If you are facing this problem and looking for a reliable solution, do not panic. Here is the recommended software, known as Remo MORE, that can answer questions such as "How do I delete WhatsApp chat history on Samsung Galaxy S3?" or "Is there any Samsung Galaxy S3 WhatsApp chat history cleaner?" Before we discuss its features, let me walk you through the problems caused by WhatsApp chat history.
What are the problems caused by WhatsApp chat history on Samsung Galaxy S3?
There are several problems you may face due to the accumulation of chat history on your Samsung Galaxy S3. One of them is a shortage of memory space. All chat data is stored in your smartphone's memory; this memory is limited, and if it is consumed by chat history, you might run out of space for other useful things on your smartphone. Hence, if you want to reclaim memory space on your Samsung Galaxy, you need to clean the WhatsApp history on your Samsung Galaxy S3.
Apart from memory issues, WhatsApp chat history is also responsible for the slow speed of your Samsung Galaxy S3 and affects the performance of the WhatsApp application. This situation irritates users, and they start looking for how to delete WhatsApp chat history on Samsung Galaxy S3. If you are also a victim of this unpleasant situation, stop worrying and employ Remo MORE. This tool is known as the best Samsung Galaxy S3 WhatsApp chat history cleaner and can easily delete WhatsApp chat history on a Samsung Galaxy S3 phone without much effort.
How to delete WhatsApp chat history on Samsung Galaxy S3?
This question stirs in the minds of many WhatsApp users, and the answer has been produced by a team of veteran software developers in the form of Remo MORE. Based on user feedback and my personal experience, I can say that Remo MORE is the best Samsung Galaxy S3 WhatsApp chat history cleaner among its contemporaries. It comes with a friendly user interface that makes it easy to handle. Remo MORE is completely free of cost, and you can get it from the internet in just a few clicks.
NutriDay Gold tablets are a powerful antioxidant formulation that helps promote growth and maintenance. This skin-care product can do a lot for the health and appearance of your skin, including reducing the signs of aging. It promotes immunity, prevents anemia, and supports a healthy life.
Pregnical tablets are specially formulated to meet the demands of pregnant women; they are important for building and keeping strong bones and also treat bone problems during pregnancy and after delivery. They maintain healthy bones and teeth. Pregnical is used to prevent neural tube defects in the baby from the first trimester, while also boosting the immune system and improving the growth and brain development of the baby. An antenatal mother can therefore take Pregnical immediately after confirmation of pregnancy.
# Math Help - Simplifying help

1. ## Simplifying help

(x^3 - 3x^2 + 3x - 9) / (x^4 - 81)

I can't seem to get this answer right.

2. Assuming your function is

$\frac{x^3-3x^2+3x-9}{x^4-81}$

you need to factor both the numerator and the denominator to find any cancelling expressions.

The denominator factors as $x^4-81 = (x^2)^2-9^2 = (x^2+9)(x^2-9) = (x^2+9)(x-3)(x+3)$.

(This can be factored further using complex numbers; is that a requirement?)

Do you know the factor theorem? You will need it to factor the numerator. Try to find $f(a) = 0$ where $a$ is an integer factor of 9.

3. Thanks. I got that far, but how am I supposed to factor the numerator, and how am I allowed to cancel?

4. $\frac{x^3-3x^2+3x-9}{x^4-81} = \frac{x^3-3x^2+3x-9}{(x^2+9)(x^2-9)} = \frac{x^3-3x^2+3x-9}{(x^2+9)(x+3)(x-3)}$

This uses the difference-of-squares identity, $a^2-b^2 = (a+b)(a-b)$, on the denominator.

$\frac{x^2(x-3)+3(x-3)}{(x^2+9)(x+3)(x-3)} = \frac{(x^2+3)(x-3)}{(x^2+9)(x+3)(x-3)} = \frac{x^2+3}{(x^2+9)(x+3)}$

This uses $ax+ay = a(x+y)$: $x^3-3x^2$ can be divided by $x^2$ and $3x-9$ by 3, giving $x^2(x-3)+3(x-3)$. Apply the rule again with $(x-3)$ as the common factor, then divide numerator and denominator by $(x-3)$.

5. The numerator can be factored by grouping: group $x^3$ with $-3x^2$ and $3x$ with $-9$, take the common factor from each to get $x^2(x-3) + 3(x-3)$, then regroup to get $(x^2+3)(x-3)$. The denominator is $(x-3)(x+3)(x^2+9)$, so cancelling $(x-3)$ on top and bottom leaves

$\frac{x^2+3}{(x+3)(x^2+9)}$

6. Thank you, that helps so much.
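The simplification can be sanity-checked numerically by evaluating the original and simplified expressions at a few points (avoiding x = 3 and x = -3, where the original is undefined):

```python
def original(x):
    return (x**3 - 3*x**2 + 3*x - 9) / (x**4 - 81)

def simplified(x):
    return (x**2 + 3) / ((x**2 + 9) * (x + 3))

# Agreement everywhere both are defined confirms the factoring was correct.
for x in [-5, -1, 0, 1, 2, 4, 10]:
    assert abs(original(x) - simplified(x)) < 1e-12
```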
# A company plans to assign identification numbers to its employees (GMAT Prep)

A company plans to assign identification numbers to its employees. Each number is to consist of four different digits from 0 to 9, inclusive, except that the first digit cannot be 0. How many different identification numbers are possible?

A. 3,024
B. 4,536
C. 5,040
D. 9,000
E. 10,000

OA: B.

GMAT instructor (Brent, GMAT Prep Now): Take the task of creating the identification numbers and break it into stages.

Stage 1: select the first digit. It can be 1 through 9, so 9 ways.
Stage 2: select the second digit. It can be any digit from 0 to 9 other than the digit chosen in stage 1, so 9 ways.
Stage 3: select the third digit. Any digit other than the two already chosen: 8 ways.
Stage 4: select the fourth digit. Any digit other than the three already chosen: 7 ways.

By the Fundamental Counting Principle (FCP), all four stages can be completed in (9)(9)(8)(7) = 4,536 ways.

GMAT instructor (Fabio, GMATH): Immediate application of the multiplicative principle: $9 \cdot 9 \cdot 8 \cdot 7 = 9^2 \cdot 8 \cdot 7$ (first digit not 0, no repetition afterwards). A shortcut: taking units digits, $\langle 9^2 \rangle \cdot \langle 8 \cdot 7 \rangle = 1 \cdot 6 = 6$, and only one answer choice has units digit 6, so we are done.

GMAT instructor (Scott, Target Test Prep): There are 9 choices for the first digit (1 through 9, inclusive). The second digit can be any of the 10 digits except the first, so 9 options. The third cannot repeat either of the first two: 8 options. The fourth cannot repeat any of the first three: 7 options. Thus the total is 9 x 9 x 8 x 7 = 4,536.
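The counting argument can be verified by brute force over all ordered selections of four distinct digits:

```python
from itertools import permutations

# Count length-4 sequences of distinct digits whose first digit is nonzero.
count = sum(1 for p in permutations(range(10), 4) if p[0] != 0)
assert count == 9 * 9 * 8 * 7 == 4536
```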
"Karpov Chess" («Шахматы Карпова») is a Russian company specializing in the production of exclusive chess sets. Its founder is the twelfth world chess champion, Anatoly Yevgenyevich Karpov.
History
The company "Karpov Chess" was founded in 2003 and produces exclusive chess sets. Its workshop employs bone carvers, sculptors, and specialists in the history of costume. To ensure historical accuracy, the company collaborates with historians from Moscow University.
Products
"Karpov Chess" produces and sells exclusive chess sets, accessories, and souvenirs. The principal material is woolly-mammoth tusk; other materials include dinosaur bone, bronze, amber, silver, and valuable woods. Chess sets on historical themes, carved from mammoth tusk and bearing Anatoly Karpov's autograph and personal seal, are held in various museums around the world. For the congress of the international organization of chess collectors in Madrid, a one-of-a-kind chess set in the style of Gaudí was made, with pieces in the form of buildings designed by the great Spanish architect. "Passions for Rodin", honoring the great French sculptor and featuring his "Thinker" as the black king, was produced in Karpov's workshop in two copies, one of which was presented to French president Jacques Chirac.
No more than ten copies of each set are produced. Production time ranges from two weeks to a year. One of the most expensive "Karpov Chess" sets costs about 1 million dollars.
Karpov design "The Invincibles"
"Karpov Chess" has its own chess design, "The Invincibles" («Непобедимые»), developed and patented by company co-founder Konstantin Kurchenkov with Karpov's participation. It is the world's first patented Russian chess design. The idea is that every piece has an individual weight stabilizer built in, allowing the piece to always remain upright. Each piece is a stylized roly-poly toy ("vanka-vstanka"): none can be knocked over, symbolizing the resilience and persistence of the Russian character. One such set was presented by Karpov to the famous footballer Diego Maradona; another was given by Karpov in 2010 to grandmaster David Navara, winner of the tournament marking the 100th anniversary of Vladas Mikenas.
The PPF-250-P59XW series of encapsulated, industrial quality ac/dc power supplies deliver up to 250W output power. The units employ active power factor correction (PFC) to convert a universal ac-input voltage (95V to 264Vac) to 12Vdc/20A, 24Vdc/10A, 48Vdc/5A, 72Vdc/3.5A, 110Vdc/2.2A or 125Vdc/2.0A continuous. A DC-input version, which offers a 95-350Vdc input range, is also available. Typical conversion efficiency is 80% at full load.
The unit has large headroom and can be customized for up to 400W output power, depending on output voltage required. An optional built-in redundancy diode allows for parallel and N+1 operation.
The power factor is corrected to a minimum of 0.97 at full load for the entire input range in compliance with EN61000-3-2 for low input harmonic distortion. The unit is filtered to meet EN55022 EMI Class A with generous margins. Full electronic protection on the input and output eliminates the possibility of failure due to abnormal operating conditions, including application errors. The units are designed for compliance with EN/UL60950-1 and equivalent safety standards.
Designed for operation in extreme environments, the units are fully encapsulated with a thermally conductive MIL-grade silicone rubber compound with a UL94V-0 flammability rating. This ensures protection from high levels of shock and vibration, moisture and other contaminants. The power supplies also meet environmental criteria as stipulated in MIL-STD-810C, D. Cooling is by conduction via a base plate to a heat-sinking surface. The unit is designed for continuous operation at +70°C when installed on an appropriately sized heat-sinking surface.
The chassis measures 146 x 64 x 191 mm and is suitable for applications with space constraints; it is a full 1.7 inches shorter than the similar 300W encapsulated version.
ABSOPULSE Electronics is an original equipment manufacturer (OEM) of an extensive range of fully encapsulated industrial and railway grade power conversion solutions designed for operation in extreme conditions. Our designs can be modified to meet customer application requirements. ABSOPULSE also offers fully custom solutions.
P59XW enclosure: 146 x 64 x 191 mm (5.8″ x 2.5″ x 7.5″).
Copyright © ABSOPULSE Electronics Ltd.
List of senators of Haute-Loire
Third Republic
Balthazar Jacotin from 1876 to 1878
Edmond du Motier de La Fayette from 1876 to 1890
Ernest Vissaguet from 1879 to 1920
Clément Allemand from 1891 to 1900
Charles Dupuy from 1900 to 1923
Louis Devins from 1913 to 1917
Auguste Foulhy from 1920 to 1924
Francisque Enjolras from 1920 to 1933
Régis Martin-Binachon from 1924 to 1938
Édouard Néron from 1924 to 1940
Julien Fayolle from 1933 to 1935
Laurent Eynac from 1935 to 1940
Joseph Antier from 1938 to 1940
Fourth Republic
Jean de Lachomette from 1948 to 1959
Paul Chambriard from 1946 to 1959
Fifth Republic
Laurent Duplomb since 2017
Olivier Cigolotti (UDI) since 2015
Gérard Roche (DVD) from 2011 to 2017
Jean Boyer (Union centriste) from 2001 to 2014 (resigned)
Adrien Gouteyron (UMP) from 1978 to 2011
Guy Vissac from 1998 to 2001
Régis Ploton from 1996 to 1998
Jean-Paul Chambriard from 1983 to 1996
René Chazelle from 1974 to 1983
Jean Proriol from 1974 to 1978
Robert Bouvard from 1959 to 1974
Jean de Lachomette from 1959 to 1974
Haute-Loire
Senators
\section{Introduction}
This note continues recent work in \cite{MSAS1} concerning certain families of polynomials connected with approximation in spaces of analytic functions, and orthogonal polynomials in weighted spaces. In the paper \cite{MSAS1}, we discussed the notion of {\it optimal approximants} to $1/f$ for a holomorphic function $f$ belonging to a Hilbert function space in $\mathbb{C}^n$, and pointed out connections with orthogonal polynomials in certain weighted spaces, with weight determined by the same target function $f$. We presented some elementary examples of optimal approximants and orthogonal polynomials in several variables, and to obtain concrete closed-form representations of these objects, we relied on one-variable results together with suitable transformations.
In this note, we present a further family of examples of weighted orthogonal polynomials and optimal approximants in several variables. We use a direct, elementary approach to go beyond cases that admit easy reduction to essentially one-variable problems. For simplicity, we focus on two variables, the target function $f=1-\frac{1}{\sqrt{2}}(z_1+z_2)$, and a scale of spaces of functions in the unit ball $\mathbb{B}^2=\{(z_1,z_2)\in\mathbb{C}^2\colon |z_1|^2+|z_2|^2<1\}$,
but some of our arguments potentially extend to higher dimensions, at the price of more cumbersome notation and more involved proofs.
We consider a scale of reproducing kernel Hilbert spaces that have recently featured in work of Richter and Sunkes \cite{RicSun16}. For further background on this kind of spaces, see for instance \cite{Zhu,CSW11,CHZ18} and the references therein. Fix $\gamma>0$ and let $\mathcal{H}_\gamma$ denote the reproducing kernel Hilbert space in $\mathbb{B}^d$ associated with the reproducing kernel
\[k_{\gamma}(z;w)=\frac{1}{\brkt{1-\ip{z,w}}^\gamma }, \quad z,w\in\mathbb{B}^d.\]
The $\mathcal{H}_{\gamma}$ include well-known spaces such as the {\it Drury-Arveson space} ($H^2_d=\mathcal{H}_1$), the {\it Hardy space} of $\mathbb{B}^d$ ($H^2(\partial \mathbb{B}^d)=\mathcal{H}_d$), and the {\it Bergman space} of $\mathbb{B}^d$ ($A^2(\mathbb{B}^d)=\mathcal{H}_{d+1}$). In two variables, the norm in $\mathcal{H}_{\gamma}$ of an analytic function $f=\sum_{m=0}^\infty \sum_{n=0}^\infty \hat{f}(m,n)z_1^m z_2^n$
can be expressed as
\begin{equation}
\norm{f}^2_\gamma =\sum_{m=0}^\infty \sum_{n=0}^\infty a_{m,n}\abs{\hat{f}(m,n)}^2,
\end{equation}
where
\begin{equation}
a_{m,n}=
\begin{cases}
1 & m=n=0,\\
\frac{m!n!}{(\gamma+m+n-1)\cdots(\gamma+1)\cdot \gamma} & \text{otherwise.}
\end{cases}
\end{equation}
We observe that polynomials are dense in all the $\mathcal{H}_{\gamma}$, that monomials are orthogonal, and that multiplication by each coordinate function furnishes a bounded linear operator.
We now state the definition of optimal approximants; see \cite{Chui80,SecoSurvey,JentZeros19,BetalPrep2,MSAS1} for more comprehensive discussions and references.
Enumerating the monomials in two variables in some fixed way, we write $\chi_j$ for the $j$th monomial in this ordering, and set
$\mathcal{P}_n=\mathrm{span}\{\chi_j\colon j=0,\ldots, n\}$. In this note, we work with {\it degree lexicographic order}. Monomials are ordered by increasing total degree, and ties between two monomials of the same total degree are broken lexicographically. See \cite{GWsiam07,GeroJEMS14} and the references therein for background material. Explicitly, we have \[1\prec z_1 \prec z_2 \prec z_1^2\prec z_1z_2\prec z_2^2\prec z_1^3\prec z_1^2z_2\prec \cdots\,\, ,\]
so that $\chi_4=z_1z_2$, $\chi_5=z_2^2$, and so on. For pairs of natural numbers $(j,k)$ and $(m,n)$, we will take $(j,k)\prec (m,n)$ to signify that $z_1^jz_2^k\prec z_1^mz_2^n$.
\begin{definition}[Optimal approximants]
Let $f\in \mathcal{H}_{\gamma}$ be given. We define the $n$th order {\it optimal approximant} to $1/f$ in $\mathcal{H}_{\gamma}$, relative to $\mathcal{P}_n$, as
$p_n^{\ast}=\mathrm{Proj}_{f\cdot \mathcal{P}_n}[1]/f$,
where $\mathrm{Proj}_{f\cdot \mathcal{P}_n}\colon \mathcal{H}_{\gamma}\to f\cdot \mathcal{P}_n$ is the orthogonal projection onto the closed subspace $f\cdot \mathcal{P}_n \subset \mathcal{H}_{\gamma}$.
\end{definition}
Given some $f\in \mathcal{H}_{\gamma}$, optimal approximants can be viewed as polynomial substitutes for the function $1/f$, the point being that $1/f$ may fall outside of $\mathcal{H}_{\gamma}$. Optimal approximants arise in several contexts, for instance cyclicity problems and filtering theory, see \cite{SecoSurvey,MSAS1}. The papers \cite{FMS14,JAM15,BetalPrep2} discuss some methods for computing optimal approximants, but closed formulas are only known in a few instances. Multi-variable examples have so far only been obtained as a consequence of one-variable results.
\begin{definition}[Weighted orthogonal polynomials]
Let $f\in \mathcal{H}_{\gamma}$ be fixed. We say that a sequence $\{\phi_j\}_{j \in \mathbb{N}} \subset \mathbb{C}[z_1,z_2]$ consists of {\it weighted orthogonal polynomials} with respect to $f$ if
$\{\phi_j\}$ is an orthogonal basis for the Hilbert space $\mathcal{H}_{\gamma,f}$ with inner product given by $\langle g,h\rangle_{\gamma,f}
\colon =\langle gf,hf \rangle_{\mathcal{H}_{\gamma}}$.
\end{definition}
There is an important connection between optimal approximants and orthogonal polynomials, as is explained in \cite{JLMS16,MSAS1}. Namely, if $\{p_n^*\}$ denote the optimal approximants to $1/f$, $f\in \mathcal{H}_{\gamma}$, and $\{\phi_n\}$ are orthogonal polynomials in the weighted space $\mathcal{H}_{\gamma,f}$, respectively, then
\begin{equation}
p_n^*(z)=\sum_{k=0}^n\langle 1, f\psi_k\rangle_{\mathcal{H}_{\gamma}} \psi_k(z),
\label{OAvsOGformula}
\end{equation}
where $\psi_k=\phi_k/\|\phi_k\|_{\gamma,f}$. This means that if we determine $\{\phi_k\}_k$ explicitly for some given weight $f$, then we also obtain formulas for the optimal approximants to $1/f$. Implementing this strategy in practice in $\mathcal{H}_{\gamma}$ and for the function $f=1-\frac{1}{\sqrt{2}}(z_1+z_2)$ is the goal of this note.
\section{A family of orthogonal polynomials}
We begin with an elementary lemma.
\begin{lem}\label{lem:winner_monoms}
Let $f(z_1,z_2)=1-a(z_1+z_2)$ and let $\mathcal{H}$ be a reproducing kernel Hilbert space in which the monomials are orthogonal. Consider $\mathcal{H}_f$, the space weighted by $f$ with inner product $\langle g, h\rangle_{\mathcal{H}_f} :=\langle gf , hf\rangle_{\mathcal{H}}$. For nonnegative integers $j,k,m,n$, we have
\begin{multline*}
\ip{z_1^j z_2^k,\,z_1^m z_2^n}_f=\\
\begin{cases}
\norm{z_1^j z_2^k}^2 + a^2 \norm{z_1^{j+1}z_2^k}^2 +a^2 \norm{z_1^j z_2^{k+1}}^2 &\quad
\mathrm{if}\;\;\parbox{3cm}{$m=j$, $n=k$,}\\ ~&~\\
-a\norm{z_1^j z_2^k}^2 &\quad
\mathrm{if}\;\;\parbox{3cm}{$m=j-1$, $n=k$, or \\$m=j$, $n=k-1,$}\\ ~&~\\
-a\norm{z_1^{j+1} z_2^k}^2 &\quad
\mathrm{if}\;\;\parbox{3cm}{$m=j+1$, $n=k$,}\\ ~&~\\
-a\norm{z_1^{j} z_2^{k+1}}^2 &\quad
\mathrm{if}\;\;\parbox{3cm}{$m=j$, $n=k+1$,}\\ ~&~\\
a^2\norm{z_1^{j+1} z_2^{k}}^2 &\quad
\mathrm{if}\;\;\parbox{3cm}{$m=j+1$,\\ $n=k-1$,}\\ ~&~\\
a^2\norm{z_1^{j} z_2^{k+1}}^2 &\quad
\mathrm{if}\;\;\parbox{3cm}{$m=j-1$,\\ $n=k+1$,}\\ ~&~\\
0 &\quad \mathrm{otherwise.}
\end{cases}
\end{multline*}
\end{lem}
\begin{proof}
This amounts to expanding the inner product and reading off terms.
\end{proof}
Recall the standard definition of the {\it Pochhammer symbol} for $\gamma$ real:
\[(\gamma)_n=\gamma\cdot (\gamma+1)\cdots (\gamma+n-1), \quad n\geq 0.\]
\begin{thm}\label{thm:closed_form}
In $\mathcal{H}_\gamma$, weighted by $f(z_1,z_2)=1-\frac{\sqrt{2}}{{2}}\brkt{z_1+z_2}$, let $\phi_{j,k}$ be the first orthogonal polynomial containing $z_1^j z_2^k$ (with respect to degree lexicographic order). Then $\phi_{j,k}$ has the form
\begin{equation}\label{eq:whichterms}
\phi_{j,k}(z_1,z_2) = \sum_{m=0}^j \sum_{n=0}^k \hat\phi_{j,k}(m,n) z_1^m z_2^n
\end{equation}
where the coefficients $\hat\phi_{j,k}(m,n)$ are given by
\begin{equation}\label{eq:closedcoeffs}
\hat\phi_{j,k}(m,n) = \brkt{\frac{\sqrt{2}}{2}}^{j+k-m-n}
\frac{(\gamma)_{m+n+1}}{(\gamma)_{j+k+1}}
\brkt{ \frac{j!k!}{m!n!} \cdot\frac{\brkt{j+k-m-n}!}{\brkt{j-m}!\brkt{k-n}!} }.
\end{equation}
Moreover,
\begin{align}
\norm{\phi_{j,k}}^2_f &= \frac{\gamma+j+k+1}{\gamma+j+k}\cdot \frac{j!k!}{(\gamma)_{j+k}}.
\label{eq:closednorms}
\end{align}
\end{thm}
\begin{proof}
We shall prove this using strong induction. First, $\phi_{0,0}(z_1,z_2)=1$, and by Lemma \ref{lem:winner_monoms},
\begin{align*}
\norm{\phi_{0,0}}^2_f
= \norm{1}^2_f
&= \norm{1}^2 + \brkt{\frac{\sqrt{2}}{2}}^2\norm{z_1}^2 + \brkt{\frac{\sqrt{2}}{2}}^2\norm{z_2}^2\\
&= 1 + \frac{1}{2\gamma} + \frac{1}{2\gamma}= \frac{\gamma+1}{\gamma}
\end{align*} as needed.
Now consider $\phi_{j,k}$ and assume that for all $(J,K)\prec(j,k)$, the polynomial $\phi_{J,K}$ has the desired form, coefficients, and norm. Using the Gram-Schmidt algorithm,
\begin{equation}\label{eq:gramschmidt}
\phi_{j,k}(z_1,z_2)= z_1^jz_2^k - \sum_{(J,K)\prec(j,k)} \frac{ \ipf{z_1^jz_2^k,\,\phi_{J,K}} }{\norm{\phi_{J,K}}_f^2}\phi_{J,K}.
\end{equation}
Each $\phi_{J,K}$ has the form \eqref{eq:whichterms}, and by Lemma \ref{lem:winner_monoms}, we see that there are only three $\phi_{J,K}$ with $(J,K)\prec(j,k)$ that give a non-zero inner product: $\phi_{j,k-1}$, $\phi_{j-1,k}$, and $\phi_{j+1,k-1}$. Noting that $\hat\phi_{J,K}(J,K)=1$ and applying Lemma \ref{lem:winner_monoms} gives that
\begin{align}
\ipf{z_1^jz_2^k,\,\phi_{j,k-1}} &=\ipf{z_1^jz_2^k,\,z_1^jz_2^{k-1}} = -\frac{\sqrt{2}}{2} \frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma}\label{eq:iplessk}\\
\ipf{z_1^jz_2^k,\,\phi_{j-1,k}} &=\ipf{z_1^jz_2^k,\,z_1^{j-1}z_2^{k}} = -\frac{\sqrt{2}}{2} \frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma}\label{eq:iplessj}\\
\ipf{z_1^jz_2^k,\,\phi_{j+1,k-1}}&=\ipf{z_1^jz_2^k,\, z_1^{j+1}z_2^{k-1} + \hat\phi_{j+1,k-1}(j,k-1)z_1^jz_2^{k-1}}\label{eq:cancels}\\
&=\ipf{z_1^jz_2^k,\,z_1^{j+1}z_2^{k-1}}\nonumber\\
&\qquad+ \hat\phi_{j+1,k-1}(j,k-1)\ipf{z_1^jz_2^k,\, z_1^jz_2^{k-1}}.\nonumber
\end{align}
The right hand side of \eqref{eq:cancels} is equal to zero: by Lemma \ref{lem:winner_monoms},
\begin{equation}
\ipf{z_1^jz_2^k,z_1^{j+1}z_2^{k-1}} = \frac{1}{2}\frac{(j+1)!k!}{(\gamma+j+1+k-1)\cdots(\gamma+1)\cdot \gamma},
\end{equation}
and by the inductive hypothesis about the norm of $\phi_{j+1,k-1}$ and Lemma \ref{lem:winner_monoms},
\begin{align}
\hat\phi_{j+1,k-1}(j,k-1)\ipf{z_1^jz_2^k,z_1^jz_2^{k-1}}
&= \frac{\sqrt{2}}{2} \frac{j+1}{\gamma+j+k}\cdot \brkt{-\frac{\sqrt{2}}{2}}\frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma}\nonumber\\
&= -\frac{1}{2}\frac{(j+1)!k!}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}.
\label{cancellation}
\end{align}
Because of this cancellation, which is the key to obtaining the form \eqref{eq:whichterms}, the only preceding orthogonal polynomials that contribute terms to $\phi_{j,k}$ are $\phi_{j,k-1}$ and $\phi_{j-1,k}$, so we have
\begin{align*}
\phi_{j,k}(z_1,z_2) &= z_1^jz_2^k - \frac{ \ipf{z_1^jz_2^k,\,\phi_{j,k-1}} }{\norm{\phi_{j,k-1}}_f^2}\phi_{j,k-1} - \frac{ \ipf{z_1^jz_2^k,\,\phi_{j-1,k}} }{\norm{\phi_{j-1,k}}_f^2}\phi_{j-1,k}\nonumber\\
&= z_1^jz_2^k +\frac{\sqrt{2}}{2} \frac{j!k!}{(\gamma)_{j+k}}\brkt{\frac{1}{\norm{\phi_{j,k-1}}_f^2}\phi_{j,k-1} + \frac{1}{\norm{\phi_{j-1,k}}_f^2}\phi_{j-1,k}}.\nonumber\\
\end{align*}
Using the inductive hypothesis about the norms and simplifying, we obtain
\begin{align}
\phi_{j,k}(z_1,z_2)= z_1^jz_2^k+\frac{\sqrt{2}}{2}\frac{1}{\gamma+j+k}\brkt{k\phi_{j,k-1}+j\phi_{j-1,k}}. \label{eq:recursive}
\end{align}
This recursive formula can now be used to recover individual coefficients $\hat\phi_{j,k}(m,n)$ using the coefficients $\hat\phi_{j,k-1}(m,n)$ and $\hat\phi_{j-1,k}(m,n)$. We know that $\hat\phi_{j,k}(j,k)=1$, and in the case where $m=j$ (or, similarly, where $n=k$) we have $\hat\phi_{j-1,k}(j,n)=0$ (similarly, $\hat\phi_{j,k-1}(m,k)=0$). Let us first consider the case where $m=j$ and $n=0,1,\dots,k-1$, noting that the case where $n=k$ and $m=0,1,\dots,j-1$ proceeds similarly:
\begin{align*}
\hat\phi_{j,k}(j,n)
&= \frac{\sqrt{2}}{2}\frac{1}{\gamma+j+k} \brkt{k\hat\phi_{j,k-1}(j,n) +j \hat\phi_{j-1,k}(j,n) } \\
&= \frac{\sqrt{2}}{2}\frac{1}{\gamma+j+k}\brkt{\frac{\sqrt{2}}{2}}^{j+k-1-j-n}
\frac{(\gamma+j+n)\cdots(\gamma+1)\cdot \gamma}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma} \\
&\qquad\cdot\left(
k \brkt{\frac{j!(k-1)!}{j!n!} \frac{\brkt{j+k-1-j-n}!}{\brkt{j-j}!\brkt{k-1-n}!}}
\right) \\
&= \brkt{\frac{\sqrt{2}}{2}}^{k-n}\frac{(\gamma+j+n)\cdots(\gamma+1)\cdot \gamma}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}
\cdot\frac{k!}{n!}\cdot
\frac{\brkt{k-1-n}!}{\brkt{k-1-n}!} \\
&= \brkt{\frac{\sqrt{2}}{2}}^{k-n}\frac{(\gamma+j+n)\cdots(\gamma+1)\cdot \gamma}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}
\cdot\frac{k!}{n!}
\end{align*}
and this is what is obtained from substituting $m=j$ in \eqref{eq:closedcoeffs}.
Now we consider the case where $n<k$ and $m<j$:
\begin{align*}
\hat\phi_{j,k}(m,n)
&= \frac{\sqrt{2}}{2}\frac{1}{\gamma+j+k} \brkt{k\hat\phi_{j,k-1}(m,n) +j \hat\phi_{j-1,k}(m,n) } \\~\\
&= \frac{\sqrt{2}}{2}\frac{1}{\gamma+j+k}\brkt{\frac{\sqrt{2}}{2}}^{j+k-1-m-n}
\frac{(\gamma+m+n)\cdots(\gamma+1)\cdot \gamma}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma} \\
&\qquad\cdot\left(
k \brkt{\frac{j!(k-1)!}{m!n!} \frac{\brkt{j+k-1-m-n}!}{\brkt{j-m}!\brkt{k-1-n}!}}\right. \\
&\left.\qquad\qquad+
j\brkt{\frac{(j-1)!k!}{m!n!} \frac{\brkt{j-1+k-m-n}!}{\brkt{j-1-m}!\brkt{k-n}!}}
\right) \\~\\
&= \brkt{\frac{\sqrt{2}}{2}}^{j+k-m-n}\frac{(\gamma+m+n)\cdots(\gamma+1)\cdot \gamma}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}
\cdot\frac{j!k!}{m!n!} \\
&\qquad\cdot\left(
\frac{\brkt{j+k-1-m-n}!(k-n)+\brkt{j-1+k-m-n}!(j-m)}{\brkt{j-m}!\brkt{k-n}!}
\right) \\~\\
&= \brkt{\frac{\sqrt{2}}{2}}^{j+k-m-n}
\frac{(\gamma+m+n)\cdots(\gamma+1)\cdot \gamma}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma} \cdot\frac{j!k!}{m!n!} \cdot
\frac{\brkt{j+k-m-n}!}{\brkt{j-m}!\brkt{k-n}!},
\end{align*}
as needed.
All that remains is to establish \eqref{eq:closednorms}. We use the recursive form \eqref{eq:recursive} and expand the inner product:
\begin{align*}
\ipf{\phi_{j,k},\,\phi_{j,k}}
&= \ipf{z_1^jz_2^k,z_1^jz_2^k} + \frac{\sqrt{2}}{2} \frac{k}{\gamma+j+k}\ipf{z_1^jz_2^k,\phi_{j,k-1}} \\
&\quad+ \frac{\sqrt{2}}{2} \frac{j}{\gamma+j+k}\ipf{z_1^jz_2^k,\phi_{j-1,k}} + \frac{\sqrt{2}}{2} \frac{k}{\gamma+j+k}\ipf{\phi_{j,k-1},z_1^jz_2^k}\\
&\quad+ \frac{\sqrt{2}}{2} \frac{j}{\gamma+j+k}\ipf{\phi_{j-1,k},z_1^jz_2^k} +\frac{1}{2}\frac{k^2}{(\gamma+j+k)^2}\normf{\phi_{j,k-1}}^2 \\
&\quad+ \frac{1}{2}\frac{kj}{(\gamma+j+k)^2}\ipf{\phi_{j,k-1},\phi_{j-1,k}} +\frac{1}{2}\frac{jk}{(\gamma+j+k)^2}\ipf{\phi_{j,k-1},\phi_{j-1,k}}\\
&\quad+ \frac{1}{2}\frac{j^2}{(\gamma+j+k)^2}\normf{\phi_{j-1,k}}^2.
\end{align*}
Substituting the inductive values of the norms, \eqref{eq:iplessk}, \eqref{eq:iplessj}, and recalling that the $\phi$ are orthogonal, we obtain
\begin{align*}
\ipf{\phi_{j,k},\,\phi_{j,k}}&= \frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma} \\&\quad+ \frac{1}{2}\frac{(j+1)!k!}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}\\&\quad+ \frac{1}{2}\frac{j!(k+1)!}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma} \\
&\quad+\sqrt{2} \frac{k}{\gamma+j+k}\brkt{-\frac{\sqrt{2}}{2} \frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma}} \\
&\quad+\sqrt{2} \frac{j}{\gamma+j+k}\brkt{-\frac{\sqrt{2}}{2} \frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma}}\\
&\quad+ \frac{1}{2}\frac{k^2}{(\gamma+j+k)^2}\frac{\gamma+j+k}{(\gamma+j+k-1)}\frac{j!(k-1)!}{(\gamma+j+k-2)\cdots(\gamma+1)\cdot\gamma}\\
&\quad+ \frac{1}{2}\frac{j^2}{(\gamma+j+k)^2}\frac{\gamma+j+k}{(\gamma+j+k-1)}\frac{(j-1)!k!}{(\gamma+j+k-2)\cdots(\gamma+1)\cdot\gamma},
\end{align*}
and simplifying yields
\begin{align*}
\ipf{\phi_{j,k},\,\phi_{j,k}} &= \frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma} +\frac{j!k!}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}\brkt{\frac{j+1}{2}+\frac{k+1}{2}}\\
&\quad+ \frac{j!k!}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}\brkt{-k-j} +\frac{j!k!}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}\brkt{\frac{j}{2}+\frac{k}{2}}\\
&= \frac{j!k!}{(\gamma+j+k)\cdots(\gamma+1)\cdot \gamma}\brkt{\gamma+j+k+\frac{j+1}{2}+\frac{k+1}{2}-j-k+\frac{j}{2}+\frac{k}{2}}\\
&=\frac{j!k!}{(\gamma+j+k-1)\cdots(\gamma+1)\cdot \gamma}\cdot\frac{\gamma+j+k+1}{\gamma+j+k}.
\end{align*}
\end{proof}
\begin{cor} \label{cor:recursive}
The orthogonal polynomials given in Theorem \ref{thm:closed_form} can be written recursively as
\begin{equation*}
\phi_{j,k} = z_1^j z_2^k + \frac{\sqrt{2}}{2} \frac{1}{\gamma+j+k} \brkt{k\phi_{j,k-1} +j \phi_{j-1,k} }.
\end{equation*}
\end{cor}
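As an illustration of Theorem \ref{thm:closed_form} and Corollary \ref{cor:recursive}, take $(j,k)=(1,1)$. The closed formula \eqref{eq:closedcoeffs} gives
\[\phi_{1,1}(z_1,z_2)=z_1z_2+\frac{\sqrt{2}}{2(\gamma+2)}(z_1+z_2)+\frac{1}{(\gamma+1)(\gamma+2)},\]
with $\norm{\phi_{1,1}}^2_f=\frac{\gamma+3}{\gamma(\gamma+1)(\gamma+2)}$ by \eqref{eq:closednorms}; the same polynomial is recovered by applying the recursion to $\phi_{1,0}=z_1+\frac{\sqrt{2}}{2(\gamma+1)}$ and $\phi_{0,1}=z_2+\frac{\sqrt{2}}{2(\gamma+1)}$.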
\section{A family of optimal approximants}
Making use of the formula \eqref{OAvsOGformula}, we obtain information about optimal approximants to $1/(1-\frac{1}{\sqrt{2}}(z_1+z_2))$. We again set $\psi_{j,k}=\phi_{j,k}/\|\phi_{j,k}\|_{\gamma,f}$.
\begin{lemma}\label{lemma:OAcoeffs}
Let $\gamma>0$ be fixed. Then for $j,k\in \mathbb{N}$,
\[\langle 1, f\psi_{j,k}\rangle_{\gamma}\psi_{j,k}=\frac{\hat{\phi}_{j,k}(0,0)}{\|\phi_{j,k}\|_{\gamma,f}^2}\phi_{j,k}=\left(\frac{\sqrt{2}}{2}\right)^{j+k}\frac{(j+k)!}{j!k!}\frac{\gamma}{\gamma+j+k+1}\phi_{j,k}.\]
\end{lemma}
\begin{proof}
From the power series expression for the norm in $\mathcal{H}_{\gamma}$, we have $\langle 1, f\psi_{j,k}\rangle_{\gamma}=\overline{(f\psi_{j,k})}(0)=\overline{\psi}_{j,k}(0,0)=\overline{\hat{\psi}_{j,k}(0,0)}$, and by definition, $\hat{\psi}_{j,k}(0,0)=\hat{\phi}_{j,k}(0,0)/\|\phi_{j,k}\|_{\gamma,f}$ which is real by \eqref{eq:closedcoeffs}.
It remains to compute
\[\phi_{j,k}(0,0)=\left(\frac{\sqrt{2}}{2}\right)^{j+k}\frac{\gamma}{(\gamma)_{j+k+1}}(j+k)!\]
and, simplifying, we obtain
\[\frac{\phi_{j,k}(0,0)}{\|\phi_{j,k}\|_{\gamma,f}^2}=\left(\frac{\sqrt{2}}{2}\right)^{j+k}\frac{(j+k)!}{j!k!}\frac{\gamma}{\gamma+j+k+1}.\]
\end{proof}
Setting $\Phi_{j,k}=\sum_{m=0}^j\sum_{n=0}^k\hat{\Phi}_{j,k}(m,n)z_1^mz_2^n$
where
\begin{multline}
\hat{\Phi}_{j,k}(m,n)\\=\left(\frac{\sqrt{2}}{2}\right)^{2(j+k)-m-n}\gamma \frac{(j+k)!}{m!n!}\frac{(\gamma)_{m+n+1}}{(\gamma)_{j+k+2}}\frac{(j+k-m-n)!}{(j-m)!(k-n)!}
\end{multline}
a representation formula for optimal approximants follows from Lemma \ref{lemma:OAcoeffs}:
\begin{thm}
For $\gamma>0$ fixed, we have
\[p_n^*(z_1,z_2)=\sum_{(j,k)\preceq (n_1,n_2)}\Phi_{j,k}(z_1,z_2)\]
where $(n_1,n_2)$ is the bidegree of the polynomial $p_n^*$.
\end{thm}
Explicitly, then,
\[p_0^*=\Phi_{0,0}, \quad p_1^*=\Phi_{0,0}+\Phi_{1,0}, \quad p_2^*=\Phi_{0,0}+\Phi_{1,0}+\Phi_{0,1},\]
\[p_3^*=\Phi_{0,0}+\Phi_{1,0}+\Phi_{0,1}+\Phi_{2,0},\quad p_4^*=\Phi_{0,0}+\Phi_{1,0}+\Phi_{0,1}+\Phi_{2,0}+\Phi_{1,1},\]
and so on. Some $p_n^*$'s for $\gamma=1$ (the Drury-Arveson space) are written out in \cite[Section 6.1]{MSAS1}. The first few optimal approximants for the Hardy space $H^2(\mathbb{B}^2)$ ($\gamma=2$) are as follows:
\[p_0^*=\frac{2}{3} ,\quad p_1^*=\frac{3}{4}+\frac{1}{4}\sqrt{2}z_1,\quad p_2^*=\frac{5}{6}+\frac{\sqrt{2}}{4}(z_1+z_2),\quad p^*_3=\frac{17}{20}+\frac{3\sqrt{2}}{10}z_1+\frac{\sqrt{2}}{4}z_2+\frac{1}{5}z_1^2,\]
\[p_4^*=\frac{53}{60}+\frac{7\sqrt{2}}{20}z_1+\frac{3\sqrt{2}}{10}z_2+\frac{1}{5}z_1^2+\frac{2}{5}z_1z_2,\]
while the first optimal approximants in the Bergman space $A^2(\mathbb{B}^2)$ ($\gamma=3$) have the form
\[p_0^*=\frac{3}{4} ,\, p_1^*=\frac{33}{40}+\frac{3\sqrt{2}}{10}z_1,\, p_2^*=\frac{9}{10}+\frac{3\sqrt{2}}{10}(z_1+z_2),\, p^*_3=\frac{73}{80}+\frac{7\sqrt{2}}{20}z_1+\frac{3\sqrt{2}}{10}z_2+\frac{1}{4}z_1^2, \]
\[p_4^*=\frac{15}{16}+\frac{2\sqrt{2}}{5}z_1+\frac{7\sqrt{2}}{20}z_2+\frac{1}{4}z_1^2+\frac{1}{2}z_1z_2.\]
The symmetric form of $p_2^*$ above is explained in \cite[Section 6]{MSAS1}.
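As a quick consistency check, the formula for $\hat{\Phi}_{j,k}$ gives $\hat{\Phi}_{0,0}(0,0)=\gamma\,\frac{(\gamma)_{1}}{(\gamma)_{2}}=\frac{\gamma}{\gamma+1}$, so that $p_0^*=\frac{\gamma}{\gamma+1}$ for every $\gamma>0$; this recovers the constants $\frac{2}{3}$ and $\frac{3}{4}$ displayed above for the Hardy and Bergman spaces.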
\section{An application}
Our results can be applied to study the cyclicity properties of the function $f=1-\frac{1}{\sqrt{2}}(z_1+z_2)$. Recall that $f$ is said to be {\it cyclic} in $\mathcal{H}_{\gamma}$ if the closure of the invariant subspace $\mathrm{span}\{z_1^jz_2^kf\colon j,k \in \mathbb{N} \}$ equals $\mathcal{H}_{\gamma}$.
Define the {\it optimal distance} $\nu^2_{n}(f,\mathcal{H}_{\gamma})=\|p_n^*f-1\|_{\mathcal{H}_{\gamma}}^2$: then $f$ is cyclic if and only if $\nu_{n}(f,\mathcal{H}_{\gamma}) \to 0$ as $n\to \infty$.
Combining \cite[Corollary 5.3]{JLMS16} with our explicit formulas, we obtain the following.
\begin{corollary}
We have
\[\nu^2_{n}(f,\mathcal{H}_{\gamma})=1-\gamma^2\sum_{(j,k)\preceq (n_1,n_2)}2^{-(j+k)}\frac{(j+k)!}{(\gamma)_{j+k+2}}\binom{j+k}{j},\]
where $(n_1,n_2)$ is the bidegree of $p_n^*$.
\end{corollary}
The function $f$ was already known to be cyclic in all $\mathcal{H}_{\gamma}$, but the above gives a precise description of how quickly the finite-dimensional subspaces $f\cdot \mathcal{P}_n$ fill up $\mathcal{H}_{\gamma}$. (The trick used to prove \cite[Proposition 23]{MSAS1} combined with \cite[Proposition 3.10]{FMS14} applied to the weight sequence
$\omega(k)=k!/(\gamma)_k\asymp k^{\gamma-1}$ shows that the optimal distances have power law decay with exponent $-\gamma$.)
\section{Closing remarks}
As was highlighted in the course of the proof of Theorem \ref{thm:closed_form}, the cancellation in \eqref{cancellation} simplifies the structure of the orthogonal polynomials, giving rise to a relatively simple recursive relation that in turn allows us to write down an explicit formula for their coefficients; this phenomenon of course reflects the fact that the target function $f=1-\frac{1}{\sqrt{2}}(z_1+z_2)$ is well-adapted to the structure of $\mathcal{H}_{\gamma}$ (viz. also \cite[Proposition 23]{MSAS1}).
In \cite{MSAS1}, optimal approximants to $1/f$ for the similar function $f=1-\frac{1}{2}(z_1+z_2)$ were examined for the family of Dirichlet-type spaces $\mathfrak{D}_{\alpha}$ on the bidisk
$\mathbb{D}^2=\{(z_1,z_2)\in \mathbb{C}^2\colon |z_1|<1, |z_2|<1\}$, as were the corresponding orthogonal polynomials. While an analog of Lemma \ref{lem:winner_monoms} holds in that setting, cancellation fails for the orthogonal polynomials. Indeed, as is pointed out in \cite[Section 6]{MSAS1}, coefficients appearing in the orthogonal polynomials and optimal approximants in $\mathfrak{D}_{\alpha}$ in the bidisk exhibit sign changes and other complications, suggesting that obtaining a closed formula as well as precise estimates on optimal distances might be a harder problem than for the ball.
Returning to $\mathbb{B}^2$, we note that an analog of Lemma \ref{lem:winner_monoms} for the target function $g=\left(1-\frac{1}{\sqrt{2}}(z_1+z_2)\right)^2$, and indeed for other powers of $f$, is readily obtained. One can then proceed as we have done here in order to analyze orthogonal polynomials associated with the weight $g$. The computations quickly become more involved, but in principle one could attempt to obtain a recursive relation analogous to that in Corollary \ref{cor:recursive}, and then extract a closed formula for coefficients of orthogonal polynomials. As a sample, we invite the reader to verify that for $\gamma=1$ (the Drury-Arveson space), the orthogonal polynomials for the weight $g=f^2$ satisfy the relation
\begin{multline}
\phi_{j,k}=z_1^jz_2^k+\frac{\sqrt{2}}{j+k+2}(k\phi_{j,k-1}+j\phi_{j-1,k})\\-\frac{1}{(j+k+1)(j+k+2)}\left(\frac{k(k-1)}{2}\phi_{j,k-2}+jk\phi_{j-1,k-1}+\frac{j(j-1)}{2}\phi_{j-2,k}\right).
\end{multline}
package com.checkpoint.andela.note;
import android.content.Intent;
import android.graphics.drawable.Drawable;
import android.os.Build;
import android.support.design.widget.FloatingActionButton;
import android.view.MenuItem;
import android.view.View;
import com.checkpoint.andela.model.NoteModel;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.Robolectric;
import org.robolectric.RobolectricGradleTestRunner;
import org.robolectric.Shadows;
import org.robolectric.annotation.Config;
import org.robolectric.fakes.RoboMenuItem;
import org.robolectric.shadows.ShadowActivity;
import java.util.ArrayList;
import static junit.framework.Assert.assertEquals;
import static junit.framework.Assert.assertNotNull;
/**
* Created by andela on 05/02/2016.
*/
@Config(constants = BuildConfig.class, sdk = Build.VERSION_CODES.LOLLIPOP)
@RunWith(RobolectricGradleTestRunner.class)
public class ApplicationTest {
private Application application;
@Before
public void setUp() throws Exception {
application = Robolectric.setupActivity(Application.class);
}
@Test
public void testOnCreateView() throws Exception {
View view = application.findViewById(R.id.drawer_layout);
assertNotNull(view);
}
@Test
public void testPopulateNote() throws Exception {
ArrayList<NoteModel> testList = new ArrayList<>();
application.populateNote(testList, "n");
assertNotNull(testList);
}
@Test
public void testDrawableIcons() throws Exception {
Drawable checkmark = application.getResources().getDrawable(R.drawable.checkmarkw);
assertNotNull(checkmark);
Drawable trash = application.getResources().getDrawable(R.drawable.trash_blue);
assertNotNull(trash);
}
@Test
public void testOnCreateOptionsMenu() throws Exception {
MenuItem setting = new RoboMenuItem(R.menu.notes);
assertNotNull(setting);
MenuItem help = new RoboMenuItem(R.menu.notes);
assertNotNull(help);
}
@Test
public void testOnOptionsItemSelected() throws Exception {
Application activity = Robolectric.setupActivity(Application.class);
MenuItem menuItem = new RoboMenuItem(R.id.action_settings);
assertEquals(true, activity.onOptionsItemSelected(menuItem));
}
@Test
public void testOnNavigationItemSelected() throws Exception {
Application activity = Robolectric.setupActivity(Application.class);
MenuItem menuItem = new RoboMenuItem(R.id.nav_trash);
assertEquals(true, activity.onNavigationItemSelected(menuItem));
}
@Test
public void testLaunchTrashedActivity() throws Exception {
MenuItem trashedNote = new RoboMenuItem(R.id.nav_trash);
application.onNavigationItemSelected(trashedNote);
ShadowActivity thisActivity = Shadows.shadowOf(application);
Intent intent = thisActivity.peekNextStartedActivity();
assertEquals(TrashedNote.class.getCanonicalName(), intent.getComponent().getClassName());
}
@Test
public void testOnFloatActionButtonClick() throws Exception {
FloatingActionButton fab = (FloatingActionButton) application.findViewById(R.id.fab);
fab.performClick();
ShadowActivity thisActivity = Shadows.shadowOf(application);
Intent intent = thisActivity.peekNextStartedActivity();
assertEquals(NewNote.class.getCanonicalName(), intent.getComponent().getClassName());
}
}
Commissaire may refer to:
Commissary, a state official in the police or armed forces
Commissaire de police, in the French National Police
Commissaire des guerres, in the French Army
Commissaire (cycling), an official in competitive cycling
See also
Commissioner
A translation is a type of transformation. Other transformations include reflections, rotations, and dilations.
The result of a transformation is called the image. The original figure is called the pre-image.
Click on the TRANSLATION button on the left.
Click and drag the arrow head and observe what happens.
Click and drag the pre-image (the red triangle) and observe what happens.
Click and drag a vertex of the pre-image and observe what happens.
After you have explored several translations, answer the following questions in your math journal. Use the words pre-image and image in your responses.
When you dragged the arrow head, how did the image compare to the pre-image? What stayed the same? What changed?
When you dragged the pre-image, how did the image compare to the pre-image? What stayed the same? What changed?
How did dragging the pre-image compare to dragging the arrow head? How were these translations alike? How were they different?
When you dragged a vertex of the pre-image, how did the image compare to the pre-image? What stayed the same? What changed?
A translation is a transformation that changes the ________________ of a figure. A translation does not change the figure's _________ or ________________.
The result of a translation is called the ________. The ____________ figure is called the pre-image.
For any translation, the image and pre-image are ______________.
Use a capital letter for each vertex of the pre-image. For example, a triangle could have vertices at points A, B, and C.
Use corresponding capital letters to identify corresponding vertices of the image, adding a prime symbol ('). For example, the image of the triangle with vertices A, B, and C would have vertices A', B', and C'.
Watch this video to observe the translation of a right triangle.
Summarize the two methods used in the video for translating the triangle.
How did the x-coordinates of the image compare to the corresponding x-coordinates of the pre-image?
How did the y-coordinates of the image compare to the corresponding y-coordinates of the pre-image?
The translation in the video of 9 units to the right and 4 units down could be represented using the rule (x + 9, y – 4). Describe the translation represented by the rule (x – 2, y + 3).
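If you know a little Python, you can check translation rules like these with a short script (the triangle's coordinates below are just an example):

```python
def translate(vertices, dx, dy):
    """Apply the translation rule (x + dx, y + dy) to a list of (x, y) vertices."""
    return [(x + dx, y + dy) for (x, y) in vertices]

# Pre-image: a triangle with vertices A, B, and C.
pre_image = [(1, 2), (4, 2), (1, 6)]

# Image after the translation (x + 9, y - 4): vertices A', B', and C'.
image = translate(pre_image, 9, -4)
print(image)  # [(10, -2), (13, -2), (10, 2)]

# The rule (x - 2, y + 3) translates 2 units left and 3 units up.
print(translate(pre_image, -2, 3))  # [(-1, 5), (2, 5), (-1, 9)]
```

Notice that every vertex moves by the same amounts, so the image is congruent to the pre-image.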
Click on the ACTIVITIES button at the top of the screen.
Follow the directions for "Playing with Translations" that appear on the right side of the screen.
Check the box near the bottom left to turn the axes on.
Click on either the left or right arrow near the top right of the screen. Follow the directions for "Hitting a Target" that appear on the right side of the screen.
\section{Introduction}
Deep learning has led to major breakthroughs in many recognition tasks as well as natural language processing~\cite{Lecun2015} thanks to efficient algorithms, a gigantic amount of available data, and powerful computing resources. Today's consumer products that we use on a daily basis such as smartphones and digital assistants are equipped with applications powered by deep neural networks (DNNs). However, training a successful DNN is not trivial. Algorithms used in training a DNN may be patented or have restricted licenses. Collecting and labeling data is costly, and GPU-accelerated computing hardware is also expensive. For example, the ImageNet dataset~\cite{ILSVRC15} contains about 1.2 million images, and training an image classification DNN on such a dataset can take days or even weeks, even on GPU-accelerated machines. Therefore, production-level trained DNN models have great business value, and the need to protect models from copyright infringement is an urgent issue.
Moreover, conventional platforms for sharing models such as Model Zoo~\cite{jia2014caffe}, Azure AI Gallery~\cite{azureai}, and Tensor Hub~\cite{tensorhub} allow us to share DNN models for research and development purposes. A recent security assessment of on-device models from Android applications showed that many mobile applications fine-tuned pre-trained models from Tensor Hub~\cite{huang2021robustness}. Since models from such sharing platforms are widely used in real-world applications, it is necessary to properly credit DNN model owners to protect intellectual property (IP).
\RA{There are two aspects of IP protection for DNN models: access control and ownership verification.
The former focuses on protecting the functionality of DNN models from unauthorized access~\cite{chen2018protect, pyone2020training}, and the latter addresses ownership verification by taking inspiration from digital watermarking.
In this paper, we focus on ownership verification of DNN models.}
Researchers have proposed various model watermarking methods~\cite{2017-ICMR-Uchida, 2018-USENIX-Yossi,2018-ACCCS-Zhang,2018-Arxiv-Rouhani,2019-NIPS-Fan, 2019-MIPR-Sakazawa, 2020-NCA-Le}. However, most of the existing DNN watermarking methods are not robust against piracy attacks as described in~\cite{wang2019attacks,li2019piracy}.
Therefore, in this paper, we propose a DNN watermarking method that uses a block-wise image transformation with a secret key. The proposed method is inspired by adversarial defenses~\cite{2020-ICIP-Maung, maung2021block}, which in turn build on perceptual image encryption methods proposed for privacy-preserving machine learning~\cite{2018-ICCETW-Tanaka, 2019-ICIP-Warit, 2019-Access-Warit, kawamura2020privacy} and encryption-then-compression systems~\cite{2019-TIFS-Chuman, 2019-APSIPAT-Warit, 2017-IEICE-Kurihara, chuman2017security}. The underlying idea of the proposed method is to embed watermark patterns in models by training the models with both plain images and transformed ones. Ownership is verified by matching the prediction of plain images with that of transformed ones. In experiments, the performance of protected models is close to that of non-protected ones, and the proposed method is also demonstrated to be robust against fine-tuning and model pruning attacks.
\section{Related Work\label{sec:related-work}}
\subsection{DNN Model Watermarking}
Inspired by digital watermarking, researchers have proposed various methods for preventing the illegal distribution of DNN models. There are mainly two approaches in DNN model watermarking: white-box and black-box.
White-box approaches require access to model weights for embedding and detecting watermarks in a DNN model. These methods use an embedding regularizer, which is an additional regularization term in a loss function during training~\cite{2017-ICMR-Uchida,nagai2018digital,2018-Arxiv-Chen, 2018-Arxiv-Rouhani}. A recent study~\cite{wang2019attacks} showed that these regularizer-based methods can be attacked. Another paper~\cite{2019-NIPS-Fan} highlighted that if watermarks are independent of a model's performance, they are vulnerable to ambiguity attacks~\cite{1998-IEEEJSAC-Craver}, where two watermarks can be extracted from a protected model, causing confusion regarding ownership. Therefore, they introduced passports and passport layers~\cite{2019-NIPS-Fan}. However, a recent paper~\cite{li2019piracy} pointed out that this ownership verification can be broken by using reverse-engineered secret passport weights. Accordingly, these white-box approaches are not practical in real-world applications such as online services, because a suspected plagiarizing party will not grant access to its model weights.
In black-box approaches, watermarks are extracted by observing the input and output of a model. A study in~\cite{2020-NCA-Le} introduced a black-box method by using adversarial examples. Another study in~\cite{2018-USENIX-Yossi} implanted a backdoor in a model so that a watermark can be triggered through the backdoor. Generally, in black-box approaches, a special set of training examples is used so that watermarks are extracted from the inference of a model~\cite{2018-ACCCS-Zhang,2019-NIPS-Fan, 2019-MIPR-Sakazawa, 2020-NCA-Le}. Li et al.~\cite{li2019piracy} pointed out that backdoor attack-based methods can be defeated by existing backdoor defenses (e.g.,~\cite{wang2019neural}) and that most of the existing methods are not robust enough against piracy attacks, in which a verifiable watermark is injected into a model while maintaining the model's accuracy.
Accordingly, we propose a DNN watermarking method that uses learnable transformed images with a secret key, in which the original watermark cannot be removed by piracy attacks. Similar to our work, Li et al.\ proposed a method called ``null embedding,'' which embeds a pattern into a model's decision process during the model's initial training~\cite{li2019piracy}. However, the effectiveness of their method has not been confirmed yet on large networks such as residual networks~\cite{2016-CVPR-He}, which are widely used for image classification tasks. In addition, the techniques used for transforming images are different; the proposed method uses a block-wise learnable transformation, in contrast to a null embedding pattern in~\cite{li2019piracy}.
\subsection{Learnable Image Encryption}
Learnable image encryption perceptually encrypts images while maintaining a network's ability to learn the encrypted ones for classification tasks. Most early methods of learnable image encryption were originally proposed to visually protect images for privacy-preserving DNNs~\cite{2018-ICCETW-Tanaka,madono2020block,2019-Access-Warit,2019-ICIP-Warit,sirichotedumrong2020gan,ito2020image,ito2020framework}.
Recently, adversarial defenses in~\cite{2020-ICIP-Maung,maung2021block} also utilized learnable image encryption methods. Instead of protecting visual information, these works focus on controlling a model's decision boundary with a secret key so that adversarial attacks are not effective on such models trained by learnable transformed images.
Another use case of learnable image encryption is the model protection proposed in~\cite{pyone2020training}. The study in~\cite{pyone2020training} focused on protecting a model from a functional perspective rather than ownership verification. In other words, a distributed model without a secret key is not usable. In this paper, a block-wise image transformation is applied to DNN watermarking for the first time.
\section{Threat Model\label{sec:threat}}
We consider an application scenario with two parties: owner $O$ and attacker $A$, as shown in Fig.~\ref{fig:threat}. Owner $O$ trains model $f$ with the proposed watermarking. Attacker $A$ illegally obtains model $f$ and establishes new model $f'$ with or without some modification to $f$. Both parties offer the same service via an application programming interface (API). When the model is in dispute, owner $O$ provides his/her secret key $K$ to an inspector, and the ownership is verified by using secret key $K$ through inference. We aim to verify the ownership of models by using secret key $K$ under this scenario.
\begin{figure}[h]
\centering
\includegraphics[width=.8\linewidth]{threat}
\caption{Application scenario of proposed DNN watermarking\label{fig:threat}}
\end{figure}
There are two common ways of modifying models: pruning and fine-tuning. An attacker may use these methods to destroy watermarks in watermarked models.
\textbf{Fine-tuning:} Fine-tuning (transfer learning)~\cite{2015-ICLR-Simonyan} trains a model on top of pre-trained weights. Since fine-tuning alters the weights of a model, an attacker may use fine-tuning as an attack to overwrite a protected model with the intent of forging watermarks. We can consider such an attack scenario where the adversary has a subset of dataset $\mathcal{D}'$ and retrains the model with a forged key ($K'$).
\textbf{Pruning:} DNN models are often over-parameterized and contain millions of parameters. These giant models cannot be directly deployed on devices with limited resources such as smartphones, digital assistants, and embedded systems. Therefore, pruning techniques such as in~\cite{han2016eie,han2015deep,han2015learning,pavlo2017pruning} are used to compress the models by removing unimportant connections or neurons without losing accuracy. In this paper, parameter pruning is carried out by zeroing out weight values on the basis of the lowest L1-norm (i.e., to prune the weights that have the smallest absolute values) as in~\cite{2017-ICMR-Uchida}, and how it affects watermark detection is explored.
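As an illustration, L1-norm-based parameter pruning as described above can be sketched in NumPy as follows (a minimal sketch, not the implementation used in the experiments; ties at the threshold may zero out slightly more weights than requested):

```python
import numpy as np

def prune_by_l1(weights, prune_rate):
    """Zero out the fraction `prune_rate` of weights with the smallest
    absolute values (lowest L1-norm), keeping the array shape intact."""
    flat = np.abs(weights).ravel()
    k = int(prune_rate * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.5, -0.01], [0.2, -0.8]])
# Pruning 50% zeroes out the two smallest-magnitude weights
# (-0.01 and 0.2) while 0.5 and -0.8 survive.
print(prune_by_l1(w, 0.5))
```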
\section{Proposed DNN Watermarking\label{sec:proposed}}
\subsection{Overview}
An overview of image classification with the proposed method is depicted in Fig.~\ref{fig:overview}. In the proposed DNN watermarking, model $f$ is trained with both clean images and images transformed by using secret key $K$. Such trained models are effective in classifying both plain images and transformed ones. This property enables us to verify the ownership of models. In addition, the watermark in the proposed watermarking cannot be removed, and adding a new watermark will decrease the model's accuracy. Therefore, the proposed method is piracy-resistant.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{overview}
\caption{Overview of image classification with proposed DNN watermarking\label{fig:overview}}
\end{figure}
\subsection{Block-wise Transformation with Secret Key}
We use a block-wise negative/positive transformation with a secret key as in~\cite{maung2021block} to transform input images before training and validation of model ownership. The following are steps for transforming input images, where $c$, $w$, and $h$ denote the number of channels, width, and height of an image tensor $x \in {[0, 1]}^{c \times w \times h}$.
\begin{enumerate}
\item Divide $x$ into blocks with a size of $M$ such that \\$\{B_{(1,1)}, \ldots, B_{(\frac{w}{M}, \frac{h}{M})}\}$.
\item Transform each block tensor $B_{(i, j)}$ into a vector \\$b_{(i,j)} = [b_{(i,j)}(1), \ldots, b_{(i,j)}(c \times M \times M)]$.
\item Generate key $K$, which is a binary vector, i.e.,
\begin{equation}
K = [K_1, \dots, K_k, \dots, K_{(c\times M \times M)}], K_k \in \{0, 1\},
\end{equation}
where each $K_k$ is sampled independently with occurrence probability $P(K_k = 1) = 0.5$.
\item Multiply each pixel value in $b_{(i, j)}$ by $255$ so that pixel values are represented on an 8-bit scale of $[0, 255]$.
\item Apply negative/positive transformation to every vector $b_{(i, j)}$ with $K$ as
\begin{equation}
b'_{(i, j)}(k) = \left\{
\begin{array}{ll}
b_{(i, j)}(k) & (K_k = 0)\\
b_{(i, j)}(k) \oplus (2^L - 1) & (K_k = 1),
\end{array}
\right.
\end{equation}
where $\oplus$ is an exclusive or (XOR) operation, $L$ is the number of bits used in $b_{(i, j)}(k)$, and $L = 8$ is used in this paper.
\item Divide each pixel value in $b'_{(i, j)}$ by $255$ to be at $[0, 1]$ scale.
\item Integrate the transformed vectors to form an image tensor $\hat{x} \in {[0, 1]}^{c \times w \times h}$.
\end{enumerate}
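For reference, the transformation steps above can be sketched in NumPy as follows (a minimal illustration rather than the authors' implementation; it assumes $w$ and $h$ are divisible by $M$, and the function name is ours):

```python
import numpy as np

def blockwise_np_transform(x, key, block_size):
    """Block-wise negative/positive transformation with binary key `key`.

    x: image tensor of shape (c, h, w) with values in [0, 1].
    key: binary vector of length c * block_size * block_size.
    """
    c, h, w = x.shape
    M = block_size
    out = x.astype(np.float64)
    for i in range(0, h, M):
        for j in range(0, w, M):
            # Flatten the block into a vector b_(i,j) of length c*M*M.
            block = out[:, i:i + M, j:j + M].reshape(-1)
            # Scale to 8-bit integers, XOR the positions where K_k = 1
            # with 2^8 - 1 = 255, then scale back to [0, 1].
            pixels = np.round(block * 255).astype(np.uint8)
            pixels[key == 1] ^= 255
            out[:, i:i + M, j:j + M] = (pixels / 255.0).reshape(c, M, M)
    return out

rng = np.random.default_rng(42)           # the seed acts as the secret key
key = rng.integers(0, 2, size=3 * 4 * 4)  # binary key for c = 3, M = 4
img = rng.integers(0, 256, size=(3, 32, 32)) / 255.0
encrypted = blockwise_np_transform(img, key, 4)
# XOR with 255 is an involution, so transforming twice with the same
# key recovers the original image.
restored = blockwise_np_transform(encrypted, key, 4)
```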
An example of images transformed by negative/positive transformation with different block sizes is shown in Fig.~\ref{fig:images}.
\begin{figure*}[h]
\centering
\subfloat[Original]{\includegraphics[width=0.16\linewidth]{dog}%
\label{fig:dog}}
\hfil
\subfloat[$M = 2$]{\includegraphics[width=0.16\linewidth]{np_2}%
\label{fig:np_2}}
\hfil
\subfloat[$M = 4$]{\includegraphics[width=0.16\linewidth]{np_4}%
\label{fig:np_4}}
\hfil
\subfloat[$M = 8$]{\includegraphics[width=0.16\linewidth]{np_8}%
\label{fig:np_8}}
\hfil
\subfloat[$M = 16$]{\includegraphics[width=0.16\linewidth]{np_16}%
\label{fig:np_16}}
\hfil
\subfloat[$M = 32$]{\includegraphics[width=0.16\linewidth]{np_32}%
\label{fig:np_32}}
\caption{Example of block-wise transformed images\label{fig:images}}
\end{figure*}
\subsection{Watermark Embedding}
A pattern caused by the transformation with key $K$ serves as a watermark in the proposed method. To embed the watermark in a DNN model, the model is trained by using transformed images. Let $X = \{x^1, \ldots, x^N\}$ be a set of training images and $Y = \{y^1,\ldots,y^N\}$ be a set of their respective truth labels in a one-hot vector. Algorithm~\ref{algo:embed} shows the watermark embedding process during training. Every image in $X$ is transformed with key $K$ to obtain a set of transformed images $\hat{X} = \{\hat{x}^1,\ldots,\hat{x}^N\}$. Model $f$ is trained by using both $X$ and $\hat{X}$.
\begin{algorithm}
\caption{Watermark Embedding\label{algo:embed}}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE{$\{X, Y\}, K$}
\ENSURE{$f$}
\STATE{$\hat{X} \leftarrow$ \textsc{Transform} ($X, K$)}
\STATE{$f \leftarrow$ \textsc{Train} ($X, Y$)}
\STATE{$f \leftarrow$ \textsc{Train} ($\hat{X}, Y$)}
\end{algorithmic}
\end{algorithm}
\subsection{Watermark Detection}
To detect embedded watermarks, a statistical watermark-extraction method is used in the model inference. Let $X_{\text{test}} = \{x_{\text{test}}^1,\ldots,x_{\text{test}}^k,\ldots,x_{\text{test}}^s\}$ be a set of test images. Every image in $X_{\text{test}}$ is transformed with key $K$ to obtain $\hat{X}_{\text{test}} = \{\hat{x}_{\text{test}}^1,\ldots,\hat{x}_{\text{test}}^k,\ldots,\hat{x}_{\text{test}}^s\}$. Notably, $X_{\text{test}}$ is not a special pre-defined trigger set unlike conventional methods, so it can be a set of any test images within a classifier's distribution. In a typical image-classification scenario, $f$ takes a test image ($x_{\text{test}}^k$) and outputs a vector of unnormalized log probabilities (i.e., logits) as $f(x_{\text{test}}^k)$. In this paper, in accordance with this scenario, the class label of $x_{\text{test}}^k$ is estimated as the one with the largest predicted probability, i.e., $y_{\text{test}}^k = \text{argmax}(f(x_{\text{test}}^k))$.
Let $Y_{\text{label}} = \{y_{\text{test}}^1,\ldots,y_{\text{test}}^s\}$ be a set of predicted labels for $X_{\text{test}}$ and $\hat{Y}_{\text{label}} = \{\hat{y}_{\text{test}}^1,\ldots,\hat{y}_{\text{test}}^s\}$ be a set of predicted labels for $\hat{X}_{\text{test}}$. To evaluate the matching rate between $Y_{\text{label}}$ and $\hat{Y}_{\text{label}}$, the watermark detection accuracy $\tau$ is defined by
\begin{equation}
\tau = \frac{1}{s}\sum_{k=1}^{s} \mathbbm{1} (y_{\text{test}}^k = \hat{y}_{\text{test}}^k), \label{eq:tau}
\end{equation}
where $s$ is the number of test images, and $\mathbbm{1}(\text{condition})$ takes a value of one if the condition is satisfied and zero otherwise.
To verify the ownership of a model, an inspector needs to set a threshold $th$. By using $th$, the watermark detection process is carried out as in Algorithm~\ref{algo:detect}. If $\tau$ is greater than $th$, the ownership verification is successful, and model $f$ is judged to be owner O's model.
\begin{algorithm}
\caption{Watermark Detection\label{algo:detect}}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE{$f, X_{\text{test}}, K, th$}
\ENSURE{Successful or Unsuccessful}
\STATE{$\hat{X}_{\text{test}} \leftarrow$ \textsc{Transform} ($X_{\text{test}}, K$)}
\STATE{$\tau \leftarrow$ \textsc{Calculate\_Tau} ($f, X_{\text{test}}, \hat{X}_{\text{test}}$)}
\COMMENT{Equation~\ref{eq:tau}}
\IF{$\tau > th$}
\STATE{Successful}
\ELSE
\STATE{Unsuccessful}
\ENDIF
\end{algorithmic}
\end{algorithm}
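As a reference, the watermark detection accuracy $\tau$ and the threshold test of Algorithm~\ref{algo:detect} can be sketched in plain Python as follows, assuming the predicted labels for $X_{\text{test}}$ and $\hat{X}_{\text{test}}$ have already been computed (all names are illustrative):

```python
def watermark_detection_accuracy(labels_plain, labels_transformed):
    """Compute tau: the fraction of test images whose predicted label
    matches between the plain and the key-transformed versions."""
    assert len(labels_plain) == len(labels_transformed)
    matches = sum(1 for y, y_hat in zip(labels_plain, labels_transformed)
                  if y == y_hat)
    return matches / len(labels_plain)

def verify_ownership(labels_plain, labels_transformed, threshold):
    """Ownership verification: successful if tau exceeds the threshold."""
    tau = watermark_detection_accuracy(labels_plain, labels_transformed)
    return tau > threshold

# Toy example: 8 of 10 predictions agree, so tau = 0.8.
y = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
y_hat = [3, 1, 4, 1, 5, 9, 2, 6, 0, 0]
print(watermark_detection_accuracy(y, y_hat))  # 0.8
print(verify_ownership(y, y_hat, threshold=0.5))  # True
```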
\subsection{Properties of Proposed Method}
The proposed DNN watermarking method holds the following important properties:
\begin{itemize}
\item \textbf{Piracy-Resistance:} Original watermarks in a model cannot be removed, and adding new watermarks will decrease the model's accuracy.
\item \textbf{Low Computation Cost:} The block-wise operation can be efficiently implemented by using vectorized operations, and thus, \RA{pre-processing images with block-wise transformation in the proposed watermarking does not cause any noticeable overheads during training/inference.}
\item \textbf{Watermark Detection without a Trigger Set:} The proposed method uses a secret key to verify ownership. Therefore, a special trigger set with pre-defined labels for detecting a watermark is not required in the proposed method.
\end{itemize}
\section{Experiments\label{sec:experiments}}
\subsection{Setup\label{sec:setup}}
We conducted image classification experiments on the CIFAR-10 dataset~\cite{2009-Report-Krizhevsky} with a batch size of 128 and live augmentation (random cropping with padding of 4 and random horizontal flip) on a training set. CIFAR-10 consists of 60,000 color images (dimension of $32 \times 32 \times 3$) with 10 classes (6,000 images for each class), where 50,000 images are for training and 10,000 for testing. We used deep residual networks~\cite{2016-CVPR-He} with 18 layers (ResNet18) and trained models for $200$ epochs with cyclic learning rates~\cite{2017-Arxiv-Smith} and mixed-precision training~\cite{2017-Arxiv-Micikevicius}. The parameters of the stochastic gradient descent (SGD) optimizer were a momentum of $0.9$, a weight decay of $0.0005$, and a maximum learning rate of $0.2$.
\subsection{Classification Performance and Watermark Detection}
We trained models by using the proposed method under five block sizes (i.e., $M \in \{2, 4, 8, 16, 32\}$). We evaluated the models in terms of classification accuracy (ACC) under three conditions: using plain images (plain), using transformed images with correct key $K$, and using transformed images with incorrect key $K'$. We also calculated the watermark detection accuracy (WDA) $\tau$ for correct key $K$ and WDA $\tau'$ for incorrect key $K'$.
\RA{Correct key $K$ was generated by using a random number generator from the PyTorch platform with a seed value of 42 (64-bit integer), and incorrect key $K'$ was also generated by using the same random number generator with a seed value of 123 (64-bit integer).}
Table~\ref{tab:results} summarizes the results obtained under the above conditions. The models with a small block size such as $M = 2$ and $4$ performed better in detecting watermarks than those with $M = 8$, $16$, and $32$. The baseline model, which was a standard model trained by using plain images, was confirmed to have a low WDA because the model did not have a watermark. Since models with $M = 2$ and $4$ maintained a high classification accuracy when correct key $K$ was used, while the accuracy severely dropped when incorrect key $K'$ was given, we focus on models with $M = 2$ and $4$ for further evaluation against attacks.
\robustify\bfseries
\sisetup{table-parse-only,detect-weight=true,detect-inline-weight=text,round-mode=places,round-precision=2}
\begin{table}
\caption{Classification Accuracy (\SI{}{\percent}) and Watermark Detection Accuracy (\SI{}{\percent}) of Protected Models and Baseline Model. Values were averaged over testing whole test set (10,000 images).\label{tab:results}}
\centering
\begin{tabular}{l|S|SS|SS}
\toprule
& {ACC} & {ACC} & {WDA} & {ACC} & {WDA}\\
{Model} & {(plain)} & {($K$)} & {($\tau$)} & {($K'$)} & {($\tau'$)}\\
\midrule
{$M = 2$} & 92.74 & 93.43 & 95.87 & 10.53 & 10.26\\
{$M = 4$} & 92.99 & 92.24 & 94.20 & 15.55 & 15.75\\
\midrule
{$M = 8$} & 93.52 & 87.25 & 89.18 & 73.40 & 75.00\\
{$M = 16$} & 93.71 & 89.26 & 90.50 & 82.21 & 83.87\\
{$M = 32$} & 93.88 & 89.00 & 91.08 & 85.51 & 87.78\\
\midrule
{Baseline} & 95.45 & 11.34 & 11.43 & 12.02 & 12.12\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Robustness Against Fine-tuning Attacks}
As described in the threat model (see Section~\ref{sec:threat}), we assumed an attacker obtains a small subset of training dataset $\mathcal{D}'$ ($\left| \mathcal{D}' \right| \in \{100, 500, 5000\}$). We fine-tuned the models with $M = 2$ and $4$ by using $\mathcal{D}'$ and new key $K'$ to embed a new watermark for 30 epochs with the same training settings as in Section~\ref{sec:setup} \RA{and Algorithm~\ref{algo:embed}.}
Table~\ref{tab:fine-tune} shows the results of fine-tuning attacks: model accuracies before and after fine-tuning, WDA $\tau$ for correct key $K$, and WDA $\tau'$ for new key $K'$. In any of the cases, fine-tuning attacks impaired the model accuracy, and WDA $\tau$ was greater than WDA $\tau'$. Therefore, the proposed method was confirmed to have resistance against piracy attacks.
\robustify\bfseries
\sisetup{table-parse-only,detect-weight=true,detect-inline-weight=text,round-mode=places,round-precision=2}
\begin{table*}
\centering
\caption{Classification Accuracy (\SI{}{\percent}) and Watermark Detection Accuracy (\SI{}{\percent}) of Protected Models Under Fine-Tuning Attacks. Values were averaged over testing whole test set (10,000 images).\label{tab:fine-tune}}
\begin{tabular}{l|SS|SSS|SSS|SSS}
\toprule
& & & \multicolumn{3}{c|}{\circled{1} $\left| \mathcal{D}' \right| = 100$} & \multicolumn{3}{c|}{\circled{2} $\left| \mathcal{D}' \right| = 500$} & \multicolumn{3}{c}{\circled{3} $\left| \mathcal{D}' \right| = 5000$}\\
& {ACC} & {WDA} & {Fine-tuned} & {WDA} & {WDA} & {Fine-tuned} & {WDA} & {WDA} & {Fine-tuned} & {WDA} & {WDA}\\
{Model} & {(plain)} & {($\tau$)} & {ACC} & {($\tau$)} & {($\tau'$)} & {ACC} & {($\tau$)} & {($\tau'$)} & {ACC} & {($\tau$)} & {($\tau'$)}\\
\midrule
{$M = 2$} & 92.74 & 95.87 & 89.44 & 93.64 & 13.26 & 83.59 & 88.84 & 31.93 & 86.37 & 87.11 & 84.26\\
{$M = 4$} & 92.99 & 94.20 & 91.79 & 93.46 & 16.50 & 87.50 & 89.90 & 23.14 & 82.62 & 71.24 & 69.15\\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Robustness Against Pruning Attacks}
We observed the classification accuracy and watermark detection accuracy $\tau$ under different pruning rates. Figure~\ref{fig:prune_acc} plots the classification accuracy against the pruning rate, and Fig.~\ref{fig:prune_tau} plots $\tau$. The proposed method was robust up to a pruning rate of \SI{60}{\percent}; beyond that, both the accuracy and $\tau$ dropped.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{prune_acc}
\caption{Classification accuracy under pruning attacks\label{fig:prune_acc}}
\end{figure}
\vspace{-2mm}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{prune_tau}
\caption{Watermark detection accuracy under pruning attacks\label{fig:prune_tau}}
\end{figure}
\vspace{-2mm}
\subsection{High-level Comparison with State-of-the-art Methods}
Table~\ref{tab:comparison} provides a high-level overview of state-of-the-art DNN watermarking methods in black-box settings. Embedding and verification methods vary from method to method. Most of the existing methods~\cite{2018-USENIX-Yossi,2020-NCA-Le,2018-ACCCS-Zhang,2019-NIPS-Fan} are not robust to piracy attacks as described in~\cite{li2019piracy}. In contrast, the watermark patterns used in the proposed method and Li et al.'s method~\cite{li2019piracy} are directly dependent on a model's accuracy. Therefore, piracy attacks will deteriorate a model's performance, and the original watermark detection will still be stronger than the pirated one. Note that the work in~\cite{li2019piracy} was evaluated only on a small convolutional network, whereas the proposed method was tested on a residual network with 18 layers (ResNet18). Therefore, the effectiveness of the proposed method was confirmed in a more practical setting.
\robustify\bfseries
\sisetup{table-parse-only,detect-weight=true,detect-inline-weight=text,round-mode=places,round-precision=2}
\begin{table*}
\centering
\caption{High-Level Comparison With State-of-the-Art Black-Box DNN Watermarking Methods\label{tab:comparison}}
\begin{tabular}{lccc}
\toprule
{Model} & {Embedding Method} & {Verification Method} & {Piracy Resistance}\\
\midrule
{Adi et al.~\cite{2018-USENIX-Yossi}} & {Backdoor} & {Trigger Set} & {No}\\
{Merrer et al.~\cite{2020-NCA-Le}} & {Adversarial Examples} & {Trigger Set} & {No}\\
{Zhang et al.~\cite{2018-ACCCS-Zhang}} & {Watermarked Examples} & {Trigger Set} & {No}\\
{Fan et al.~\cite{2019-NIPS-Fan}} & {Passport Layers + Trigger Set} & {Passports + Trigger Set} & {No}\\
{Li et al.~\cite{li2019piracy}}$^{\dagger}$ & {Null Embedding + Trigger Set} & {Watermark Accuracy + Trigger Set} & {Yes}\\
{Ours}$^{\ddagger}$ & {Learnable Image Transformation} & {Watermark Detection Accuracy} & {Yes}\\
\bottomrule
\multicolumn{4}{l}{$^{\dagger}$ Evaluated on a small convolutional neural network. $^{\ddagger}$ Evaluated on ResNet18.}\\
\end{tabular}
\end{table*}
\section{Conclusion\label{sec:conclusion}}
We proposed a novel model watermarking method that utilizes a learnable image transformation with a secret key for the first time. The proposed method trains a model by using both plain images and transformed ones and allows us to remotely verify the ownership of models. The results of experiments showed that the proposed method maintained a high classification accuracy, and watermarks in the proposed method could not be overwritten by piracy attacks. In addition, the proposed method was also robust against pruning attacks when parameters were pruned up to $\SI{60}{\percent}$.
\bibliographystyle{ACM-Reference-Format}
A widow (Czech: vdova) is a woman whose husband has died; a widower (vdovec) is a man whose wife has died. Widowed women generally outnumber widowers, since women on average live longer and also tend to be younger than their husbands at marriage. The term widower is apparently more recent, and its social position was never as problematic. Today, widowhood is recorded as part of one's marital status.
In older societies, where the husband was usually the family's breadwinner, the status of a widow had more serious social consequences.
History
Etymology
The Czech word "vdova" has parallels in all Slavic as well as non-Slavic languages. It is derived from the Proto-Slavic vydova, which in turn derives from the Old Indic vidhevá. A connection with the Indic vidhú ("one who has lost something") is possible. Similar forms can be seen in other Indo-European languages, e.g. South Slavic udovica, Italian vedova, English widow, German Witwe, etc.
Social significance
The position of a widow and her children (orphans) was difficult in older agricultural societies. The loss of a husband usually meant the loss of livelihood and often of property as well. An extreme example is the Indian custom of sati, in which a widow was burned together with her husband at his funeral. It is likely that this custom was practiced elsewhere in ancient times as well; in India it was banned only in the 19th century. The custom of mourning was observed in the Czech lands until the beginning of the 20th century and involved, for example, black clothing and various other restrictions, as described in Božena Němcová's "Babička" ("The Grandmother").
By contrast, Sumerian and Old Babylonian rulers already boasted on their monuments that under their rule "the rich did no wrong to the orphan, the powerful did no harm to the widow" (statue of Gudea). The duty to protect "the homeless, widows, and orphans" is frequently invoked in the Bible, for example "Cursed be anyone who perverts the justice due to the sojourner, the fatherless, and the widow" (Dt 27:19) or "Honor widows who are truly widows" (1 Tim 5:3).
Young widows of warriors who fell in the early Middle Ages posed a different social problem. They were often young and well connected, and so they frequently caused rulers concern. In the 10th and 11th centuries, "widows'" convents therefore arose; this was apparently also one of the functions of St. George's Convent at Prague Castle. The position of widows improved in towns, where they could take over their husband's trade and sometimes even become members of guilds. In the late Middle Ages and the early modern period, noblewomen had the right to a dower ("obvěnění"), the widow's share of her deceased husband's estate. More recently (see registered partnership), surviving registered partners have been given equal standing with widows and widowers.
Demographics
In December 2010, 579,000 women with an average age of 57 drew a widow's pension in the Czech Republic, as did 95,000 men (14% of the widowed) with an average age of 53. In 1990, France had 3.26 million widows against 633,000 widowers (16.3% of the widowed), roughly a 5:1 ratio. The ratio was even higher after wars, for example in the USSR and in Germany after 1945.
Culture
The plot of B. Smetana's 1874 comic opera "Dvě vdovy" ("The Two Widows") captures the custom of mourning and the ways of ending it. F. Lehár's 1905 operetta "Veselá vdova" ("The Merry Widow") presents a rich widow as a "good match."
Metaphors
In typography, a "widow" is the last line of a paragraph that has spilled over onto the following page, while an "orphan" is the first line of a paragraph that is also the last line on a page.
A "grass widower" ("slaměný vdovec," literally "straw widower") is a married man who is temporarily at home without his wife.
The "black widow" is a venomous spider (Czech: snovačka jedovatá); the popular name comes from the mistaken belief that the female devours the male after mating.
Links
References
Related articles
Family
Queen dowager
Green widow
Black widow
Orphan
Widowmaker
External links
Marriage
Family law
Mollywood buzz: Lucifer movie launch, Puthan Panam teaser, Godha, 1971 Beyond Borders songs, celebrity weddings make headlines
Prithviraj Sukumaran's debut directorial venture Lucifer starring Mohanlal will go on floors in May 2018.
April 3, 2017 16:26 IST
Do not miss the Mollywood buzz on Puthan Panam, Lucifer, Mohanlal, 1971 Beyond Borders and Godha. (Image credit: Facebook)
From the release of two songs from Mohanlal's 1971 Beyond Borders to Mammootty releasing the teaser of Puthan Panam to the official launch of Prithviraj Sukumaran's debut directorial venture Lucifer, the week gone by was packed with excitement for Mollywood.
Here's the wrap:
Mammootty's Puthan Panam teaser
Mammootty is basking in the success of his recent release The Great Father, and the teaser of his next, Puthan Panam, directed by Ranjith, hit the cyber space on Sunday, April 2. The 32-second video opened to a stupendous response. "And now here is the teaser for Ranjiettan's #PuthanPanam ! Another awesome look ! Vappichi and Renjiettan is one of my favourite combinations of all time. Waiting for their magic all over again yayyyy! [sic]," Dulquer posted while sharing the teaser on his Facebook page. The video has gone viral with over 4.3 lakh views within a day of its release online.
Official launch of Lucifer
In 2016, actor Prithviraj Sukumaran had announced his plans to direct superstar Mohanlal. Months after the big news made the headlines, the team of Lucifer held a press conference on Sunday evening attended by Mohanlal, Prithviraj, scriptwriter Murali Gopy and producer Antony Perumbavoor, among others. The team said Lucifer will go on floors in May 2018.
Mohanlal, Prithviraj Sukumaran, Murali Gopy and Antony Perumbavoor during the press meeting of the Lucifer movie (Facebook)
1971 Beyond Borders songs
Two songs of Mohanlal's upcoming movie 1971 Beyond Borders have been released online. The family song Oruvakkinaal, sung by MG Sreekumar and Swetha Mohan, shows Mohanlal's character spending a vacation at home. Penned by Nikhil S Mathattil and composed by Rahul Subrahmanian, the video has been trending at the top position on YouTube at the time of reporting.
The makers have also released a Tamil song from 1971 Beyond Borders that begins with the lines Pesipokuthu. The romantic song features Allu Sirish and Srushti Dange, and has been rendered by Vipin Lal, NK Priyanka and Meenakshi Ilayaraja. While Siddarth Vipin has composed the music, Mohan Rajan has written the lyrics of the song from the directorial venture of Major Ravi.
Godha song released
Watch the first song from the movie Godha (Godha - Official/Facebook)
The first song from the upcoming movie Godha also hit cyberspace on Sunday. Gowry Lekshmi has lent her voice to the melody, which begins with the lines Aaro Nenjil. The song, featuring Tovino Thomas and Wamiqa Gabbi, has met with a good response and has crossed 2.5 lakh views within 24 hours of its release. While Manu Manjith has penned the lyrics, Shaan Rahman has composed the music.
Vineeth Sreenivasan
I fell in love with this song the moment I heard it from Shaan's studio.. Godha is going to be an amazing experience for the audience and this song is the perfect curtain raiser for what's in store!! Pls put ur headphones on for this song, it's worth it!!
Celebrity weddings
While Maqbool Salmaan entered wedlock with Almaz on Saturday, actress Gauthami Nair got married to director Srinath Rajendran in a private function on Sunday in Alappuzha. Meanwhile, actor Dhyan Sreenivasan got engaged to Thiruvananthapuram-based Arpita Sebastian in a grand ceremony on Sunday.
Mammootty's The Great Father beats Mohanlal's Pulimurugan
Check Dhyan Sreenivasan, Arpita Sebastian's engagement photos
Check Maqbool Salmaan's wedding photos
Daddy helped Bekah with drinking out of a cup.
We got Bekah a new swing. She just loves swinging by herself. Bekah giggles and giggles when we push her.
Very fun pictures - I like the ones of her in the swing!!!! She looks like she is having a great time!
Those swing pictures are awesome!!!
Babies in swings are awesome! Most of Naomi's time at the park is spent in the swing. Your daughter is adorable!
Is there something on my nose?
The Dawes Plan (named after Charles Gates Dawes, laureate of the 1925 Nobel Peace Prize) was a historic economic plan that spread Germany's World War I reparations over many years and granted Germany loans of 200 million dollars toward their repayment. It was drawn up to stabilize the postwar German economy by a team of experts led by the American banker Charles Dawes, adopted on 16 August 1924 during the so-called London Conference, and enacted as binding by the Reichstag on 30 August 1924.
Course
When, after severe hyperinflation in Germany, relative currency stabilization was finally achieved in September 1923, the international community decided to make a joint attempt to settle the question of postwar reparations. The coming to power of the Labour Party in Great Britain and of the Cartel des Gauches in France further favoured a compromise. An international committee of experts, convened on this question at the initiative of the USA, fixed the installments Germany was to pay over the following five years, rising gradually from 200 to 600 million dollars a year. Germany's total obligations would thus have been repaid within some forty-odd years. German finances were to be placed under supervision, and the reparation payments were secured, among other things, by railway revenues; to this end the railways were converted into a joint-stock company (Deutsche Reichsbahn-Gesellschaft, DRG). In addition, the Reichsbank was made independent of the German government. Germany gained access to credit and received a loan of 200 million USD for currency stabilization. Having withdrawn from the Ruhr, France and Belgium evacuated about one third of the zones they had hitherto occupied in the Rhineland.
Consequences
Agreement to adopt the Dawes Plan was reached at the London conference held from 16 July to 30 August 1924 with the participation of, among others, Great Britain and France. The Dawes Plan came into force on 30 August 1924.
The political situation of postwar Germany was finally normalized as early as 1925, as confirmed by the Treaty of Locarno. Stabilization, however, was followed by a short post-inflation crisis in the country, which among other things brought the bankruptcy of the largest speculative venture of the hyperinflation era, the concern of Hugo Stinnes.
As a long-term strategy of economic policy towards Germany, the Dawes Plan contributed to a temporary easing of political and economic tension in Europe. In the course of its implementation, however, Germany borrowed more money (20 billion marks) than it repaid (10.3 billion marks). This is one of the reasons why, in 1929, the Dawes Plan was replaced by a further reparations plan, the Young Plan.
On 30 January 1937 Adolf Hitler declared in the Reichstag that he was stripping the German railways and the Reichsbank of their existing status, restoring their full subordination to the German Reich. In effect this amounted to a repudiation of the London agreement of 16 August 1924.
Footnotes
Bibliography
Wojciech Morawski: Zarys powszechnej historii pieniądza i bankowości [An Outline of the General History of Money and Banking], Trio, Warsaw 2002.
External links
Final Protocol, Agreement on "the Experts" Plan and Protocol concerning the Contributions to be made from the German Budget and the Institution of Control over certain Revenue and Taxes, signed at London, August 9 and 16, 1924 (English text)
Polish translation: Przegląd polityczny, supplement to issues 11 and 12 of volume I, 1924, pp. 97–125
The problem of German reparations after World War I
1924 in Germany
Economic history of Germany
Dawes
Weimar Republic
The Labrador duck (Camptorhynchus labradorius) was a North American bird; it has the distinction of being the first endemic North American bird species to go extinct after the Columbian Exchange, with the last known sighting occurring in 1878 in Elmira, New York. It was already a rare duck before European settlers arrived, and as a consequence of its rarity, information on the Labrador duck is not abundant, although some things, such as its habitat, characteristics, feeding habits and the reasons for its extinction, are known. There are 55 specimens of the Labrador duck preserved in museum collections around the world.
Description
The female's plumage was grey. Although only weakly patterned, the pattern resembled that of the scoters (genus Melanitta). The male's plumage was black and white, patterned like the eiders (genus Somateria), but the wings were entirely white except for the primaries. The male's trachea was similar to that of the scoters: the tracheal tube expanded at its anterior end, and two enlargements (rather than the single enlargement seen in Melanitta) lay near the middle of the tube. The bulla was bony and rounded, protruding on the left side. This asymmetrical, bony bulla differed from that of the scoters and resembled the bullae of the eiders and of the harlequin duck. The Labrador duck has been considered the most enigmatic of all North American birds.
The Labrador duck had an elongated head with small, round eyes. Its bill was almost as long as its head. The body was low and depressed, with short, strong feet set far back on the body. The feathers were small and the tail was short and rounded. The Labrador duck belongs to a monotypic genus.
Habitat
The Labrador duck migrated annually, wintering along the coasts of New Jersey and New England in the eastern United States, where it favoured sandy southern coasts, sheltered bays, harbours and inlets, and breeding in Labrador and northern Quebec in the summer. The son of John James Audubon reported seeing a nest belonging to the species in Labrador. Some believe it may have laid its eggs on the islands of the Gulf of St. Lawrence. The breeding biology of the Labrador duck is largely unknown.
Diet
The Labrador duck fed on small molluscs, and some fishermen reported catching it on fishing lines baited with mussels. The structure of its bill was greatly modified from that of most ducks, with a wide, flattened tip bearing numerous lamellae inside. In this respect it is considered an ecological counterpart of Steller's eider of the North Pacific and northern Asia. The bill was also particularly soft and may have been used to probe sediment for food.
Another, unrelated duck with a similar (though even more specialized) bill morphology is the Australian pink-eared duck, which feeds largely on plankton but also on molluscs; in external appearance, however, the Labrador duck probably more closely resembled the blue duck.
Its peculiar bill suggests that it ate shellfish and crustaceans from silt and shallow water. The Labrador duck may also have survived by eating snails.
Extinction
The Labrador duck is thought to have always been rare, but between 1850 and 1870 its populations declined further. Its extinction (after 1878) has still not been fully explained. Although it was hunted for food, the duck was considered to taste bad, spoiled quickly, and fetched a low price, so hunters did not pursue it much. Its eggs, however, may have been over-harvested, and the bird may also have suffered depredations from the feather trade in its breeding area. Another possible factor in its extinction was the decline of the mussels and other shellfish on which it is believed to have fed in its wintering grounds, caused by population growth and industry along the East Coast of the United States. Although all sea ducks readily feed on shallow-water molluscs, no other bird species of the western Atlantic seems to have been as dependent on such food as the Labrador duck.
Another theory holds that an enormous increase in human influence on the coastal ecosystems of North America drove the birds from their niches to seek other habitat. These ducks were the only birds restricted to the American coast of the North Atlantic, so changing niche was a difficult task. The Labrador duck became extinct at the end of the 19th century.
Taxonomy
The Labrador duck is considered a sea duck. A basic difference in the shape of the metacarpal process divides the sea ducks into two groups:
Bucephala and Mergus
Somateria, Melanitta, Histrionicus, Clangula and Camptorhynchus
The position of the nutrient foramen of the tarsometatarsus also separates the two groups of sea ducks. In the first group, the foramen is lateral to the long axis of the lateral groove of the hypotarsus; in the second, the foramen lies on or medial to the axis of that groove.
The Labrador duck was also known as the pied duck and the skunk duck, the former being a vernacular name it shared with the surf scoter and the common goldeneye (and even the American oystercatcher), which has caused difficulties in interpreting old records of these species. Both names refer to the male's striking black-and-white coloration. Another common name was sand shoal duck, referring to its habit of feeding in shallow waters. The closest evolutionary relatives of the Labrador duck are apparently the scoters of the genus Melanitta.
A mitogenomic study on the placement of the Labrador duck found the species to be closely related to Steller's eider.
References
Further reading
Cokinos, Christopher (2000): Hope is the Thing with Feathers. New York: Putnam, pp. 281–304.
External links
BirdLife species factsheet
The Labrador duck in John James Audubon's Birds of America
Environment Canada
Swans, geese and ducks of Canada
Marine extinctions database, University of East Anglia, UK
Anatidae
Mergini
Recently extinct birds
\section{Introduction}
Ensemble learning is one of the most challenging recent approaches in statistical learning. Bagging (\cite{Brei-bag}), Boosting (\cite{Freund}), Stacking (\cite{Brei-stack}), and Random forests (\cite{RF}) have been shown to be among the best off-the-shelf classifiers, achieving very high performance when tested over tens of various datasets selected from machine learning benchmarks. All these algorithms were designed for supervised learning, sometimes initially restricted to regression or binary classification. Several extensions are still under study: multivariate regression, multiclass learning, and adaptation to functional data or time series. \\
Very few developments exist in ensemble learning for the unsupervised framework, namely clustering analysis and density estimation. Our work concerns the latter case, which may be seen as a fundamental problem in statistics. Among the most recent developments, we find extensions of boosting (\cite{Freund}) and stacking (\cite{Smyth}) to density estimation. \\
In this paper we suggest some simple algorithms for density estimation in the same spirit as bagging and stacking, where the weak learners are histograms. We show by extensive simulations that aggregation gives rise to effectively better estimates. We compare our algorithms to several algorithms for density estimation, some of them simple, like the histogram and kernel density estimators (KDE), and others rather complex, like stacking and boosting, which will be described in detail. As we will show in our experiments, although the accuracy of our algorithms is not systematically higher than that of other ensemble methods, they are without doubt simpler, more intuitive and computationally less expensive.
Boosting algorithms and stacking for density estimation are described in section 2. Section 3 describes our algorithms. Simulations and results are given in section 4 and concluding remarks and future work in section 5.
\section{A review of the existing algorithms}
In this section we review some density estimators obtained by aggregation. They may be classified into two categories depending on the aggregation form.\\
The first type has the form of linear or convex combination:
\begin{equation}
f_M(x)= \sum_{m=1}^M \alpha_m g_m(x)
\label{LC}
\end{equation}
where $g_m$ is typically a parametric or non-parametric density model, and different values of $m$ typically refer to
\begin{itemize}
\item different parameters values in the parametric case or,
\item different kernels, or
\item different bandwidths for a chosen kernel for the kernel density estimators.
\end{itemize}
The second type of aggregation is multiplicative and is based on the ideas of high order bias reduction for kernel density estimation (\cite{Jones}). The aggregated density estimator has the form:
\begin{equation}
f_M(x)= \prod_{m=1}^M \alpha_m g_m(x)
\label{BR}
\end{equation}
\subsection{Linear or convex combination of density estimators}
This kind of estimator (\ref{LC}) has been used in several works with different construction schemes.
\begin{itemize}
\item In \cite{Rosset}, \cite{Ridgeway} and \cite{Song} the weak learners $g_m$ are introduced sequentially in the combination. At step $m$, $g_m$ is chosen to maximize the log likelihood of
\begin{equation} f_{m}(x)= (1 - \alpha) f_{m-1}(x) + \alpha g_m(x) \label{maj} \end{equation}
where $g_m$ is a density selected among a fixed class $\mathcal{H}$.\\
In \cite{Rosset}, $g_m$ is selected among a non-parametric family of estimators, while in \cite{Ridgeway} and \cite{Song} it is taken to be a Gaussian density or a mixture of Gaussian densities whose parameters are estimated. Different methods are used to estimate both the density $g_m$ and the mixture coefficient $\alpha$.
In \cite{Ridgeway}, $g_m$ is a Gaussian density and the log likelihood of (\ref{maj}) is maximized using a special version of Expectation Maximization (EM), taking into account that part of the mixture is known.
The main idea underlying the algorithms given by \cite{Rosset} and \cite{Song} is to use a Taylor expansion of the negative log likelihood that we wish to minimize:
$$ \sum_i - \log (f_{m}(x_i)) = \sum_i - \log (f_{m-1}(x_i)) - \alpha \sum_i \frac{g_m(x_i)}{f_{m-1}(x_i)} + O(\alpha^2) $$
For $\alpha$ small we have the approximation
$$ \sum_i - \log (f_{m}(x_i)) \sim \sum_i - \log (f_{m-1}(x_i)) - \alpha \sum_i \frac{g_m(x_i)}{f_{m-1}(x_i)} $$
thus, minimizing the left-hand term is equivalent to maximizing $\sum_i \frac{g_m(x_i)}{f_{m-1}(x_i)}$.
All the algorithms described above are sequential and the number of weak learners aggregated may be fixed by the user.
\item \cite{Smyth} use stacked density estimators, applying the same aggregation scheme as in stacked regression and classification (\cite{Wolpert}). The $M$ densities $g_m$ are fixed in advance (KDE with different bandwidths). The data set $\mathcal{L}=\{x_1,\dots,x_n\}$ is divided into $V$ cross validation subsets $\mathcal{L}_1,\dots,\mathcal{L}_V$. For $v=1,..,V$, denote $\mathcal{L}^{(-v)}=\mathcal{L}-\mathcal{L}_v$.
The $M$ models $g_1,\dots,g_M$ are fitted using the training samples $\mathcal{L}^{(-1)},\dots,\mathcal{L}^{(-V)}$, the obtained estimates are denoted $\widehat{g}_m^{(-1)},\dots,\widehat{g}_m^{(-V)}$ for all $m=1,\dots,M$. These models are then evaluated over the test sets $\mathcal{L}_1,\dots,\mathcal{L}_{V}$, getting the vectors $\widehat{g}_m^{(-v)}(\mathcal{L}_v)$ for $m=1,\dots,M,\, v=1,\dots,V$ put within a $n \times M$ matrix
$$
A=\left(
\begin{array}{cccc} \widehat{g}_1^{(-1)}(\mathcal{L}_1) & \dots & \dots & \widehat{g}_M^{(-1)}(\mathcal{L}_1) \\
\widehat{g}_1^{(-2)}(\mathcal{L}_2) & \dots & \dots & \widehat{g}_M^{(-2)}(\mathcal{L}_2)\\
\vdots & & & \vdots \\
\widehat{g}_1^{(-V)}(\mathcal{L}_{V}) & \dots & \dots & \widehat{g}_M^{(-V)}(\mathcal{L}_{V})
\end{array}\right)
$$
This matrix is used to compute the coefficients $\alpha_1, \dots, \alpha_M$ using the Expectation-Maximization algorithm. Finally, for the output model, we re-estimate the individual densities $g_1,\dots,g_M$ from the whole data $\mathcal{L}$.
\item In \cite{Rigo} the densities $\{g_m\}_{m=1,...,M}$ are fixed in advance like for stacking (KDE estimators with different bandwidths). The dataset is split in two parts. The first sample is used to estimate the densities $g_m$, whereas the coefficients $\alpha_m$ are optimized using the second sample. The splitting process is repeated and the aggregated estimators for each data split are averaged. The final model has the form
$$ f_M(x) = \frac{1}{card\{S\}}\sum_{s \in S} \tilde{g}^s_M(x)$$
where $S$ is the set of all the splits used and
$$ \tilde{g}^s_M(x) = \sum_{m=1}^M \alpha_m g_m^s(x)$$
is the aggregated estimator obtained from one split $s$ of the data, $g_m^s$ is the individual kernel density function estimated over the learning sample obtained from the split $s$. This algorithm is called {\it AggPure}.
\end{itemize}
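The greedy scheme shared by these sequential algorithms can be sketched as follows. This is a minimal illustration assuming a fixed, finite candidate class and a constant step size $\alpha$; the cited papers use richer candidate families and optimize the mixing coefficient, so the function name and parameters here are ours.

```python
import numpy as np

def greedy_mixture(x, candidates, M=25, alpha=0.05):
    """Greedy convex-combination density estimation (sketch).

    `candidates` is a fixed finite class H of density functions
    g(t) -> density values at t.  At each step the candidate maximizing
    sum_i g(x_i) / f_{m-1}(x_i) is mixed in with constant step size
    `alpha`, following the first-order Taylor argument above.
    Returns the mixture weights over `candidates`.
    """
    G = np.column_stack([g(x) for g in candidates])  # n x |H| values g_j(x_i)
    w = np.zeros(G.shape[1])
    # initialize with the single candidate of highest log likelihood
    w[np.argmax(np.log(G).sum(axis=0))] = 1.0
    for _ in range(M):
        f = G @ w                                    # f_{m-1}(x_i)
        j = np.argmax((G / f[:, None]).sum(axis=0))  # best g by the Taylor criterion
        w *= 1.0 - alpha                             # f_m = (1-a) f_{m-1} + a g_j
        w[j] += alpha
    return w
```

The final estimate is $f_M(t)=\sum_j w_j\, g_j(t)$; by construction the weights are non-negative and sum to one.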
\subsection{Multiplicative aggregation}
The only algorithm giving rise to this form of aggregation is the one described in \cite{DiMarz}, called {\it BoostKde}. It is a sequential algorithm where at each step $m$ the weak learner is computed as follows:
$$\hat{g}_m(x)=\sum \limits_{i=1}^{n} \frac{w_m(i)}{h}K\left(\frac{x-x_i}{h}\right)$$
where $K$ is a fixed kernel, $h$ its bandwidth, and $w_m(i)$ the weight of observation $i$ at step $m$. As in boosting, the weight of each observation is updated:
$$w_{m+1}(i)=w_m(i)+\log\left(\frac{\hat{g}_m(x_i)}{\hat{g}_{m}^{(-i)}(x_i)}\right)$$
where $\hat{g}_{m}^{(-i)}(x_i)=\sum \limits_{j=1,j \neq i}^n \frac{w_m(j)}{h}K\left(\frac{x_j - x_i}{h}\right)$.
The output is given by $\hat{f}_M(x)= C \prod \limits_{m=1}^{M}\hat{g}_m(x)$, where $C$ is a normalization constant. The algorithm is summarized in figure \ref{Dimarz}.
\begin{figure}[h]
\fbox{
\begin{minipage}{0.9\textwidth}
{\small \begin{enumerate}
\item For $i=1,\dots,n$, initialize the weights of the observations $w_1(i)=\frac{1}{n}$ and fix the bandwidth $h$.
\item For $m=1$ to $M$:
\begin{enumerate}
\item Compute the weighted kernel estimate $$\widehat{g}_m(x)= \sum \limits_{i=1}^{n} \frac{w_m(i)}{h}K \left(\frac{x-x_i}{h} \right)$$
\item Update the weights $w_{m+1}(i)=w_{m}(i) + \log \left(\frac{\widehat{g}_m(x_i)}{\widehat{g}_{m}^{(-i)}(x_i)} \right)$
with $\widehat{g}_{m}^{(-i)}(x_i)=\sum \limits_{j=1,j \neq i}^n \frac{w_m(j)}{h}K\left(\frac{x_j - x_i}{h}\right)$.
\end{enumerate}
\item Output: $\hat{f}_M(x)= C \prod \limits_{m=1}^{M}\widehat{g}_m(x)$ ($C$ normalization constant such that $f_M$ integrates to unity).
\end{enumerate} }
\end{minipage}}
\caption{Boosting kernel density estimation algorithm ((BoostKde), \cite{DiMarz}) \label{Dimarz}}
\end{figure}
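A compact sketch of the {\it BoostKde} loop above, assuming a Gaussian kernel; evaluating the product estimate on a fixed grid and normalizing it by the trapezoidal rule are implementation choices of this sketch, not prescriptions of \cite{DiMarz}.

```python
import numpy as np

def boost_kde(x, h, M=5, n_grid=512):
    """Sketch of multiplicative boosting of kernel density estimates
    (the BoostKde loop above), with a Gaussian kernel.  Returns a grid
    and the normalized product estimate on it; the normalization
    constant C is obtained by numerical integration."""
    K = lambda u: np.exp(-0.5 * u * u) / np.sqrt(2.0 * np.pi)
    n = len(x)
    w = np.full(n, 1.0 / n)                    # step 1: uniform weights
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, n_grid)
    f = np.ones(n_grid)
    # pairwise kernel values K((x_i - x_j)/h)/h, reused at every step
    D = K((x[None, :] - x[:, None]) / h) / h
    for _ in range(M):
        # weighted KDE  g_m(t) = sum_i w_m(i)/h K((t - x_i)/h)
        f *= (w / h) @ K((grid[None, :] - x[:, None]) / h)
        # leave-one-out values g_m^{(-i)}(x_i) for the weight update
        g_at_x = w @ D
        g_loo = g_at_x - w * D.diagonal()      # remove the self term K(0)/h
        w = w + np.log(g_at_x / g_loo)
    C = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))  # trapezoidal rule
    return grid, f / C
```

By construction the returned estimate integrates to one over the grid.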
\section{Aggregating Histograms}
We suggest three new density estimators obtained by linear combination as in (\ref{LC}); all of them use histograms as weak learners. The first two algorithms randomize the histograms and may be parallelized. The third one is a simple modification of stacking.
The first algorithm is similar to Bagging (\cite{Brei-bag}). At each step $m$, a bootstrap sample of the original dataset is generated, and $g_m$ is the histogram obtained from this bootstrap sample with a fixed number of equally spaced breakpoints. We will refer to this algorithm as {\it BagHist}; it is detailed in figure \ref{Baghist}.
\begin{figure}[!ht]
\fbox{
\begin{minipage}{0.9\textwidth}
\hspace{5mm}
{\small \begin{enumerate}
\item Let $\mathcal{L}$ be the original sample
\item For $m=1$ to $M$:
\begin{enumerate}
\item Let $\mathcal{L}^{m}$ be a bootstrap sample of $\mathcal{L}$
\item Set $g_m$ to be the histogram constructed over $\mathcal{L}^m$ with $L$ equally spaced breakpoints.
\end{enumerate}
\item Output: $f_M(x)= \frac{1}{M} \sum_1^M g_m(x)$ \\
\end{enumerate} }
\end{minipage}}
\caption{Bagging of histograms ({\it BagHist})}
\label{Baghist}
\end{figure}
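A minimal sketch of {\it BagHist}. For simplicity, all bootstrap histograms here share the breakpoints computed on the original sample's range; since each bootstrap sample is drawn from $\mathcal{L}$, its values always fall inside that range.

```python
import numpy as np

def bag_hist(x, L=50, M=300, rng=None):
    """Sketch of BagHist: average M histogram density estimates, each
    fitted on a bootstrap resample of x, using L equally spaced
    breakpoints shared by all weak learners."""
    rng = np.random.default_rng(rng)
    breaks = np.linspace(x.min(), x.max(), L)
    dens = np.zeros(L - 1)
    for _ in range(M):
        xb = rng.choice(x, size=len(x), replace=True)       # bootstrap sample
        h, _ = np.histogram(xb, bins=breaks, density=True)  # one weak learner
        dens += h
    return breaks, dens / M   # averaged bin heights f_M on the common bins
```

Since each weak learner integrates to one, so does their average.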
The second algorithm ({\it AggregHist}) works as follows: let $g_0$ be the histogram obtained from the data set at hand using equally spaced breakpoints denoted $\mathcal{B} = \{b_1,b_2,\dots,b_L\}$. Each weak learner $g_m$ is a histogram constructed over the same initial data set but using a randomly modified set of breakpoints; $\gamma$ is a tuning parameter which controls the variance of the perturbations. The algorithm is detailed in figure \ref{Aggreghist}.
\begin{figure}[!ht]
\fbox{
\begin{minipage}{0.9\textwidth}
\hspace{5mm}
{\small \begin{enumerate}
\item Let $\mathcal{L}$ be the original sample, $g_0$ be the histogram constructed over $\mathcal{L}$ and \\ $\mathcal{B} = \{b_1,b_2,...,b_L\}$ the set of the ordered optimized breakpoints.
\item For $m=1$ to $M$:
\begin{enumerate}
\item Set $\mathcal{B}^m = \{b^*_{(1)},b^*_{(2)},...,b^*_{(L)}\}$ the modified breakpoints obtained by setting \\ $b^*_l =b_l + \varepsilon_l$ where $\varepsilon_l \sim N(0,\sigma)$ and $\sigma = \gamma \; min_{1 < l \le L} \left\{ b_{l} - b_{l-1} \right\}$.
\item Set $g_m$ to be the histogram constructed over $\mathcal{L}$ using the breakpoints $\mathcal{B}^m$.
\end{enumerate}
\item Output: $f_M(x)= \frac{1}{M} \sum_1^M g_m(x)$ \\
\end{enumerate} }
\end{minipage}}
\caption{Aggregating histograms based on randomly perturbed breakpoints ({\it AggregHist})}
\label{Aggreghist}
\end{figure}
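A minimal sketch of {\it AggregHist}. Because the perturbed histograms no longer share common bins, each one is evaluated as a piecewise-constant function on a fixed grid before averaging; the grid size is an implementation choice of this sketch.

```python
import numpy as np

def aggreg_hist(x, L=50, M=300, gamma=0.5, n_grid=1000, rng=None):
    """Sketch of AggregHist: average M histograms of the same sample,
    each built on breakpoints jittered by N(0, sigma) noise with
    sigma = gamma * (smallest initial bin width)."""
    rng = np.random.default_rng(rng)
    b = np.linspace(x.min(), x.max(), L)        # initial breakpoints of g_0
    sigma = gamma * np.diff(b).min()
    grid = np.linspace(x.min(), x.max(), n_grid)
    f = np.zeros(n_grid)
    for _ in range(M):
        bp = np.sort(b + rng.normal(0.0, sigma, size=L))   # perturbed, re-ordered
        h, edges = np.histogram(x, bins=bp, density=True)
        # piecewise-constant evaluation of this histogram on the grid
        idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, len(h) - 1)
        vals = h[idx]
        vals[(grid < edges[0]) | (grid > edges[-1])] = 0.0  # outside the support
        f += vals
    return grid, f / M
```

The average integrates to one up to a small boundary error, since each jittered histogram may extend slightly beyond the evaluation grid.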
Finally, we introduce a third algorithm called {\em StackHist}, in which we replace the six kernel density estimators of the stacking algorithm described in the previous section by three histograms with a fixed number of breaks.
The values of the parameters used in these algorithms will be optimized; the procedure is described in the experiments section.
\section{Experiments}
We test several simulation models based on classical distributions and mixture models, mostly taken from the works cited above. The sample size is fixed at $n=100, 500, 1000$. \\
We first show the estimates obtained using {\it BagHist} and {\it AggregHist} and analyze the effect of the number $M$ of aggregated histograms, and then compare them to the other algorithms. \\
\indent To our knowledge, the existing algorithms for density estimation by aggregation have never been compared over a common benchmark of simulated data.
\subsection{Models used for the simulations}
We denote by $\mathcal{M}1,\dots,\mathcal{M}11$ the different simulation models which will be grouped according to their difficulty level.
\begin{itemize}
\item Some standard densities used in \cite{DiMarz}:
($\mathcal{M}1$): standard Gaussian density $N(0,1)$
($\mathcal{M}2$): standard exponential density $f(x)=\left\{\begin{array}{lc} 0 & x<0\\ e^{-x}& x \geq 0\end{array} \right.$
($\mathcal{M}3$): a Chisquare density $\chi^2_{10}$
($\mathcal{M}4$): a Student density $t_4$
\item Some Gaussian mixtures taken from \cite{DiMarz} and \cite{Smyth}:
($\mathcal{M}5$): $0.5 N(-1,0.3)+0.5N(1,0.3)$
($\mathcal{M}6$): $0.5 N(-2.5,1)+0.5N(2.5,1)$
($\mathcal{M}7$): $0.25N(-3,0.5)+0.5N(0,1)+0.25N(3,0.5)$
\item Gaussian mixtures used in \cite{Rigo} and taken from \cite{Marron}:
($\mathcal{M}8$): the Claw density, $0.5 N(0,1)+\sum \limits_{i=0}^{4}\frac{1}{10}N\left(\frac{i}{2}-1,\frac{1}{10}\right)$
($\mathcal{M}9$): the Smooth Comb Density,
$\sum \limits_{i=0}^{5} \frac{2^{5-i}}{63} N \left(\frac{65-96\frac{1}{2^{i}}}{21}, \frac{\left(\frac{32}{63}\right)^2}{2^{2i}}\right)=$
\scriptsize
\begin{equation*}
\frac{32}{63}N \left(-\frac{31}{21}, \frac{32}{63}\right)+\frac{16}{63}N \left(\frac{17}{21}, \frac{16}{63}\right)+\frac{8}{63}N \left(\frac{41}{21}, \frac{8}{63}\right)+\frac{4}{63}N \left(\frac{53}{21}, \frac{4}{63}\right)+\frac{2}{63}N \left(\frac{59}{21}, \frac{2}{63}\right)+\frac{1}{63}N \left(\frac{62}{21}, \frac{1}{63}\right)
\end{equation*}
\normalsize
\item Mixture densities with highly inhomogeneous smoothness, as in \cite{Rigo}:
($\mathcal{M}10$): $0.5 N(0,1) + 0.5 \sum \limits_{i=1}^{10} \mathbf{1}_{\left(\frac{2(i-1)}{T},\frac{2i-1}{T} \right]}$
($\mathcal{M}11$): $0.5 N(0,1) + 0.5 \sum \limits_{i=1}^{14} \mathbf{1}_{\left(\frac{2(i-1)}{T},\frac{2i-1}{T} \right]}$
\end{itemize}
All the simulations are done with the R software, and for models $\mathcal{M}8$ and $\mathcal{M}9$ we use the {\sf benchden} package.
We show below in figures \ref{dens1}, \ref{dens2} and \ref{dens3}, the true densities for the eleven models as well as their estimates obtained using the three algorithms {\it AggregHist}, {\it BagHist} and {\it StackHist} for $n=500$ observations and $M=300$ histograms for the two first algorithms.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.31]{M1M2M3M4MISE-n-500-K-300.eps}
\caption{Densities used in simulation models 1 to 4 together with the corresponding histogram and the estimators given by AggregHist, BagHist and StackHist. \label{dens1}}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.32]{M5M6M7MISE-n-500-K-300.eps}
\caption{Densities used in simulation models 5 to 7 together with the corresponding histogram and the estimators given by AggregHist, BagHist and StackHist.\label{dens2}}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{M8M9M10M11MISE-n-500-K-300.eps}
\caption{Densities used in simulation models 8 to 11 together with the corresponding histogram and the estimators given by AggregHist, BagHist and StackHist. \label{dens3}}
\end{figure}
{\it AggregHist} and {\it BagHist} give smoother estimators than {\it StackHist}.
\newpage
\subsection{Tuning the algorithms}
For the existing algorithms we have used the values suggested by their respective authors:
\begin{itemize}
\item For {\it Stacking}, six kernel density estimators are aggregated, three of them use Gaussian kernels with fixed bandwidths $h=0.1,0.2,0.3$ and three triangular kernels with bandwidths $h=0.1,0.2,0.3$. The number of cross validation samples is fixed to $V=10$.
\item For {\it AggPure} six kernel density estimators are aggregated having bandwidths $0.001,$ $0.005, 0.01, 0.05,0.1$ and $0.5$. We use the EM algorithm to optimize the coefficients of the linear combination. The final estimator is a mean over $S=10$ random splits of the original data set.
\item For {\it Boostkde}, we use $5$ steps of the algorithm, aggregating kernel density estimators whose bandwidths are optimized using Silverman's rule. The output is normalized by numerical integration. Extensive simulations showed that more steps give rise to less accurate estimators.
\end{itemize}
Simple single-kernel density estimators are also used in our comparisons, with bandwidths optimized following Silverman's rule ({\em KdeNrd0}) and unbiased cross-validation ({\em KdeUCV}). \\
For the Histogram, fixed breaks are systematically used and their number is optimized over a fixed grid, retaining the one which maximizes the log likelihood of the obtained histogram over a test sample drawn from the same distribution as the learning sample. \\
The tuning parameters of our algorithms, the number of breakpoints and the value of $\gamma$, are optimized by testing different values for each of them over a fixed grid. We test $10, 20$ and $50$ equally spaced breakpoints in each case. For $\gamma$ we choose the grid $0.5,1,1.5,2,2.5$. The best combination retained for each model is the one which maximizes the log likelihood over $100$ independent test samples drawn from the corresponding model. For {\it BagHist} and {\it AggregHist} we aggregate $M=300$ histograms. The optimal values for the histogram and for our algorithms are given in table \ref{valpars}. We denote the optimal numbers of breaks by $L_H, L_{BH}, L_{AH}$ for the histogram, {\it BagHist} and {\it AggregHist} respectively, and by $\gamma_{AH}$ the perturbation coefficient for {\it AggregHist}.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{|r|rrrr|rrrr|rrrr|}\hline\hline
\multicolumn{1}{|c|}{} &\multicolumn{4}{|c|}{\rule[-.3cm]{0cm}{.8cm} {\bf $n=100$}} & \multicolumn{4}{|c|}{\rule[-.3cm]{0cm}{.8cm} {$n=500$}} & \multicolumn{4}{|c|}{\rule[-.3cm]{0cm}{.8cm} {$n=1000$}}\\ \hline
\multicolumn{1}{c}{}&\multicolumn{1}{c}{$L_H$}&\multicolumn{1}{c}{$L_{AH}$}&\multicolumn{1}{c}{$L_{BH}$}&\multicolumn{1}{c}{$\gamma_{AH}$}&\multicolumn{1}{c}{$L_H$}&\multicolumn{1}{c}{$L_{AH}$}&\multicolumn{1}{c}{$L_{BH}$}&\multicolumn{1}{c}{$\gamma_{AH}$}
&\multicolumn{1}{c}{$L_H$}&\multicolumn{1}{c}{$L_{AH}$}&\multicolumn{1}{c}{$L_{BH}$}&\multicolumn{1}{c}{$\gamma_{AH}$}\\
\hline
$\mathcal{M}1$ & $50$ & $10$ &$50$&$1.0$&$50$&$10$&$10$&$0.5$&$50$&$20$&$20$&$0.5$\tabularnewline
$\mathcal{M}2$&$50$&$10$&$50$&$0.5$&$50$&$50$&$50$&$0.5$&$50$&$50$&$50$&$0.5$\tabularnewline
$\mathcal{M}3$&$50$&$10$&$50$&$0.5$&$50$&$10$&$50$&$0.5$&$50$&$20$&$50$&$0.5$\tabularnewline
$\mathcal{M}4$&$50$&$20$&$50$&$0.5$&$50$&$50$&$50$&$0.5$&$50$&$50$&$50$&$0.5$\tabularnewline
$\mathcal{M}5$&$50$&$20$&$50$&$1.0$&$50$&$50$&$50$&$2.0$&$50$&$50$&$50$&$1.0$\tabularnewline
$\mathcal{M}6$&$50$&$10$&$50$&$1.0$&$50$&$20$&$20$&$1.0$&$50$&$50$&$20$&$2.0$\tabularnewline
$\mathcal{M}7$&$50$&$20$&$10$&$0.5$&$20$&$20$&$20$&$0.5$&$20$&$50$&$20$&$0.5$\tabularnewline
$\mathcal{M}8$&$50$&$20$&$50$&$0.5$&$50$&$50$&$50$&$0.5$&$50$&$50$&$50$&$2.0$\tabularnewline
$\mathcal{M}9$&$50$&$20$&$50$&$0.5$&$50$&$50$&$50$&$0.5$&$50$&$50$&$50$&$1.0$\tabularnewline
$\mathcal{M}10$&$50$&$50$&$50$&$0.5$&$50$&$50$&$50$&$0.5$&$50$&$50$&$50$&$0.5$\tabularnewline
$\mathcal{M}11$&$50$&$20$&$50$&$0.5$&$50$&$50$&$50$&$0.5$&$50$&$50$&$50$&$0.5$\tabularnewline
\hline
\end{tabular}
\caption{Optimal parameter values used for our algorithms. \label{valpars}}
\end{center}
\end{table}
Finally, for {\it StackHist} we aggregate six histograms having $5$, $10$, $20$, $30$, $40$ and $50$ equally spaced breakpoints. A ten-fold cross-validation is used.
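The stacking step can be sketched as follows (a minimal illustration, not the authors' implementation): each member histogram is fitted on the training part of a fold and evaluated at the held-out points, then convex combination weights are chosen to maximize the held-out log-likelihood, here via the classical EM update for mixture weights.

```python
import numpy as np

def histogram_density(sample, n_breaks):
    """Density function of a fixed-breaks histogram fitted on `sample`."""
    edges = np.linspace(sample.min(), sample.max(), n_breaks + 1)
    counts, _ = np.histogram(sample, bins=edges)
    heights = counts / (len(sample) * np.diff(edges))
    def f(x):
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_breaks - 1)
        inside = (x >= edges[0]) & (x <= edges[-1])
        return np.where(inside, heights[idx], 1e-12)  # small floor avoids log(0)
    return f

def stack_hist(sample, breaks_grid=(5, 10, 20, 30, 40, 50), n_folds=10,
               n_iter=200, rng=None):
    """StackHist sketch: EM-estimated convex weights over member histograms."""
    rng = np.random.default_rng(rng)
    folds = np.array_split(rng.permutation(len(sample)), n_folds)
    # P[i, m] = density of member m (fitted without the fold of point i) at point i.
    P = np.zeros((len(sample), len(breaks_grid)))
    for fold in folds:
        train = np.delete(sample, fold)
        for m, L in enumerate(breaks_grid):
            P[fold, m] = histogram_density(train, L)(sample[fold])
    w = np.full(len(breaks_grid), 1.0 / len(breaks_grid))
    for _ in range(n_iter):                      # EM updates for mixture weights
        resp = P * w
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)
    members = [histogram_density(sample, L) for L in breaks_grid]
    return lambda x: sum(wi * f(x) for wi, f in zip(w, members)), w
```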
\subsection{Results}
The performance of each model is evaluated using the Mean Integrated Squared Error (MISE), estimated as the average of the integrated squared error over $100$ simulations. First, for both {\it AggregHist} and {\it BagHist} we analyze the effect of the number $M$ of aggregated histograms. Figures \ref{evol1}, \ref{evol2} and \ref{evol3} show how the MISE varies as the number of histograms increases. These figures clearly show the contribution of aggregation to the reduction of the MISE. For all the models, the error does not decrease significantly after about $100$ iterations.
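The evaluation protocol can be sketched as follows (a minimal illustration, not the simulation code used here; the `fit` and `sampler` arguments are hypothetical placeholders for an estimator and a data-generating model):

```python
import numpy as np

def integrated_squared_error(f_hat, f_true, grid):
    """ISE = integral of (f_hat - f_true)^2, by the trapezoidal rule on `grid`."""
    diff2 = (f_hat(grid) - f_true(grid)) ** 2
    return float(np.sum(0.5 * (diff2[1:] + diff2[:-1]) * np.diff(grid)))

def estimate_mise(fit, f_true, sampler, grid, n_sim=100):
    """Average the ISE of `fit(sample)` over `n_sim` independent simulated samples."""
    ises = [integrated_squared_error(fit(sampler()), f_true, grid)
            for _ in range(n_sim)]
    return float(np.mean(ises))
```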
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.35]{evolerreurM1M2M3M4-n-500-K-300.eps}
\caption{MISE error versus number of aggregated histograms for Models 1 to 4. \label{evol1}}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.35]{evolerreurM5M6M7-n-500-K-300.eps}
\caption{MISE error versus number of aggregated histograms for Models 5 to 7. \label{evol2}}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.35]{evolerreurM8M9M10M11-n-500-K-300.eps}
\caption{MISE error versus number of aggregated histograms for Models 8 to 11. \label{evol3}}
\end{figure}
\newpage
\clearpage
We now compare our algorithms {\it BagHist}, {\it AggregHist} and {\it StackHist} to the following methods: the Histogram, {\it KdeNrd0}, {\it KdeUCV}, {\it Stacking}, {\it AggPure} and {\it BoostKde}. We limit the comparison to ensemble methods that aggregate nonparametric density estimators.\\
Tables 1 to 3 give the values of $100\times$ MISE for each method and simulation model, for the three values of $n$. The best performances are indicated in bold.\\
In most cases, the aggregation models are more accurate than simple methods such as the Histogram and KDE. This is however not the case for model $\mathcal{M}3$: KDE performs better there, probably because of boundary effects. {\it BoostKde} generally gives better estimates for the mixture models. All methods generally become more accurate as the sample size $n$ increases. {\it BagHist} and {\it AggregHist} always outperform the optimal Histogram, and both are in general more accurate than {\it StackHist}. For $n=100$, {\it BagHist} and {\it AggregHist} outperform the other algorithms on the complicated models ($\mathcal{M}8$--$\mathcal{M}10$). For $n=500$ and $n=1000$, {\it AggPure} outperforms the other methods on the last two models.
Although our algorithms do not always outperform the other methods, their precision is never far from the best one.\\
\input{synthese2}
\newpage
\section{Conclusion}
In this work we presented three new algorithms for density estimation based on aggregating histograms. Two of them aggregate histograms built on bootstrap samples of the data or on randomly perturbed breakpoints. The third is a simple adaptation of the stacking algorithm in which histograms are used instead of kernel density estimators. We have shown through extensive simulations that these algorithms and the other ensemble techniques are more accurate than the Histogram or KDE. The first two algorithms, {\it BagHist} and {\it AggregHist}, are very simple to implement, depend on very few parameters, and their computational complexity is proportional to that of a single histogram. Theoretical properties of these algorithms are under study. Most of the algorithms described in this work can easily be generalized to the multivariate case.
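The bootstrap-aggregation idea can be sketched as follows (a minimal illustration of the principle, assuming equally spaced breaks; not the authors' implementation):

```python
import numpy as np

def histogram_density(sample, n_breaks):
    """Density function of a fixed-breaks histogram fitted on `sample`."""
    edges = np.linspace(sample.min(), sample.max(), n_breaks + 1)
    counts, _ = np.histogram(sample, bins=edges)
    heights = counts / (len(sample) * np.diff(edges))
    def f(x):
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_breaks - 1)
        inside = (x >= edges[0]) & (x <= edges[-1])
        return np.where(inside, heights[idx], 0.0)
    return f

def bag_hist(sample, n_breaks=20, m=300, rng=None):
    """BagHist sketch: average m histograms fitted on bootstrap resamples."""
    rng = np.random.default_rng(rng)
    members = [histogram_density(rng.choice(sample, size=len(sample), replace=True),
                                 n_breaks)
               for _ in range(m)]
    return lambda x: np.mean([f(x) for f in members], axis=0)
```

Averaging the member densities pointwise keeps the estimate nonnegative and (up to boundary effects) of total mass one, while smoothing out the variance of any single histogram.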
Q: Visible JavaScript Countdown Timer
I'm trying to build a JavaScript countdown timer.
I need a visible timer that shows a button when it finishes.
Can anyone help me build it?
A: You can do this using setInterval.
Here's a basic demo.
var secondsRemaining = 3;

// Render the current count into the #secs element.
function updateTime() {
  $("#secs").text(secondsRemaining);
}
updateTime();

var i = setInterval(function() {
  secondsRemaining -= 1;
  if (secondsRemaining > 0) {
    updateTime();
  } else {
    // Time is up: stop the timer and swap in the button.
    clearInterval(i);
    $("#secs").html("<button id='myButton'>Click me!</button>");
  }
}, 1000);

// Delegate the click handler so it also fires for the dynamically added button.
$("#secs").on("click", "#myButton", function() {
  alert("Hello!");
});

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<div id="secs"></div>
If you want to read more about how this works, this is a pretty decent tutorial: http://www.sitepoint.com/build-javascript-countdown-timer-no-dependencies/
Research website of Vyacheslav Gorchilin, 2021-05-27

Longitudinal wave transmission lines
Authors: Gorchilin V.V., Znamenskiy A.V.

The title of this work does not correspond to the classical canons of theoretical electrodynamics, but it agrees well with phenomena observed in practice. It is from a practical point of view, on the basis of numerous experiments, that we present here some systems for the transmission of electrical energy through a power transmission line (PTL) whose parameters differ by orders of magnitude from the calculated ones. For example, the diameter of the transmission-line conductors can be an order of magnitude smaller than that required for power transmission by the classical method. This means that the consumption of non-ferrous metals for transmitting the same power, with the same or even lower losses, and accordingly the cost of construction and installation work, can be reduced significantly. A technical and economic calculation for the example of a transmission line 3 km long with a capacity of 10 kW supports its financial feasibility. The point is that electrical energy can be transmitted using a longitudinal wave [1], in which charges move along the surface of the conductor, and which, by the way, does not exist in theory :)

In fact, as practice has shown, when transferring energy in a wire two waves always peacefully coexist: a transverse and a longitudinal one. If the first is well studied and theoretically substantiated, the second turns out to be a novelty even for experienced radio-electronics engineers. But for effective energy transfer one only needs to learn how to change and correctly find the relationship between them! In this work we touch upon some aspects of this problem, show and compare circuitry options for obtaining a longitudinal wave in a conductor, derive some mathematical relationships, and arrive at an optimized device circuit for efficient transmission of electrical energy.

The block diagram of the generation and transmission of electrical energy using longitudinal waves is shown in Figure 1. Generator G1 generates oscillations, which are brought to the required level of current and voltage in the reactive energy amplifier LC1. Based on experimental data, the reactive power in LC1 should be several times higher than the active power obtained at the receiving end; this fact, by the way, is one of the theoretical and practical problems of this direction. A part of this power then enters the transmission line LL1 and is transmitted by longitudinal waves to the receiving circuit LC2, whose task is the reverse transformation of longitudinal waves into classical active power, which is then fed to the load Rn.

The appearance of the longitudinal wave is due to resonance processes occurring in the LC1-LL1-LC2 system and owes its origin to the reactive component. While in industrial networks this component is fought in all sorts of ways, here it takes the most direct part in the transfer of energy. Interestingly, as practice has shown, with a sufficiently long transmission line the parameters of LC2 have practically no effect on the resonant frequency.

Fig. 1. Block diagram of the generation and transmission of electrical energy using longitudinal waves

With this method of transferring electrical energy, the active resistance of the transmission line does not introduce losses into the transmitted power and is therefore not considered in this work. Active losses are concentrated mainly in the reactive power amplifier LC1 and in the receiving circuit LC2; we discuss reducing and optimizing losses in these circuit elements further on.

The second wire, shown in the figure as a common point, can in reality be made in two versions: as a cable braid (Fig. 2a), or as two earthing connections at the transmitting and receiving ends (Fig. 2b). This point works as a counterweight or support, as was suggested in the Tesla transformer system [2], where it played the role of a "zeroer". Each of these options has its own advantages and disadvantages, but in this work we consider the first option for LL1: a cable with a center core and braid.

Owing to the incompleteness of the work on single-wire lines and the upcoming R&D, the structural and basic electrical diagrams below are presented in a generalized form without a number of technical subtleties, without which, and without the appropriate scientific and technical training, implementation will be difficult. At the same time, in case of interest in financial support of the project, or in R&D, the authors are ready to consider proposals for cooperation.

Transmitting and receiving circuitry

Figure 2 shows the basic block diagram of the transmitting-receiving circuit for the transmission of electrical energy by a longitudinal wave. Here G1 is the master oscillator, U1 an amplifier, C1 a resonant blocking capacitor, L1 a resonant transmitting transformer, and L2 a broadband receiving transformer. U1-C1-L1 form the reactive power amplifier (LC1 in Fig. 1), which is responsible for generating the longitudinal wave that propagates along the conductor LL1 and enters the receiving transformer L2. There the energy transmitted by the longitudinal wave is taken off and transformed into classical total power, which is supplied to the load Rn. Generally speaking, a resonant capacitance should also be placed at the receiving end, forming a receiving resonant circuit with L2, but in practice it turned out to be sufficient for the transformer L2 to be broadband.

Fig. 2. Basic block diagram of the receiving-transmitting circuit for the transmission of electrical energy by a longitudinal wave

The resonant frequency of the transmitting part, as it turned out in practice, can be calculated using the following formula:
$\omega = \frac{(1+n)^{1/4}}{(L_{1.1} C_1)^{1/2}} \qquad (1)$
where $n$ is the turns ratio of the secondary and primary windings of the transformer L1, and $\omega = 2\pi f$ is the angular frequency, $f$ being the frequency of the master oscillator.

The main advantage of this circuitry is the galvanic isolation of the LC1 unit from the rest of the circuit: the power line and the receiver. But such isolation is not always required; for example, it is not needed to transmit energy from solar panels, wind turbines and similar devices. In addition, using the standard grounding of electrical networks and appropriate rectifier circuitry, galvanic isolation can also be avoided. This approach immediately gives a gain in active losses in L1 due to the absence of a secondary winding (Fig. 3).

Fig. 3. Conversion of the circuitry: moving away from the transmitting transformer

One can go further and also move away from the two windings in the receiving coil L2. In practice, however, it turned out that this cannot be done in the same way as with L1; the approach had to be changed slightly, given that we are dealing with a longitudinal wave that must be converted into ordinary electricity (Fig. 4).

Fig. 4. Conversion of the circuitry: moving away from the receiving transformer

With this connection the coil L1 has its own peculiarities. For example, if it is wound on a ferrite ring, then even with the same inductance a longitudinal wave will not form and the device will not work: L1 must have a broken (gapped) core. In the case of the Tesla transformer [2] the gap is formed automatically, but if L1 is wound on an initially closed core, a small gap must be maintained between its parts in the final structure; for example, in a W-shaped core a gap of 0.5-2 mm is required between its halves. According to the authors, the gap in the core forms a second magnetic field and, accordingly, the longitudinal wave.

The variant in which C1 and L1 are interchanged is also interesting (Fig. 5). In this case, in addition to the longitudinal wave, the constant component from the output of the amplifier U1, which in the previous cases was cut off by the capacitance C1, is also transmitted along the conductor LL1. In classical circuitry such a connection is called an L-shaped filter, and its elements can be calculated by well-known formulas. This can be interesting for the combined transmission of energy in different ways at the same time, which increases the utilization of transmission lines. Moreover, this method can be applied to existing industrial networks by adding several devices at the input and output of the power lines.

Fig. 5. Swapping C1 and L1

The circuitry options shown in Figures 4-5 have lower active losses than the basic option, owing to the absence of interwinding losses during transformation, but there are some conditions for their application. The coil L2, as it turned out in practice, should be one of the versions of the Tesla transformer [2,3], and one of its windings is enough. Its optimal inductance is calculated by the following formula:
$L_2 = \frac{R_n}{\omega} \qquad (2)$
where $R_n$ is the resistance of the load Rn. The optimal ratio of inductance and capacitance can be found as
$\rho = \sqrt{\frac{L_1}{C_1}} \qquad (3)$
where $\rho$ is the known wave impedance of the transmission line. The resonant frequency of the transmitting circuit is calculated by formula (1) taken with $n = 0$. The remaining parameters of the elements of the transmitting part are fairly well calculated by the formulas in [4].

One more note should be made about the coil L2: in comparison with L1, no large reactive energy circulates in it, which means that the diameter of its winding wire and its dimensions can in practice be several times smaller. In the next part of this work we will consider and compare circuitry solutions for the amplifier U1 and draw conclusions about the optimization of the entire device.

Materials used
1. Koltovoy N.A. Book 5, Part 2-07: Longitudinal waves. [PDF]
2. Wikipedia: Tesla transformer.
3. Coil for electro-magnets. US512340, 1893. Inventor: Nikola Tesla.
4. Yuferev L.Yu., Roshchin O.A., Alexandrov D.V., Sokolov A.V. Investigations of the resonant power transmission system at increased frequency. [PDF]
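Formulas (1)-(3) can be evaluated directly; the sketch below is purely illustrative, and the component values used in the example (1 mH, 1 nF, a 50-ohm load) are assumptions, not values from the article:

```python
import math

def resonant_angular_frequency(l1, c1, n=0.0):
    """Formula (1): omega = (1 + n)^(1/4) / sqrt(L1 * C1)."""
    return (1.0 + n) ** 0.25 / math.sqrt(l1 * c1)

def receiver_inductance(r_load, omega):
    """Formula (2): optimal L2 = Rn / omega."""
    return r_load / omega

def wave_impedance(l1, c1):
    """Formula (3): rho = sqrt(L1 / C1)."""
    return math.sqrt(l1 / c1)

# Example with assumed values: L1 = 1 mH, C1 = 1 nF, Rn = 50 ohm.
omega = resonant_angular_frequency(1e-3, 1e-9)   # 1e6 rad/s when n = 0
f = omega / (2 * math.pi)                        # master-oscillator frequency
```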
# Enable the PHP mcrypt extension on Debian-family platforms,
# then restart Apache so the change takes effect.
case node[:platform]
when "debian", "ubuntu"
  execute "enable_mcrypt" do
    user "root"
    command "php5enmod mcrypt && service apache2 restart"
    only_if { ::File.exist?("/etc/php5/mods-available/mcrypt.ini") }
  end
end
## Proof of Formula for the Curved Surface Area of a Frustum

The curved surface area of a cone of radius $r$ and slant height $l$ is $A=\pi r l$.

Consider a frustum obtained by cutting a cone of base radius $R$ and slant height $L$ parallel to its base, leaving a top face of radius $r$; the removed cone has slant height $L-l$, so the slant height of the frustum is $l$. By similar triangles,
$\frac{L-l}{r}=\frac{L}{R} \rightarrow LR-lR=Lr \rightarrow L=\frac{lR}{R-r}$
and therefore
$L-l=\frac{lR}{R-r}-l=\frac{lR-lR+lr}{R-r}=\frac{lr}{R-r}.$
The curved surface area of the frustum is the lateral area of the full cone minus that of the removed cone:
\begin{aligned} A_{FRUSTUM}&=\pi RL-\pi r(L-l) \\ &=\pi \left(R \frac{lR}{R-r} -r \frac{lr}{R-r} \right) \\ &=\frac{\pi l}{R-r}(R^2-r^2) \\ &= \frac{\pi l}{R-r}(R-r)(R+r)= \pi l(R+r)\end{aligned}
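A quick numeric sanity check of the result $A=\pi l(R+r)$: computing the frustum area both from the closed form and as the difference of the two cone areas should agree (the values $R=3$, $r=1$, $l=2$ are arbitrary test inputs).

```python
import math

def frustum_lateral_area(R, r, l):
    """Closed form for the curved surface area of a frustum: pi * l * (R + r)."""
    return math.pi * l * (R + r)

def frustum_area_via_cones(R, r, l):
    """Same area computed as big-cone minus small-cone lateral areas."""
    L = l * R / (R - r)                      # slant height of the full cone
    return math.pi * R * L - math.pi * r * (L - l)
```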
{"url":"http:\/\/leoreinfeld.com\/9ziiglp\/neural-language-models-f0635f","text":". There, a separate language model is associated with each document in a collection. However, n-gram language models have the sparsity problem, in which we do not observe enough data in a corpus to model language accurately (especially as n increases). It is assumed that the probability of observing the ith word wi in the context history of the preceding i\u00a0\u2212\u00a01 words can be approximated by the probability of observing it in the shortened context history of the preceding n\u00a0\u2212\u00a01 words (nth order Markov property). The final part will discuss two recently proposed regularization techniques for improving RNN based language models. Given the RNN output at a certain time step, the model would like to assign similar probability values to similar words. \u21a9, This is the large model from Recurrent Neural Network Regularization. By Apoorv Sharma. This paper presents novel neural network based language models that can correct automatic speech recognition (ASR) errors by using speech recognizer outputs as a context. Intuitively, this loss measures the distance between the output distribution predicted by the model and the target distribution for each pair of training words. Ambiguities are easier to resolve when evidence from the language model is integrated with a pronunciation model and an acoustic model. Re-sults indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. The probability distributions from different documents are used to generate hit probabilities for each query. Neural Language Model. Multimodal Neural Language Models Figure 1. Language modeling is the task of predicting (aka assigning a probability) what word comes next. 
Various methods are used, from simple \"add-one\" smoothing (assign a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated models, such as Good-Turing discounting or back-off models. 1 of observing the sentence Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. Language modeling (LM) is the essential part of Natural Language Processing (NLP) tasks such as Machine Translation, Spell Correction Speech Recognition, Summarization, Question Answering, Sentiment analysis etc. This model is the skip-gram word2vec model presented in Efficient Estimation of Word Representations in Vector Space. Multimodal Neural Language Models as a feed-forward neural network with a single linear hidden layer. Despite the limited successes in using neural networks,[15] authors acknowledge the need for other techniques when modelling sign languages. The input embedding and output embedding have a few properties in common. In this section I\u2019ll present some recent advances that improve the performance of RNN based language models. In this model, the probability of each word only depends on that word's own probability in the document, so we only have one-state finite automata as units. is the partition function, Each word w in the vocabu-lary is represented as a D-dimensional real-valued vector r w 2RD.Let R denote the K D matrix of word rep- . MIT Press. The perplexity of the variational dropout RNN model on the test set is 75. 01\/12\/2020 01\/11\/2017 by Mohit Deshpande. To summarize, this post presented how to improve a very simple feedforward neural network language model, by first adding an RNN, and then adding variational dropout and weight tying to it. 1 Neural language models are a fundamental part of many systems that attempt to solve natural language processing tasks such as machine translation and speech recognition. from. 
[12], Instead of using neural net language models to produce actual probabilities, it is common to instead use the distributed representation encoded in the networks' \"hidden\" layers as representations of words; each word is then mapped onto an n-dimensional real vector called the word embedding, where n is the size of the layer just before the output layer. Language modeling is the task of predicting (aka assigning a probability) what word comes next. {\\displaystyle a} It seems the language model nicely captures is-type-of, entity-attribute, and entity-associated-action relationships. Commonly, the unigram language model is used for this purpose. Right two columns: description generation. Language modeling is the task of predicting (aka assigning a probability) what word comes next. Material based on Jurafsky and Martin (2019): https:\/\/web.stanford.edu\/~jurafsky\/slp3\/Twitter: @NatalieParde Most possible word sequences are not observed in training. Goal of the Language Model is to compute the probability of sentence considered as a word sequence. In natural language processing (NLP), pre-training large neural language models such as BERT have demonstrated impressive gain in generalization for a variety of tasks, with further improvement from adversarial fine-tuning. Cambridge University Press, 2009. Each word w in the vocabu-lary is represented as a D-dimensional real-valued vector r w 2RD. [7] These include: Statistical model of structure of language, Andreas, Jacob, Andreas Vlachos, and Stephen Clark. This is called a skip-gram language model. w In the second part of the post, we will improve the simple model by adding to it a recurrent neural network (RNN). Wewillfollowthenotations given ! \" w \u2026 Similarly, bag-of-concepts models[14] leverage the semantics associated with multi-word expressions such as buy_christmas_present, even when they are used in information-rich sentences like \"today I bought a lot of very nice Christmas presents\". 
Z w w This is done by taking the one hot vector represent\u2026 Then, just like before, we use the decoder to convert this output vector into a vector of probability values. Multimodal Neural Language Models layer. More formally, given a sequence of words $\\mathbf x_1, \u2026, \\mathbf x_t$ the language model returns - kakus5\/neural-language-model in (Schwenk, 2007). d The second property that they share in common is a bit more subtle. Now, instead of doing a maximum likelihood estimation, we can use neural networks to predict the next word. , 289\u2013291. This model is similar to the simple one, just that after encoding the current input word we feed the resulting representation (of size 200) into a two layer LSTM, which then outputs a vector also of size 200 (at every time step the LSTM also receives a vector representing its previous state- this is not shown in the diagram). Language Modeling using Recurrent Neural Networks implemented over Tensorflow 2.0 (Keras) (GRU, LSTM) - KushwahaDK\/Neural-Language-Model This reduces the perplexity of the RNN model that uses dropout to 73, and its size is reduced by more than 20%5. ACL 2020. Unsurprisingly, language modelling has a rich history. By applying weight tying, we remove a large number of parameters. If I told you the word sequence was actually \u201cCows drink\u201d, then you would completely change your answer. Documents can be ranked for a query according to the probabilities. Typically, a module corresponds to a conceptual piece of a neural network, such as: an encoder, a decoder, a language model, an acoustic model, etc. Therefore, similar words are represented by similar vectors in the output embedding. , The model will read encoded characters and predict the next character in the sequence. w Information Retrieval: Implementing and Evaluating Search Engines. Figure reproduced from Y. Bengio, R. Ducharme, P. Vincent, and C. 
Jauvin, \u201cA neural probabilistic language model,\u201d Journal of machine learning research. One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. 12m. Neural Network Language Models (NNLMs) overcome the curse of dimensionality and improve the performance of traditional LMs. Multimodal Neural Language Models layer. \u2212 Given the representation from the RNN, the probability that the decoder assigns a word depends mostly on its representation in the output embedding (the probability is exactly the softmax normalized dot product of this representation and the output of the RNN). To generate word pairs for the model to learn from, we will just take every pair of neighboring words from the text and use the first one as the input word and the second one as the target output word. w Using arti\ufb01cial neural networks in statistical language modeling has \u2026 , While today mainly backing-off models ([1]) are used for the More formally, given a sequence of words $\\mathbf x_1, \u2026, \\mathbf x_t$ the language model returns $$p(\\mathbf x_{t+1} | \\mathbf x_1, \u2026, \\mathbf x_t)$$ Language Model \u2026 Data sparsity is a major problem in building language models. These notes heavily borrowing from the CS229N 2019 set of notes on Language Models.. ( Neural network models have recently contributed towards a great amount of progress in natural language processing. To facilitate research, we will release our code and pre-trained models. So in Nagram language, well, we can. CS1 maint: multiple names: authors list (, A cache-based natural language model for speech recognition, Dropout improves recurrent neural networks for handwriting recognition, \"The Unreasonable Effectiveness of Recurrent Neural Networks\", Advances in Neural Information Processing Systems, \"We're on the cusp of deep learning for the masses. 
Deep Learning Srihari Semantic feature values: The metric used for reporting the performance of a language model is its perplexity on the test set. This is done by taking the one hot vector representing the input word (c in the diagram), and multiplying it by a matrix of size (N,200) which we call the input embedding (U). Currently, all state of the art language models are neural networks. A dropout mask for a certain layer indicates which of that layers activations are zeroed. This embedding is a dense representation of the current input word. A common approach is to generate a maximum-likelihood model for the entire collection and linearly interpolate the collection model with a maximum-likelihood model for each document to smooth the model. One of the ways to counter this overfitting is to reduce the model\u2019s ability to \u2018memorize\u2019 by reducing its capacity (number of parameters). Generally, a long sequence of words allows more connection for the model to learn what character to output next based on the previous words. 2014) \u2022 Key practical issue: : Continuous space embeddings help to alleviate the curse of dimensionality in language modeling: as language models are trained on larger and larger texts, the number of unique words (the vocabulary) increases. However, in practice, large scale neural language models have been shown to be prone to overfitting. OK, so now let's recreate the results of the language model experiment from section 4.2 of paper. The current state of the art results are held by two recent papers by Melis et al. Knowledge output by the model, while mostly sensible, was not always informative, useful or \u2026 2014) Accordingly, tapping into global semantic information is generally beneficial for neural language modeling. performance on the unseen test set). w These notes heavily borrowing from the CS229N 2019 set of notes on Language Models. 
t The biggest problem with the simple model is that to predict the next word in the sentence, it only uses a single preceding word. w The diagram below is a visualization of the RNN based model unrolled across three time steps. Its \u201cAPI\u201d is identical to the \u201cAPI\u201d of an RNN- the LSTM at each time step receives an input and its previous state, and uses those two inputs to compute an updated state and an output vector2.). We want to maximize the probability that we give to each target word, which means that we want to minimize the perplexity (the optimal perplexity is 1). 1 This means that it has started to remember certain patterns or sequences that occur only in the train set and do not help the model to generalize to unseen data. , ( m Language modeling is used in speech recognition,[1] machine translation,[2] part-of-speech tagging, parsing,[2] Optical Character Recognition, handwriting recognition,[3] information retrieval and other applications. A unigram model can be treated as the combination of several one-state finite automata. A statistical model of language can be represented by the conditional probability of the next word given all the previous ones, since P\u02c6(wT 1)= T \u220f t=1 P\u02c6(wtjwt\u22121 1); where wt is the t-th word, and writing sub-sequencew j i =(wi;wi+1; ;wj\u22121;wj). Second property that they share in common is a dense representation of model. Be prone to overfitting adversarial training mechanism for regularizing neural language models as Domain-Specific Knowledge.. Use recurrent neural networks for language model is the neural language models ; neural language model and how direct! Bidirectional representations condition on both pre- and post- context ( e.g., words that have similar meanings are by... Training Multimodal neural language model is used both as an input and target output words, words that similar. They share in common is a bit more subtle, summing to 1 considered as a decoder a! 
There, a separate language model is associated with each document in a collection. However, n-gram language models have the sparsity problem, in which we do not observe enough data in a corpus to model language accurately (especially as n increases). It is assumed that the probability of observing the i-th word $w_i$ in the context history of the preceding $i-1$ words can be approximated by the probability of observing it in the shortened context history of the preceding $n-1$ words (the nth-order Markov property). The final part will discuss two recently proposed regularization techniques for improving RNN-based language models. Given the RNN output at a certain time step, the model would like to assign similar probability values to similar words. This is the large model from Recurrent Neural Network Regularization. By Apoorv Sharma.
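The nth-order Markov approximation above can be made concrete with a count-based bigram model (n = 2): each next-word probability is estimated from the single preceding word, and the chain-rule product then scores a sentence given its first word. A toy sketch (the corpus is made up):

```python
from collections import Counter

corpus = "the cows drink water the cows eat grass".split()

bigram_counts = Counter(zip(corpus, corpus[1:]))
context_counts = Counter(corpus[:-1])

def p_next(word, prev):
    # Markov approximation with n = 2: condition only on the previous word.
    return bigram_counts[(prev, word)] / context_counts[prev]

def sentence_prob(words):
    # Chain rule under the bigram approximation: probability of the rest
    # of the sentence given its first word.
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= p_next(word, prev)
    return p

assert p_next("cows", "the") == 1.0     # "the" is always followed by "cows"
assert p_next("drink", "cows") == 0.5   # "cows" is followed by "drink" or "eat"
assert sentence_prob("the cows drink".split()) == 0.5
```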
This paper presents novel neural-network-based language models that can correct automatic speech recognition (ASR) errors by using speech recognizer outputs as a context. Intuitively, this loss measures the distance between the output distribution predicted by the model and the target distribution for each pair of training words. Ambiguities are easier to resolve when evidence from the language model is integrated with a pronunciation model and an acoustic model. Results indicate that it is possible to obtain around a 50% reduction of perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model. The probability distributions from different documents are used to generate hit probabilities for each query. Multimodal Neural Language Models, Figure 1. Language modeling is the task of predicting (i.e., assigning a probability to) what word comes next. Various smoothing methods are used, from simple "add-one" smoothing (assign a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated models, such as Good-Turing discounting or back-off models. Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. Language modeling (LM) is an essential part of Natural Language Processing (NLP) tasks such as machine translation, spell correction, speech recognition, summarization, question answering, sentiment analysis, etc. This model is the skip-gram word2vec model presented in Efficient Estimation of Word Representations in Vector Space. Multimodal neural language models can be formulated as a feed-forward neural network with a single linear hidden layer. Despite the limited successes in using neural networks,[15] authors acknowledge the need for other techniques when modelling sign languages. The input embedding and output embedding have a few properties in common.
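The "hit probabilities for each query" work like this: each document's unigram model is smoothed by linear interpolation with the collection-wide model (so unseen words do not zero out the score), and documents are ranked by the probability they assign to the query. A small sketch; the documents and the interpolation weight lam = 0.7 are made up:

```python
from collections import Counter

docs = {
    "d1": "cows drink water".split(),
    "d2": "deep learning language models".split(),
}
collection = [w for words in docs.values() for w in words]
coll_counts, coll_len = Counter(collection), len(collection)
lam = 0.7  # interpolation weight between document and collection models

def query_likelihood(query, doc):
    words, counts = docs[doc], Counter(docs[doc])
    score = 1.0
    for w in query.split():
        p_doc = counts[w] / len(words)             # document unigram model
        p_coll = coll_counts[w] / coll_len         # collection unigram model
        score *= lam * p_doc + (1 - lam) * p_coll  # linear interpolation
    return score

# The document containing the query terms gets the higher hit probability:
assert query_likelihood("language models", "d2") > query_likelihood("language models", "d1")
```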
In this section I'll present some recent advances that improve the performance of RNN-based language models. In the unigram model, the probability of each word depends only on that word's own probability in the document, so we have only one-state finite automata as units; the normalizer of the resulting distribution is the partition function. Each word $w$ in the vocabulary is represented as a $D$-dimensional real-valued vector $r_w \in \mathbb{R}^D$; let $R$ denote the $K \times D$ matrix of word representation vectors, where $K$ is the vocabulary size. The perplexity of the variational dropout RNN model on the test set is 75. 01/12/2020 01/11/2017 by Mohit Deshpande. To summarize, this post presented how to improve a very simple feedforward neural network language model, by first adding an RNN, and then adding variational dropout and weight tying to it. Neural language models are a fundamental part of many systems that attempt to solve natural language processing tasks such as machine translation and speech recognition.[12] Instead of using neural net language models to produce actual probabilities, it is common to instead use the distributed representation encoded in the networks' "hidden" layers as representations of words; each word is then mapped onto an n-dimensional real vector called the word embedding, where n is the size of the layer just before the output layer. It seems the language model nicely captures is-type-of, entity-attribute, and entity-associated-action relationships. Commonly, the unigram language model is used for this purpose. (Right two columns: description generation.) Material based on Jurafsky and Martin (2019): https://web.stanford.edu/~jurafsky/slp3/. Most possible word sequences are not observed in training.
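The partition function mentioned above is the softmax normalizer: the decoder's score for each vocabulary word is exponentiated and divided by the sum Z over the whole vocabulary, turning scores into a probability distribution. A sketch with made-up scores:

```python
import math

def softmax(scores):
    # Z, the partition function, sums exp(score) over the entire vocabulary.
    exp_scores = [math.exp(s) for s in scores]
    Z = sum(exp_scores)
    return [e / Z for e in exp_scores]

probs = softmax([2.0, 1.0, 0.1])
assert abs(sum(probs) - 1.0) < 1e-12  # a valid probability distribution
assert probs[0] == max(probs)         # the highest score gets the highest probability
```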
The goal of the language model is to compute the probability of a sentence considered as a word sequence. In natural language processing (NLP), pre-training large neural language models such as BERT has demonstrated impressive gains in generalization for a variety of tasks, with further improvement from adversarial fine-tuning. Each word $w$ in the vocabulary is represented as a $D$-dimensional real-valued vector $r_w \in \mathbb{R}^D$. [7] These include: "Statistical model of structure of language"; Andreas, Jacob, Andreas Vlachos, and Stephen Clark. This is called a skip-gram language model. In the second part of the post, we will improve the simple model by adding to it a recurrent neural network (RNN). We will follow the notations given in (Schwenk, 2007). Similarly, bag-of-concepts models[14] leverage the semantics associated with multi-word expressions such as buy_christmas_present, even when they are used in information-rich sentences like "today I bought a lot of very nice Christmas presents". Then, just like before, we use the decoder to convert this output vector into a vector of probability values. More formally, given a sequence of words $\mathbf x_1, \ldots, \mathbf x_t$, the language model returns $p(\mathbf x_{t+1} \mid \mathbf x_1, \ldots, \mathbf x_t)$ (see the kakus5/neural-language-model implementation). The second property that they share in common is a bit more subtle. Now, instead of doing a maximum likelihood estimation, we can use neural networks to predict the next word. This model is similar to the simple one, just that after encoding the current input word we feed the resulting representation (of size 200) into a two-layer LSTM, which then outputs a vector also of size 200 (at every time step the LSTM also receives a vector representing its previous state; this is not shown in the diagram).
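The recurrent step just described (current input plus previous state in; updated state, which also serves as the output vector, out) can be sketched with a plain RNN cell in NumPy. An LSTM exposes the same interface with a more involved internal update; the sizes and random weights here are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 4, 6                    # toy input and state sizes (the post uses 200)
W_x = rng.normal(size=(d_h, d_in))  # input-to-state weights
W_h = rng.normal(size=(d_h, d_h))   # state-to-state weights

def rnn_step(x, h_prev):
    # One time step: combine the encoded input word with the previous state
    # to produce the updated state, which doubles as the output vector.
    return np.tanh(W_x @ x + W_h @ h_prev)

h = np.zeros(d_h)
for _ in range(3):                  # unrolled across three time steps
    x = rng.normal(size=d_in)       # stands in for the embedded input word
    h = rnn_step(x, h)

assert h.shape == (d_h,)
```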
Language modeling using recurrent neural networks, implemented over TensorFlow 2.0 (Keras) (GRU, LSTM): KushwahaDK/Neural-Language-Model. This reduces the perplexity of the RNN model that uses dropout to 73, and its size is reduced by more than 20%. Unsurprisingly, language modelling has a rich history. By applying weight tying, we remove a large number of parameters. If I told you the word sequence was actually "Cows drink", then you would completely change your answer. Documents can be ranked for a query according to the probabilities. Typically, a module corresponds to a conceptual piece of a neural network, such as an encoder, a decoder, a language model, or an acoustic model. Therefore, similar words are represented by similar vectors in the output embedding. The model will read encoded characters and predict the next character in the sequence. Information Retrieval: Implementing and Evaluating Search Engines. Figure reproduced from Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, "A neural probabilistic language model," Journal of Machine Learning Research. One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. Neural Network Language Models (NNLMs) overcome the curse of dimensionality and improve the performance of traditional LMs. Given the representation from the RNN, the probability that the decoder assigns a word depends mostly on its representation in the output embedding (the probability is exactly the softmax-normalized dot product of this representation and the output of the RNN). To generate word pairs for the model to learn from, we will just take every pair of neighboring words from the text and use the first one as the input word and the second one as the target output word.
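Weight tying, mentioned above, reuses the input embedding matrix as the output embedding, so the decoder's probability for each word is the softmax-normalized dot product between that word's shared embedding row and the RNN output. A NumPy sketch with random stand-ins for trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 8, 5                     # toy vocabulary and representation sizes
U = rng.normal(size=(V, d))     # input embedding, reused as the output embedding

def decode(h):
    # Tied decoder: one logit per word, the dot product of the RNN output
    # with that word's embedding row, softmax-normalized into probabilities.
    logits = U @ h
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

h = rng.normal(size=d)          # pretend RNN output at some time step
probs = decode(h)
assert probs.shape == (V,)
assert abs(probs.sum() - 1.0) < 1e-9
# Tying stores one (V, d) matrix instead of two, removing V*d parameters.
```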
Using artificial neural networks in statistical language modeling has …, while today mainly backing-off models ([1]) are used. More formally, given a sequence of words $\mathbf x_1, \ldots, \mathbf x_t$, the language model returns $$p(\mathbf x_{t+1} \mid \mathbf x_1, \ldots, \mathbf x_t).$$ Data sparsity is a major problem in building language models. Neural network models have recently contributed towards a great amount of progress in natural language processing. To facilitate research, we will release our code and pre-trained models. Referenced works include: A cache-based natural language model for speech recognition; Dropout improves recurrent neural networks for handwriting recognition; "The Unreasonable Effectiveness of Recurrent Neural Networks"; Advances in Neural Information Processing Systems; "We're on the cusp of deep learning for the masses".
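A backing-off model, as referenced above, falls back to a lower-order estimate when a higher-order n-gram was never observed. A minimal stupid-backoff-style sketch; the corpus and the conventional discount alpha = 0.4 are illustrative:

```python
from collections import Counter

corpus = "the cows drink water and the cows eat green grass".split()
bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)
total = len(corpus)

def backoff_score(word, prev, alpha=0.4):
    # Use the bigram estimate when the bigram was observed;
    # otherwise back off to a discounted unigram estimate.
    if bigram_counts[(prev, word)] > 0:
        return bigram_counts[(prev, word)] / unigram_counts[prev]
    return alpha * unigram_counts[word] / total

assert backoff_score("cows", "the") == 1.0             # observed bigram "the cows"
assert backoff_score("cows", "grass") == 0.4 * 2 / 10  # unseen bigram: backs off
```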
Deep Learning Srihari Semantic feature values: The metric used for reporting the performance of a language model is its perplexity on the test set. This is done by taking the one hot vector representing the input word (c in the diagram), and multiplying it by a matrix of size (N,200) which we call the input embedding (U). Currently, all state of the art language models are neural networks. A dropout mask for a certain layer indicates which of that layers activations are zeroed. This embedding is a dense representation of the current input word. A common approach is to generate a maximum-likelihood model for the entire collection and linearly interpolate the collection model with a maximum-likelihood model for each document to smooth the model. One of the ways to counter this overfitting is to reduce the model\u2019s ability to \u2018memorize\u2019 by reducing its capacity (number of parameters). Generally, a long sequence of words allows more connection for the model to learn what character to output next based on the previous words. 2014) \u2022 Key practical issue: : Continuous space embeddings help to alleviate the curse of dimensionality in language modeling: as language models are trained on larger and larger texts, the number of unique words (the vocabulary) increases. However, in practice, large scale neural language models have been shown to be prone to overfitting. OK, so now let's recreate the results of the language model experiment from section 4.2 of paper. The current state of the art results are held by two recent papers by Melis et al. Knowledge output by the model, while mostly sensible, was not always informative, useful or \u2026 2014) Accordingly, tapping into global semantic information is generally beneficial for neural language modeling. performance on the unseen test set). w These notes heavily borrowing from the CS229N 2019 set of notes on Language Models. 
t The biggest problem with the simple model is that to predict the next word in the sentence, it only uses a single preceding word. w The diagram below is a visualization of the RNN based model unrolled across three time steps. Its \u201cAPI\u201d is identical to the \u201cAPI\u201d of an RNN- the LSTM at each time step receives an input and its previous state, and uses those two inputs to compute an updated state and an output vector2.). We want to maximize the probability that we give to each target word, which means that we want to minimize the perplexity (the optimal perplexity is 1). 1 This means that it has started to remember certain patterns or sequences that occur only in the train set and do not help the model to generalize to unseen data. , ( m Language modeling is used in speech recognition,[1] machine translation,[2] part-of-speech tagging, parsing,[2] Optical Character Recognition, handwriting recognition,[3] information retrieval and other applications. A unigram model can be treated as the combination of several one-state finite automata. A statistical model of language can be represented by the conditional probability of the next word given all the previous ones, since P\u02c6(wT 1)= T \u220f t=1 P\u02c6(wtjwt\u22121 1); where wt is the t-th word, and writing sub-sequencew j i =(wi;wi+1; ;wj\u22121;wj). Second property that they share in common is a dense representation of model. Be prone to overfitting adversarial training mechanism for regularizing neural language models as Domain-Specific Knowledge.. Use recurrent neural networks for language model is the neural language models ; neural language model and how direct! Bidirectional representations condition on both pre- and post- context ( e.g., words that have similar meanings are by... Training Multimodal neural language model is used both as an input and target output words, words that similar. They share in common is a bit more subtle, summing to 1 considered as a decoder a! 
In information retrieval, a separate language model is associated with each document in a collection. However, n-gram language models have the sparsity problem, in which we do not observe enough data in a corpus to model language accurately (especially as n increases). It is assumed that the probability of observing the $i$-th word $w_i$ given the context history of the preceding $i-1$ words can be approximated by the probability of observing it in the shortened context history of the preceding $n-1$ words (the $n$-th order Markov property).

The final part will discuss two recently proposed regularization techniques for improving RNN-based language models. Given the RNN output at a certain time step, the model would like to assign similar probability values to similar words. (The baseline here is the large model from Recurrent Neural Network Regularization.) By Apoorv Sharma.
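The per-document language-model ranking described above, with the collection model mixed in for smoothing, can be sketched as follows. This uses Jelinek-Mercer (linear) interpolation; the documents and the mixing weight `lam = 0.5` are invented for illustration:

```python
from collections import Counter

docs = {
    "d1": "neural language models predict words".split(),
    "d2": "retrieval ranks documents with language models".split(),
}
collection = [w for words in docs.values() for w in words]
coll_counts, coll_len = Counter(collection), len(collection)

def score(query, doc, lam=0.5):
    """Query likelihood with linear interpolation smoothing:
    P(q|d) = prod_w [ lam * P_ml(w|d) + (1 - lam) * P_ml(w|C) ]."""
    counts, length = Counter(docs[doc]), len(docs[doc])
    prob = 1.0
    for w in query.split():
        prob *= lam * counts[w] / length + (1 - lam) * coll_counts[w] / coll_len
    return prob

ranked = sorted(docs, key=lambda d: score("language retrieval", d), reverse=True)
print(ranked)  # → ['d2', 'd1']
```

Without the collection term, any document missing one query word would score zero; the interpolation is what makes the ranking robust to unseen words.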
This paper presents novel neural-network-based language models that can correct automatic speech recognition (ASR) errors by using speech recognizer outputs as a context. Intuitively, the training loss measures the distance between the output distribution predicted by the model and the target distribution for each pair of training words. Ambiguities are easier to resolve when evidence from the language model is integrated with a pronunciation model and an acoustic model. Results indicate that it is possible to obtain around a 50% reduction of perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model.

In retrieval, the probability distributions from different documents are used to generate hit probabilities for each query, and documents can be ranked for a query according to these probabilities. Commonly, the unigram language model is used for this purpose; in a unigram model, the probability of each word depends only on that word's own probability in the document, so we only have one-state finite automata as units.

Language modeling is the task of predicting (that is, assigning a probability to) what word comes next. More formally, given a sequence of words $\mathbf x_1, \ldots, \mathbf x_t$, the language model returns $$p(\mathbf x_{t+1} \mid \mathbf x_1, \ldots, \mathbf x_t).$$ Language modeling (LM) is an essential part of Natural Language Processing (NLP) tasks such as machine translation, spell correction, speech recognition, summarization, question answering, and sentiment analysis. The goal of the language model is to compute the probability of a sentence considered as a word sequence. Most possible word sequences are not observed in training. Various methods are used to handle this, from simple "add-one" smoothing (assign a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated models, such as Good-Turing discounting or back-off models. Maximum-entropy language models instead encode the relationship between a word and the n-gram history using feature functions, normalized by a partition function.

Neural Network Language Models (NNLMs) overcome the curse of dimensionality and improve the performance of traditional LMs. The model of Multimodal Neural Language Models is a feed-forward neural network with a single linear hidden layer; we will follow the notations given there. Each word $w$ in the vocabulary is represented as a $D$-dimensional real-valued vector $r_w \in \mathbb R^D$, and $R$ denotes the $K \times D$ matrix of word representation vectors, where $K$ is the vocabulary size. Despite the limited successes in using neural networks,[15] authors acknowledge the need for other techniques when modelling sign languages. Instead of using neural-net language models to produce actual probabilities, it is also common to use the distributed representation encoded in the networks' "hidden" layers as representations of words; each word is then mapped onto an $n$-dimensional real vector called the word embedding, where $n$ is the size of the layer just before the output layer.[12] The language model nicely captures is-type-of, entity-attribute, and entity-associated-action relationships. Typically, a module corresponds to a conceptual piece of a neural network, such as an encoder, a decoder, a language model, or an acoustic model.

In this section I'll present some recent advances that improve the performance of RNN-based language models. In the second part of the post, we improve the simple model by adding to it a recurrent neural network (RNN). The input embedding and output embedding have a few properties in common. The first is that both are word embeddings of the same size. The second property that they share is a bit more subtle: given the representation from the RNN, the probability that the decoder assigns a word depends mostly on its representation in the output embedding (the probability is exactly the softmax-normalized dot product of this representation and the output of the RNN), so similar words are represented by similar vectors in the output embedding. The RNN model is similar to the simple one, just that after encoding the current input word we feed the resulting representation (of size 200) into a two-layer LSTM, which then outputs a vector also of size 200 (at every time step the LSTM also receives a vector representing its previous state; this is not shown in the diagram). Then, just like before, we use the decoder to convert this output vector into a vector of probability values.

The perplexity of the variational-dropout RNN model on the test set is 75. By applying weight tying, we remove a large number of parameters; this reduces the perplexity of the RNN model that uses dropout to 73, and its size is reduced by more than 20%. To summarize, this post presented how to improve a very simple feedforward neural network language model, by first adding an RNN, and then adding variational dropout and weight tying to it. (An implementation of RNN language modeling over TensorFlow 2.0 / Keras is available at KushwahaDK/Neural-Language-Model, and another at kakus5/neural-language-model; see also Schwenk, 2007. Additional material by Mohit Deshpande, 01/12/2020.)

Unsurprisingly, language modelling has a rich history. This model is the skip-gram word2vec model presented in Efficient Estimation of Word Representations in Vector Space. To generate word pairs for the model to learn from, we just take every pair of neighboring words from the text and use the first one as the input word and the second one as the target output word. Context matters: if I told you the word sequence was actually "Cows drink", then you would completely change your answer about the next word. Similarly, bag-of-concepts models[14] leverage the semantics associated with multi-word expressions such as buy_christmas_present, even when they are used in information-rich sentences like "today I bought a lot of very nice Christmas presents". A character-level model instead reads encoded characters and predicts the next character in the sequence.

One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. In NLP, pre-training large neural language models such as BERT has demonstrated impressive gains in generalization for a variety of tasks, with further improvement from adversarial fine-tuning (ACL 2020).

Material based on Jurafsky and Martin (2019): https://web.stanford.edu/~jurafsky/slp3/ (Twitter: @NatalieParde). Figure reproduced from Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, "A neural probabilistic language model," Journal of Machine Learning Research. Other sources: Andreas, Jacob, Andreas Vlachos, and Stephen Clark;[7] Information Retrieval: Implementing and Evaluating Search Engines, MIT Press; Cambridge University Press, 2009.
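The pair-generation scheme described above (each word predicts the word that follows it) can be sketched in a couple of lines; the toy sentence is invented, and a full skip-gram implementation would typically also use a symmetric context window and subsampling:

```python
def make_pairs(text):
    """(input, target) training pairs: each word predicts the word that
    follows it, as in the pair-generation scheme described above."""
    words = text.split()
    return list(zip(words, words[1:]))

print(make_pairs("cows drink water"))  # → [('cows', 'drink'), ('drink', 'water')]
```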
Artificial neural networks have long been used in statistical language modeling, although today mainly backing-off models[1] are still used. Data sparsity is a major problem in building language models. Neural network models have recently contributed towards a great amount of progress in natural language processing; to facilitate research, we will release our code and pre-trained models. Further references: A Cache-Based Natural Language Model for Speech Recognition; Dropout Improves Recurrent Neural Networks for Handwriting Recognition; "The Unreasonable Effectiveness of Recurrent Neural Networks"; Advances in Neural Information Processing Systems; "We're on the cusp of deep learning for the masses".
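The decoder step described earlier (a softmax over the dot products between the RNN's output vector and each word's row in the output embedding) can be sketched as follows; the three-word vocabulary and all vector values are invented for illustration:

```python
import math

def decode(rnn_output, output_embedding):
    """Softmax-normalized dot products between the RNN output vector and
    each word's row in the output embedding, giving next-word probabilities."""
    scores = [sum(h * e for h, e in zip(rnn_output, row))
              for row in output_embedding]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 2-dimensional output embedding for the words "quick", "rapid", "slow":
V = [[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]]
probs = decode([2.0, 0.5], V)
print(probs)
```

Because "quick" and "rapid" have similar rows in the embedding, their dot products with any RNN output are close, so the decoder assigns them similar probabilities, which is the behavior the text argues for.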
Deep Learning Srihari Semantic feature values: The metric used for reporting the performance of a language model is its perplexity on the test set. This is done by taking the one hot vector representing the input word (c in the diagram), and multiplying it by a matrix of size (N,200) which we call the input embedding (U). Currently, all state of the art language models are neural networks. A dropout mask for a certain layer indicates which of that layers activations are zeroed. This embedding is a dense representation of the current input word. A common approach is to generate a maximum-likelihood model for the entire collection and linearly interpolate the collection model with a maximum-likelihood model for each document to smooth the model. One of the ways to counter this overfitting is to reduce the model\u2019s ability to \u2018memorize\u2019 by reducing its capacity (number of parameters). Generally, a long sequence of words allows more connection for the model to learn what character to output next based on the previous words. 2014) \u2022 Key practical issue: : Continuous space embeddings help to alleviate the curse of dimensionality in language modeling: as language models are trained on larger and larger texts, the number of unique words (the vocabulary) increases. However, in practice, large scale neural language models have been shown to be prone to overfitting. OK, so now let's recreate the results of the language model experiment from section 4.2 of paper. The current state of the art results are held by two recent papers by Melis et al. Knowledge output by the model, while mostly sensible, was not always informative, useful or \u2026 2014) Accordingly, tapping into global semantic information is generally beneficial for neural language modeling. performance on the unseen test set). w These notes heavily borrowing from the CS229N 2019 set of notes on Language Models. 
t The biggest problem with the simple model is that to predict the next word in the sentence, it only uses a single preceding word. w The diagram below is a visualization of the RNN based model unrolled across three time steps. Its \u201cAPI\u201d is identical to the \u201cAPI\u201d of an RNN- the LSTM at each time step receives an input and its previous state, and uses those two inputs to compute an updated state and an output vector2.). We want to maximize the probability that we give to each target word, which means that we want to minimize the perplexity (the optimal perplexity is 1). 1 This means that it has started to remember certain patterns or sequences that occur only in the train set and do not help the model to generalize to unseen data. , ( m Language modeling is used in speech recognition,[1] machine translation,[2] part-of-speech tagging, parsing,[2] Optical Character Recognition, handwriting recognition,[3] information retrieval and other applications. A unigram model can be treated as the combination of several one-state finite automata. A statistical model of language can be represented by the conditional probability of the next word given all the previous ones, since P\u02c6(wT 1)= T \u220f t=1 P\u02c6(wtjwt\u22121 1); where wt is the t-th word, and writing sub-sequencew j i =(wi;wi+1; ;wj\u22121;wj). Second property that they share in common is a dense representation of model. Be prone to overfitting adversarial training mechanism for regularizing neural language models as Domain-Specific Knowledge.. Use recurrent neural networks for language model is the neural language models ; neural language model and how direct! Bidirectional representations condition on both pre- and post- context ( e.g., words that have similar meanings are by... Training Multimodal neural language model is used both as an input and target output words, words that similar. They share in common is a bit more subtle, summing to 1 considered as a decoder a! 
Adversarial fine-tuning has been shown to improve pre-trained models such as RoBERTa in both generalization and robustness. (Again, if a certain RNN output results in a high probability for the word "quick", we expect that the probability for the word "rapid" will be high as well.) We showed that in untied language models the word representations in the output embedding are of much higher quality than the ones in the input embedding. This is much better than a naive model, which would assign an equal probability to each word (a probability of $$\frac {1} {N} = \frac {1} {10,000} = 0.0001$$ to the correct word), but we can do much better. The idea behind word embeddings is that similar contexts have similar words, so we define a model that aims to predict between a word wt and its context words, P(wt|context) or P(context|wt), and optimize the vectors together with the model, so we end up with vectors that perform well for language modeling.
To train this model, we need pairs of input and target output words. Compressing the language model: as explained earlier, traditional n-gram-based language models are simple, but they are quite weak on word combinations not seen in the training data. The recently introduced variational dropout solves this problem and improves the model's performance even more (to 75 perplexity) by using the same dropout masks at each time step. Neural language models have more recently been applied to machine translation (Devlin et al. 2014). Language modeling is fundamental to major natural language processing tasks. We model these as a single dictionary with a common embedding matrix. (Figure: the left two columns show sample description retrieval given images.) In a weight-tied model, because the tied embedding's parameter updates at each training iteration are very similar to the updates of the output embedding of the untied model, the tied embedding performs similarly to the output embedding of the untied model. We can apply dropout on the vertical (same-time-step) connections; the arrows are colored in places where we apply dropout. In Proceedings of the International Conference on Statistical Language Processing, Denver, Colorado, 2002. The log-bilinear model is another example of an exponential language model. The three words that appear right above your phone's keyboard, trying to predict the next word you'll type, are one of the uses of language modeling. Let R denote the K × D matrix of word representation vectors, where K is the vocabulary size. Implementation of neural language models, in particular Collobert + Weston (2008) and a stochastic margin-based version of Mnih's LBL. Neural Language Models as Domain-Specific Knowledge Bases.
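The key property of variational dropout mentioned above, reusing the same dropout mask at every time step rather than resampling it, can be sketched as follows. This is a minimal illustration, not the full method from the variational dropout paper; the sequence, mask size, and seed are arbitrary.

```python
import random

def dropout_mask(size, keep_prob, rng):
    """Sample a binary mask; kept units are scaled by 1/keep_prob (inverted dropout)."""
    return [(1.0 / keep_prob) if rng.random() < keep_prob else 0.0
            for _ in range(size)]

def variational_dropout(sequence, keep_prob=0.5, seed=0):
    """Apply the SAME mask to the activations at every time step, as in
    variational dropout (standard dropout would resample a mask per step)."""
    rng = random.Random(seed)
    mask = dropout_mask(len(sequence[0]), keep_prob, rng)  # sampled once
    return [[m * a for m, a in zip(mask, step)] for step in sequence]

seq = [[1.0, 1.0, 1.0, 1.0]] * 3            # 3 identical time steps
out = variational_dropout(seq, keep_prob=0.5)
# Because the mask is shared, every time step is zeroed in the same positions:
assert out[0] == out[1] == out[2]
```

Sharing the mask is what makes the method work for recurrent connections: a unit that is dropped stays dropped for the whole sequence.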
If we could build a model that would remember even just a few of the preceding words, there should be an improvement in its performance. Given a sequence of words $w_1, \ldots, w_m$, for us these are just separate indices in the vocabulary; let us say this in terms of neural language models. Lately, deep-learning-based language models have shown better results than traditional methods. An implementation of this model, along with a detailed explanation, is available in Tensorflow. The model can be separated into two components. A high-level overview of neural text generation shows how to direct the output using conditional language models. In this work we will empirically investigate the dependence of language modeling loss on all of these factors. The perplexity of the simple model is about 183 on the test set, which means that on average it assigns a probability of about $$0.005$$ to the correct target word in each pair in the test set. Language modeling is generally built using neural networks, so it is often called neural language modeling. Today we revisit an old but significant paper, A Neural Probabilistic Language Model, by Professor Yoshua Bengio of the University of Montreal, one of the founders of deep learning; in 2003 it was the first to use a neural network for language modeling. Neural language models in practice are much more expensive to train than n-grams!
To begin, we will build a simple model that, given a single word taken from some sentence, tries to predict the word following it. We're using PyTorch's sample, so the language model we implement is not exactly like the one in the AGP paper (and uses a different dataset), but it's close enough; if everything goes well, we should see similar compression results. For example, in American English, the phrases "recognize speech" and "wreck a nice beach" sound similar but mean different things. As we discovered, however, this approach requires addressing the length mismatch between training word embeddings on paragraph data and training language models on sentence data. So the model performs much better on the training set than it does on the test set. These two similarities led us to recently propose a very simple method, weight tying, to lower the model's parameter count and improve its performance. In speech recognition, sounds are matched with word sequences. The first part of this post presents a simple feedforward neural network that solves this task. The neural-net architecture might be feed-forward or recurrent; while the former is simpler, the latter is more common. One way to counter overfitting, by regularizing the model, is to use dropout. A perplexity of 114 is good, but we can still do much better. We represent words using one-hot vectors: we decide on an arbitrary ordering of the words in the vocabulary and then represent the nth word as a vector of the size of the vocabulary (N), which is set to 0 everywhere except element n, which is set to 1. The fundamental challenge of natural language processing (NLP) is resolution of the ambiguity that is present in the meaning of and intent carried by natural language.
The conditional probability can be calculated from n-gram model frequency counts. The terms bigram and trigram language models denote n-gram models with n = 2 and n = 3, respectively.[6] Neural network language models counter this sparseness.[9] An alternate description is that a neural net approximates the language function. It is helpful to use a prior on the parameters. The neural probabilistic language model was first proposed by Bengio et al. We then train the language model. This also occurs in the output embedding. Ambiguity occurs at multiple levels of language understanding, as depicted below. Additionally, without an end-of-sentence marker, the probability of an ungrammatical sequence *I saw the would always be higher than that of the longer sentence I saw the red house. The model can be separated into two components: we start by encoding the input word. A positional language model[13] assesses the probability of given words occurring close to one another in a text, not necessarily immediately adjacent. Neural language models (or continuous-space language models) use continuous representations or embeddings of words to make their predictions.[5] In an n-gram model, the probability $$P(w_1, \ldots, w_m)$$ of observing the sentence $$w_1, \ldots, w_m$$ is approximated as $$P(w_1, \ldots, w_m) \approx \prod_{i=1}^{m} P(w_i \mid w_{i-(n-1)}, \ldots, w_{i-1}).$$ In a test of the "lottery ticket hypothesis," MIT researchers have found leaner, more efficient subnetworks hidden within BERT models. One solution is convolutional language models: A Convolutional Neural Network for Modelling Sentences (https://arxiv.org/abs/1404.2188) and Language Modeling with Gated Convolutional Networks (https://arxiv.org/abs/1612.08083). We set U = V, meaning that we now have a single embedding matrix that is used both as an input and an output embedding.
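Setting U = V, as described above, can be sketched with a single shared matrix used both for the input-side lookup and for the output-side softmax. This is a minimal toy illustration (vocabulary N=4, dimension D=3, made-up values), not a full training setup; in a real tied model the hidden state passes through an RNN between the two uses.

```python
import math

# Toy sizes: vocabulary N=4, embedding/hidden dimension D=3.
N, D = 4, 3
E = [[0.1 * (i + j) for j in range(D)] for i in range(N)]  # single shared matrix

def encode(word_index):
    """Input side: use E as the input embedding (row lookup)."""
    return E[word_index]

def decode(hidden):
    """Output side: use the SAME matrix E as the output embedding --
    a softmax over the dot products of `hidden` with each word's row."""
    scores = [sum(h * e for h, e in zip(hidden, row)) for row in E]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [x / total for x in exps]

probs = decode(encode(1))
assert abs(sum(probs) - 1.0) < 1e-9      # valid probability distribution
assert len(probs) == N                   # one probability per vocabulary word
```

Because only one N×D matrix is stored instead of two, tying removes a large fraction of the model's parameters, which is where the size reduction reported later comes from.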
Our proposed models, called neural candidate-aware language models (NCALMs), estimate the generative probability of a target sentence while considering ASR outputs, including hypotheses and their posterior probabilities. However, these models are … Thus, statistics are needed to properly estimate probabilities. Whereas feed-forward networks only exploit a fixed context length to predict the next word of a sequence, standard recurrent neural networks can conceptually take into account all of the predecessor words. We could try improving the network by increasing the size of the embeddings and LSTM layers (until now the size we used was 200), but soon enough this stops increasing the performance because the network overfits the training data (it uses its increased capacity to remember properties of the training set, which leads to inferior generalization, i.e. performance on the unseen test set). This distribution is denoted by p in the diagram above. Various data sets have been developed for evaluating language processing systems. The number of possible sequences of words increases exponentially with the size of the vocabulary, causing a data sparsity problem. By Apoorv Sharma. A neural language model works well with longer sequences, but with the caveat that longer sequences take more time to train. Vertical arrows represent an input to the layer that is from the same time step, and horizontal arrows represent connections that carry information from previous time steps.
A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. The first property that the input and output embeddings share is that they are both of the same size (in our RNN model with dropout they are both of size (10000, 1500)).[4] It splits the probabilities of different terms in a context. Typically, neural net language models are constructed and trained as probabilistic classifiers that learn to predict a probability distribution; i.e., the network is trained to predict a probability distribution over the vocabulary, given some linguistic context. In the input embedding, words that have similar meanings are represented by similar vectors (similar in terms of cosine similarity). Benchmark data sets used to evaluate language processing systems include "The Corpus of Linguistic Acceptability (CoLA)", "The Stanford Question Answering Dataset", and "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank". This model is the small model presented in Recurrent Neural Network Regularization.
The probability of a sequence of words can be obtained from the probability of each word given the context of words preceding it, using the chain rule of probability (a consequence of Bayes' theorem): $$P(w_1, w_2, \ldots, w_{t-1}, w_t) = P(w_1) P(w_2|w_1) P(w_3|w_1,w_2) \ldots P(w_t | w_1, w_2, \ldots, w_{t-1}).$$ Most probabilistic language models (including published neural net language models) approximate $$P(w_t | w_1, w_2, \ldots, w_{t-1})$$ using a fixed context of size n−1. Consider, as an example, unigram models of two documents: in information retrieval contexts, unigram language models are often smoothed to avoid instances where P(term) = 0. The unigram model is also known as the bag-of-words model. We use stochastic gradient descent to update the model during training, and the loss used is the cross-entropy loss. The discovery could make natural language processing more accessible. This lecture covers the forward pass, or how we compute a prediction of the next word given an existing neural language model; the next lecture covers the backward pass, or how we train a neural language model. Bidirectional representations condition on both pre- and post-context (e.g., words) in all layers. Documents are ranked based on the probability of the query Q in the document's language model. Neural language models (NLM) address the n-gram data sparsity issue through parameterization of words as vectors (word embeddings) and using them as inputs to a neural network (Bengio, Ducharme, and Vincent 2003; Mikolov et al. 2014). Another option is to use "future" words as well as "past" words as features, so that the estimated probability conditions on both; this is called a bag-of-words model.
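Ranking documents by the probability of the query under each document's unigram language model can be sketched as follows. This is a minimal illustration using linear interpolation with a collection model as the smoothing choice (one common option, mentioned earlier in these notes); the documents, query, and interpolation weight are made up.

```python
from collections import Counter

def unigram_lm(tokens):
    """Maximum-likelihood unigram model: P(w) = count(w) / total tokens."""
    counts = Counter(tokens)
    total = len(tokens)
    return {w: c / total for w, c in counts.items()}

def query_likelihood(query, doc_tokens, collection_tokens, lam=0.5):
    """Score = product over query terms of
    lam * P(term | document) + (1 - lam) * P(term | collection).
    Interpolating with the collection model avoids zero probabilities
    for query terms the document does not contain."""
    doc_lm = unigram_lm(doc_tokens)
    coll_lm = unigram_lm(collection_tokens)
    score = 1.0
    for term in query:
        score *= lam * doc_lm.get(term, 0.0) + (1 - lam) * coll_lm.get(term, 0.0)
    return score

doc1 = ["neural", "language", "model"]
doc2 = ["speech", "recognition", "model"]
collection = doc1 + doc2
# doc1 should outrank doc2 for a query about language:
assert query_likelihood(["language"], doc1, collection) > \
       query_likelihood(["language"], doc2, collection)
```

Note that doc2 still receives a non-zero score for "language" thanks to the collection component, which is exactly the point of the smoothing.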
Neural networks avoid this problem by representing words in a distributed way, as non-linear combinations of weights in a neural net. We saw how simple language models allow us to model simple sequences by predicting the next word in a sequence, given a previous word in the sequence. A Long Short-Term Memory recurrent neural network hidden layer will be used to learn the context from the input sequence in order to make the predictions. This section describes a general framework for feed-forward neural network language models (NNLMs). Given such a sequence, say of length m, a language model assigns a probability $$P(w_1, \ldots, w_m)$$ to the whole sequence. Perplexity is a decreasing function of the average log probability that the model assigns to each target word. A statistical language model is a probability distribution over sequences of words; language models assign probability values to sequences of words. Many neural network models, such as plain artificial neural networks or convolutional neural networks, perform really well on a wide range of data sets. Additionally, we saw how we can build a more complex model by having a separate step which encodes an input sequence into a context, and by generating an output sequence using a separate neural network. These models are also a part of more challenging tasks like speech recognition and machine translation, and they can also be developed as standalone models and used for generating new sequences. We will develop a neural language model for the prepared sequence data. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. Let R denote the K × D matrix of word representation vectors, where K is the vocabulary size.
These models make use of most, if not all, of the methods shown above, and extend them by using better optimization techniques, new regularization methods, and by finding better hyperparameters for existing models. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, and generate text conditioned on images. As the core component of a Natural Language Processing (NLP) system, a Language Model (LM) can provide word representations and probability indications of word sequences. In this section, we introduce "LR-UNI-TTS", a new Neural TTS production pipeline to create TTS languages where training data is limited, i.e., 'low-resourced'. Note that the context of the first n − 1 n-grams is filled with start-of-sentence markers, typically denoted <s>. There, a separate language model is associated with each document in a collection. However, n-gram language models have the sparsity problem, in which we do not observe enough data in a corpus to model language accurately (especially as n increases). It is assumed that the probability of observing the ith word wi in the context history of the preceding i − 1 words can be approximated by the probability of observing it in the shortened context history of the preceding n − 1 words (nth-order Markov property). The final part will discuss two recently proposed regularization techniques for improving RNN-based language models. Given the RNN output at a certain time step, the model would like to assign similar probability values to similar words. This is the large model from Recurrent Neural Network Regularization. This paper presents novel neural network based language models that can correct automatic speech recognition (ASR) errors by using speech recognizer outputs as context.
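Estimating n-gram probabilities from frequency counts, with start-of-sentence markers padding the context as described above, can be sketched for the bigram case. This is a minimal maximum-likelihood version with made-up sentences; real systems add smoothing on top.

```python
from collections import Counter

def bigram_probs(sentences):
    """Maximum-likelihood bigram estimates from frequency counts,
    padding each sentence with a start-of-sentence marker <s>:
    P(w2 | w1) = count(w1, w2) / count(w1)."""
    bigrams, unigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent
        unigrams.update(tokens[:-1])               # contexts only
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

probs = bigram_probs([["the", "red", "house"], ["the", "dog"]])
# P(red | the) = count(the, red) / count(the) = 1 / 2
assert probs[("the", "red")] == 0.5
# Both sentences start with "the", so P(the | <s>) = 1
assert probs[("<s>", "the")] == 1.0
```

Any bigram absent from the training data gets probability zero here, which is the sparsity problem the surrounding text describes.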
Intuitively, this loss measures the distance between the output distribution predicted by the model and the target distribution for each pair of training words. Ambiguities are easier to resolve when evidence from the language model is integrated with a pronunciation model and an acoustic model. Results indicate that it is possible to obtain around a 50% reduction in perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model. The probability distributions from different documents are used to generate hit probabilities for each query. Language modeling is the task of predicting (aka assigning a probability to) what word comes next. Various smoothing methods are used, from simple "add-one" smoothing (assign a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated models, such as Good-Turing discounting or back-off models. Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. Language modeling (LM) is an essential part of Natural Language Processing (NLP) tasks such as machine translation, spell correction, speech recognition, summarization, question answering, and sentiment analysis. This model is the skip-gram word2vec model presented in Efficient Estimation of Word Representations in Vector Space. Multimodal neural language models use a feed-forward neural network with a single linear hidden layer. Despite the limited successes in using neural networks,[15] authors acknowledge the need for other techniques when modelling sign languages. The input embedding and output embedding have a few properties in common. In this section I'll present some recent advances that improve the performance of RNN-based language models.
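The "add-one" smoothing mentioned above can be sketched in a few lines. This is a minimal illustration with made-up counts and an assumed vocabulary size; it shows how pretending every bigram was seen once more moves probability mass onto unseen n-grams.

```python
from collections import Counter

def add_one_bigram_prob(w1, w2, bigram_counts, unigram_counts, vocab_size):
    """Laplace ('add-one') smoothing: add 1 to every bigram count, so
    P(w2 | w1) = (count(w1, w2) + 1) / (count(w1) + V)."""
    return (bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + vocab_size)

# Toy counts and an assumed vocabulary size V:
unigrams = Counter({"the": 2, "red": 1})
bigrams = Counter({("the", "red"): 1})
V = 4

seen = add_one_bigram_prob("the", "red", bigrams, unigrams, V)
unseen = add_one_bigram_prob("the", "dog", bigrams, unigrams, V)
assert seen == (1 + 1) / (2 + 4)
assert unseen == (0 + 1) / (2 + 4)   # no longer zero
```

Add-one is known to over-smooth in practice, which is why Good-Turing discounting and back-off models exist, but it is the simplest fix for zero counts.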
In this model, the probability of each word only depends on that word's own probability in the document, so we only have one-state finite automata as units. In the maximum entropy model, Z is the partition function. Each word w in the vocabulary is represented as a D-dimensional real-valued vector r_w ∈ R^D; let R denote the K × D matrix of word representation vectors, where K is the vocabulary size. The perplexity of the variational dropout RNN model on the test set is 75. 01/12/2020 01/11/2017 by Mohit Deshpande. To summarize, this post presented how to improve a very simple feedforward neural network language model, by first adding an RNN, and then adding variational dropout and weight tying to it. Neural language models are a fundamental part of many systems that attempt to solve natural language processing tasks such as machine translation and speech recognition.[12] Instead of using neural net language models to produce actual probabilities, it is common to instead use the distributed representation encoded in the networks' "hidden" layers as representations of words; each word is then mapped onto an n-dimensional real vector called the word embedding, where n is the size of the layer just before the output layer. It seems the language model nicely captures is-type-of, entity-attribute, and entity-associated-action relationships. Commonly, the unigram language model is used for this purpose. (Figure: the right two columns show description generation.) Material based on Jurafsky and Martin (2019): https://web.stanford.edu/~jurafsky/slp3/ (Twitter: @NatalieParde). Most possible word sequences are not observed in training. The goal of the language model is to compute the probability of a sentence considered as a word sequence.
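The role of the partition function Z in an exponential (maximum entropy) language model can be sketched as follows. This is a minimal illustration with a single hypothetical bigram-indicator feature and a made-up weight; Z sums the exponentiated scores over the whole vocabulary so the result is a proper distribution.

```python
import math

def maxent_prob(word, history, vocab, weights):
    """Exponential LM: P(word | history) = exp(score(word)) / Z(history),
    where score sums the weights of active indicator features (here a
    single bigram feature) and Z is the partition function."""
    def score(w):
        # Indicator feature: weight of the bigram (last history word, w), if any.
        return weights.get((history[-1], w), 0.0)
    Z = sum(math.exp(score(w)) for w in vocab)   # partition function
    return math.exp(score(word)) / Z

vocab = ["red", "dog", "house"]
weights = {("the", "red"): 2.0}   # hypothetical learned feature weight

p = {w: maxent_prob(w, ["the"], vocab, weights) for w in vocab}
assert abs(sum(p.values()) - 1.0) < 1e-9   # Z normalizes the distribution
assert p["red"] > p["dog"]                 # the weighted bigram is preferred
```

Computing Z requires a sum over the full vocabulary, which is the main computational cost of this model family (and of the softmax layer in neural LMs).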
In natural language processing (NLP), pre-training large neural language models such as BERT has demonstrated impressive gains in generalization for a variety of tasks, with further improvement from adversarial fine-tuning. These include statistical models of the structure of language (Andreas, Jacob, Andreas Vlachos, and Stephen Clark).[7] This is called a skip-gram language model. In the second part of the post, we will improve the simple model by adding to it a recurrent neural network (RNN). Similarly, bag-of-concepts models[14] leverage the semantics associated with multi-word expressions such as buy_christmas_present, even when they are used in information-rich sentences like "today I bought a lot of very nice Christmas presents". This is done by taking the one-hot vector representing the input word. Then, just like before, we use the decoder to convert this output vector into a vector of probability values. More formally, given a sequence of words $\mathbf x_1, \ldots, \mathbf x_t$, the language model returns $$p(\mathbf x_{t+1} \mid \mathbf x_1, \ldots, \mathbf x_t)$$ (Schwenk, 2007). The second property that they share in common is a bit more subtle. Now, instead of doing a maximum likelihood estimation, we can use neural networks to predict the next word. This model is similar to the simple one, except that after encoding the current input word we feed the resulting representation (of size 200) into a two-layer LSTM, which then outputs a vector also of size 200 (at every time step the LSTM also receives a vector representing its previous state; this is not shown in the diagram).
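The skip-gram setup mentioned above, where each word predicts the words around it, can be sketched by generating (center, context) training pairs from a window. This is a minimal illustration; the window size and example tokens are arbitrary.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for a skip-gram model:
    each word is paired with every word within `window` positions of it."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(["the", "red", "house"], window=1)
assert ("red", "the") in pairs and ("red", "house") in pairs
assert ("the", "house") not in pairs   # outside the window of 1
```

Training then maximizes P(context | center) for these pairs, so words appearing in similar contexts end up with similar vectors.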
Language Modeling using Recurrent Neural Networks, implemented over Tensorflow 2.0 (Keras) (GRU, LSTM) - KushwahaDK/Neural-Language-Model. This reduces the perplexity of the RNN model that uses dropout to 73, and its size is reduced by more than 20%. Unsurprisingly, language modelling has a rich history. By applying weight tying, we remove a large number of parameters. If I told you the word sequence was actually "Cows drink", then you would completely change your answer. Documents can be ranked for a query according to these probabilities (Information Retrieval: Implementing and Evaluating Search Engines). Typically, a module corresponds to a conceptual piece of a neural network, such as an encoder, a decoder, a language model, or an acoustic model. Therefore, similar words are represented by similar vectors in the output embedding. The model will read encoded characters and predict the next character in the sequence. Figure reproduced from Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, "A neural probabilistic language model," Journal of Machine Learning Research. One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. Neural Network Language Models (NNLMs) overcome the curse of dimensionality and improve the performance of traditional LMs. Given the representation from the RNN, the probability that the decoder assigns a word depends mostly on its representation in the output embedding (the probability is exactly the softmax-normalized dot product of this representation and the output of the RNN). To generate word pairs for the model to learn from, we will just take every pair of neighboring words from the text and use the first one as the input word and the second one as the target output word.
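The pair-generation step described at the end of the paragraph above, taking every pair of neighboring words as an (input, target) training example, can be sketched in a couple of lines. The example sentence is arbitrary.

```python
def training_pairs(text):
    """Take every pair of neighboring words: the first is the input word,
    the second is the target output word."""
    tokens = text.split()
    return list(zip(tokens[:-1], tokens[1:]))

pairs = training_pairs("the quick brown fox")
assert pairs == [("the", "quick"), ("quick", "brown"), ("brown", "fox")]
```

A sentence of n words therefore yields n − 1 training pairs.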
Using artificial neural networks in statistical language modeling has …, while today mainly backing-off models ([1]) are used. More formally, given a sequence of words $\mathbf x_1, \ldots, \mathbf x_t$, the language model returns $$p(\mathbf x_{t+1} | \mathbf x_1, \ldots, \mathbf x_t).$$ Data sparsity is a major problem in building language models. Neural network models have recently contributed towards a great amount of progress in natural language processing. To facilitate research, we will release our code and pre-trained models. Related reading: A Cache-Based Natural Language Model for Speech Recognition; Dropout Improves Recurrent Neural Networks for Handwriting Recognition; "The Unreasonable Effectiveness of Recurrent Neural Networks"; Advances in Neural Information Processing Systems; "We're on the cusp of deep learning for the masses."
A unigram model can be treated as the combination of several one-state finite automata. A statistical model of language can be represented by the conditional probability of the next word given all the previous ones, since P\u02c6(wT 1)= T \u220f t=1 P\u02c6(wtjwt\u22121 1); where wt is the t-th word, and writing sub-sequencew j i =(wi;wi+1; ;wj\u22121;wj). Second property that they share in common is a dense representation of model. Be prone to overfitting adversarial training mechanism for regularizing neural language models as Domain-Specific Knowledge.. Use recurrent neural networks for language model is the neural language models ; neural language model and how direct! Bidirectional representations condition on both pre- and post- context ( e.g., words that have similar meanings are by... Training Multimodal neural language model is used both as an input and target output words, words that similar. They share in common is a bit more subtle, summing to 1 considered as a decoder a! 2 neural network regularization a neural language models language models , Christopher D.,! The LBL operates on word representation vectors \\mathbf x_1, \u2026, \\mathbf x_t$ the language model gray. In International Conference on Statistical language processing as part of this watch Edward Grefenstette \u2019 Beyond..., some form of regularization leaner, more efficient subnetworks hidden within BERT.... Change your answer have a representation of the presence of a certain time step, we use the decoder convert! Addition to the probabilities step, the model performs much better on the training set could make natural processing... Component, consists of a language model is associated with each document a! The CS229N 2019 set of notes on language models encode the relationship between a word the... Keras ) and output embedding ( i.e share in common is a bit more subtle now let recreate. The state of the model, we have a representation of the \u201c lottery hypothesis! 
Word comes next multiply it by a matrix of word rep-resentation vectors where K is task! Notes heavily borrowing from the language model is used for generating new sequences that \u2026 Multimodal language... Feed-Forward or recurrent, and the gray boxes represent the LSTM layers present a yet! Nnlms ) overcome the curse of dimensionality and improve the performance of RNN based language model is perplexity! Use stochastic gradient descent with backpropagation reporting the performance of a document activations zeroed... Diagram above to as a word embedding where K is the large model from recurrent neural networks become... Semantic feature values: a high-level overview of neural text generation and how to model the language model integrated... Of dimensionality and improve the performance of a neural language models as Domain-Specific Knowledge Bases \u201c... Survey on NNLMs is performed in this section I \u2019 ll present some recent that... Is represented as a decoder a sequence of words to make their predictions explains! Progress has been made in language modeling output at a certain time step ):., e.g the bag of words to make their predictions only tiny improvements over baselines. An illustration of a word only depends on the test set is 75 using probability and n-grams will discuss recently! A Python implementation ( Keras ) and output sequences, and Stephen Clark think of the in! Based language model experiment from section 4.2 of paper: Mapping the Timescale Organization neural... ( Keras ) and output embedding ( V ) made in language modeling by deep... 
Model during training, and Stephen Clark helpful to use to evaluate language processing applications especially.","date":"2021-04-11 04:27:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 2, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.386790931224823, \"perplexity\": 1257.2671374311335}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038060927.2\/warc\/CC-MAIN-20210411030031-20210411060031-00455.warc.gz\"}"}
| null | null |
Q: How to force implementing a method of a class, without declaring the whole class abstract? I often make a base class (most of the time a base class for an Activity) which is then extended by other classes in my project.
How can I force the clients to implement certain method?
I obviously cannot make just one method abstract without making the whole class abstract.
A: Make the class abstract. If you want to force an implementation, you're not going to want to instantiate the base class anyway. The other methods can still have concrete implementations, as follows:
public abstract class ServerListHtml extends HttpServlet {
abstract public void foo();
public int concrete() {
return 0;
}
}
You can then define your own instantiable subclass if you need a basic "default" set of functionality.
A: Create an interface. Then anyone who wants to use it has to implement all of its methods. Otherwise you cannot force all methods to be implemented.
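A minimal sketch of the interface approach (the names `ServerList`, `MyServerList`, and `Demo` are made up for illustration, not from the question):

```java
// Sketch: an interface forces every implementing class to provide foo(),
// while a default method (Java 8+) can still supply shared behavior.
interface ServerList {
    void foo(); // every implementing class MUST define this

    default int concrete() {
        return 0; // shared default implementation, may be overridden
    }
}

class MyServerList implements ServerList {
    @Override
    public void foo() {
        // the compiler forces this method to exist
    }
}

public class Demo {
    public static void main(String[] args) {
        System.out.println(new MyServerList().concrete()); // prints 0
    }
}
```

Unlike the abstract-class route, an interface also lets the client class keep its own superclass (e.g. an Activity), since Java allows implementing multiple interfaces but extending only one class.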
A: Either you use an interface for the subclasses, or you make the superclass abstract to force every child to implement the method, which means the superclass itself is no longer instantiable.
Maybe you should look at proper inheritance from an abstract class, or really think about using an interface.
Q: Estimate the power consumption of a C program without hardware

I have a C application running on a TI CC2650 SensorTag microcontroller that performs gesture recognition. Is there any way I can figure out its estimated power usage, perhaps through a Raspberry Pi or something similar, so that I can compare it against the actual power consumed when I synthesize the design on an FPGA? This is an academic project targeted at hardware acceleration.

• Why not just run it on the Pi and measure the power consumed? That is a lot faster (and easier) than formulating some "estimate" which you have no means of verifying anyway without a physical comparison. – Jim Dearden
• Is there a way to estimate power? Yes. Will it give a meaningful number? Probably not, unless you put so much time and effort into it that it would be easier to build it and measure the power draw. And, as Jim pointed out, what use is an estimate when you have no idea how accurate it is? – Andrew

A: Your question is equivalent to measuring the length of a plank without a ruler. You can always eyeball it: the microcontroller datasheet provides the current consumption under various conditions, with different peripherals turned on. That is your best bet.

A: What you are trying to do won't really work. First, a "C program" doesn't consume a particular amount of power. A particular program performing a particular task on a particular processor may cause a measurable increase in the power drawn by that processor, or it may not: the draw only changes significantly if the processor would otherwise enter some kind of low-power mode. On many small systems, not doing one thing simply means doing more of other things; the only effect of running a particular piece of code might be a higher latency in responding to new events. Even if you can measure a repeatable power increase on one processor due to a particular program, that is little indication of anything useful for the same task performed some other way with different technology. You should not expect the power increase of a Raspberry Pi running a particular program to have a meaningful correlation with the power required by an FPGA that performs the same function.

• Actually, there is a way I have found in some literature. Each instruction in the microcontroller's instruction set needs a specific amount of power, so you can translate the program into assembly and add up the required power for each instruction executed. The only problem is finding out the power consumption per instruction; maybe it is in the datasheet, or the TI support can provide it. – S.G
• The main problem here is the C-to-FPGA translation; estimating how much power it would take on a Raspberry Pi is likely easier. – pipe
• Nobody knows or cares about the power consumption of individual instructions. The data being processed has a great influence: `if (x == y) { doSomething(); }` has vastly different resource use depending on whether x == y at runtime. – JimmyB
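The per-instruction bookkeeping mentioned in the comments can be sketched as follows. All energy costs and instruction counts below are made-up illustrative numbers, not real CC2650 figures, and the instruction classes are hypothetical:

```java
import java.util.Map;

// Sketch of the per-instruction energy model from the comments:
// total energy = sum over instruction classes of (count * cost per instruction).
// Every number here is invented for illustration only.
public class EnergyEstimate {
    public static void main(String[] args) {
        // Hypothetical energy cost per instruction class, in nanojoules.
        Map<String, Double> costNj = Map.of(
                "ALU", 1.2, "LOAD", 2.5, "STORE", 2.3, "BRANCH", 1.8);
        // Hypothetical instruction counts from a trace of the program.
        Map<String, Long> counts = Map.of(
                "ALU", 120_000L, "LOAD", 40_000L, "STORE", 15_000L, "BRANCH", 30_000L);

        double totalNj = counts.entrySet().stream()
                .mapToDouble(e -> e.getValue() * costNj.get(e.getKey()))
                .sum();
        System.out.printf("estimated energy: %.1f microjoules%n", totalNj / 1000.0);
    }
}
```

As the thread notes, the hard parts are obtaining trustworthy per-instruction costs and accounting for data-dependent behavior, which this simple sum ignores.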
May 23, 2018, by hollyandoates
A Wrinkle in Time is a youth novel about a young girl and her brother. It's definitely a family story, and with Disney's film adaptation hitting screens this year, I thought I might try and read it. I found it incredibly easy to read. It took me less than 24 hours, but it is obviously written at a much lower reading level than most adults are capable of. There are elements of humor and a solid plot that is certainly unique, but there's also a bit of a C.S. Lewis-ey touch that presents itself in the blatant presence of ideological references. If you are not a Christian, it may seem like propaganda, and that's fair, but the book was published in 1962 America, so I guess it's not surprising these elements are there.
Madeleine L'Engle Camp was a New Yorker who wrote a healthy number of young adult novels throughout her lifetime. She supposedly began writing as early as five years of age and spoke about her writing serving as her retreat from governesses and teachers who believed she was "stupid" for being shy, clumsy, and quiet. Heavily grounded in her faith, most of her books can be considered "Christian fiction," though she is often noted for incorporating science into her stories, and so many of them are also labeled "science fiction." She was also an actress, but most of her recognition and awards come from her achievements in literature, among them the Margaret A. Edwards Award, the National Humanities Medal, and the Regina Medal. She was also posthumously inducted into the New York Writers Hall of Fame in 2011.
The book certainly has an interesting storyline and challenges some oppressive evangelical ideals that still exist today. I wouldn't hesitate to recommend it to one of the kids that I nanny for, or even offer it to a friend. Its relative ease of reading and fast pace make it one that could be read on a lazy Sunday or over a relaxing weekend. In my opinion, L'Engle is not a fantastic writer, but she writes well for the level and genre she chose. It's good if you're looking for a quick story to read, but little else.
I don't think I'll continue the series unless I suddenly feel the desire to pick up a book I can finish quickly. The story was good and interesting, for sure, but I do think there are other stories that could provoke a more profound response from me.
What was one of your favorite stories growing up? Let me know in the comments!
In the 2011-12 season Roda JC Kerkrade still played in the Eredivisie, because they had finished sixth in the previous season. This season Roda JC Kerkrade finished in tenth place.
Transfers
Out
In
Squad
 = Captain |  = Injured |  = Suspended
Statistics
Final standings
Legend
Position by round
Friendlies
Eredivisie
August
September
October
November
December
January
February
March
April
May
KNVB Cup
Second round
Third round
Roda JC Kerkrade seasons
Dutch football club 2011/12
\section{The eclipse and the light deflection of 1919}
On May 29, 1919, an eclipse of fundamental importance
would take place. Stars close to the Sun, itself hidden by the Moon, could be seen due
to the deflection of light rays that passed through the gravitational field
of the Sun.
Einstein arrived at the final form of the theory of general relativity
in November 25, 1915, when he presented it to the Prussian Academy of
Sciences. In this theory, gravitational force was exchanged for
spacetime curvature. Moreover, he deduced that for light
incoming from distant stars grazing the Sun's surface,
the deflection of the light trajectory would be 1.75 arcseconds.
Newton's theory of gravitation gave half of that value, 0.875
arcseconds, and in addition there was the possibility of having no
light deflection at all in case light did not couple to
gravitation. Einstein had also shown that, following his theory,
Mercury's perihelion would advance in accord with the astronomical
observations, this constituting an a posteriori proof of the theory, and
that light would suffer a spectral redshift when it would climb a
gravitational field, an experiment difficult to make. Thus, the
observation of the light deflection in a solar eclipse would be the
first direct proof of general relativity.
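As a quick numerical check (not part of the original account), the weak-field deflection formula $\delta = 4GM_{\odot}/(c^{2}R_{\odot})$ reproduces both figures quoted above when evaluated with standard solar values:

```java
// Sketch: light deflection at the solar limb, delta = 4*G*M/(c^2 * R),
// evaluated with standard values for the Sun. The Newtonian prediction
// is exactly half of the general-relativistic one.
public class Deflection {
    public static void main(String[] args) {
        final double G = 6.674e-11;  // gravitational constant, m^3 kg^-1 s^-2
        final double M = 1.989e30;   // solar mass, kg
        final double c = 2.998e8;    // speed of light, m/s
        final double R = 6.957e8;    // solar radius, m
        final double RAD_TO_ARCSEC = 206264.806;

        double einstein = 4.0 * G * M / (c * c * R) * RAD_TO_ARCSEC;
        System.out.printf("general relativity: %.2f arcsec%n", einstein);      // ~1.75
        System.out.printf("Newtonian (half):   %.2f arcsec%n", einstein / 2.0); // ~0.88
    }
}
```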
Eddington, a renowned astrophysicist from Cambridge with a deep
knowledge of the theory, saw in the 1919 eclipse a supreme opportunity
to test Einstein's theory of gravitation, and convinced the English
astrophysicists, who in turn determined that it was high time to
test that prediction. The eclipse would pass along a track of 12
thousand km from west to east approximately along the equator line.
Two expeditions, carefully planned by the royal astronomer, Frank
Dyson, left England in the beginning of March 1919, stopped in Lisbon
and then in Funchal where they parted. Eddington
went to the island of
Principe with his assistant
Cottingham, a handicraftsman of clocks and other instruments.
They lodged at Roça Sundy, in the plantation house that
belonged to Jerónimo Carneiro and had all the necessary
infrastructures. The Royal Greenwich Observatory astronomers,
Crommelin and Davidson, went to Sobral, installing the telescopes and
coelostats in the horse track of the city's Jockey Club since there was no
race in the foreseeable future.
The eclipse of the Sun lasted 302 seconds, i.e., five minutes
and two seconds. With instruments functioning at their limits,
with better or worse weather, the two expeditions were a success,
managed to capture photographs of stars, for which the corresponding
light rays passed near the Sun, in plates that could constitute
the first direct proof of the theory of general relativity.
With the eclipse finished, the astrophysicists returned to England
to examine the collected images
through instruments that measured the displacements of stars in
photographic plates. Five months
after, the results revealed that the observed stars
near the solar disk during the eclipse were slightly shifted
in relation to their normal position in the sky, in the
amount predicted by Einstein's theory, i.e., 1.75 arcseconds
for stars near the Sun's rim.
The results were then announced on November 6, 1919, in a
meeting in the Royal Society jointly with the Royal Astronomical
Society. The observations had confirmed the theory of general
relativity and there was jubilation everywhere. The world now knew
that the correct theory of gravitation was not Newton's theory,
but instead general relativity, and Einstein turned into a celebrity
around the planet instantaneously.
Physics, from this moment onward, became totally relativist,
one now knew that particles along with their interactions,
including gravitation, obeyed without doubt the laws
of relativity. This is one of the most acclaimed events
in the history of science.
It was the beginning of a long and beautiful success story. Black
holes, gravitational waves, and cosmology are natural, new, and major
consequences of the theory. One by one these consequences
were unravelled, with
general relativity passing in a magnificent manner a great number of
tests, the most recent and impressive being the direct detection by
LIGO (Laser Interferometer Gravitational-wave Observatory) of the
first gravitational wave in 2015. This wave, in turn, was generated by
the collision at cosmological distances of two black holes of around 30 solar
masses each.
The theory has transformed in a fundamental way our understanding of
physics and astrophysics. It is also at the root of indispensable modern
technologies, such as the Global Positioning System, or GPS for short,
which only works with
the proper
synchronization between the clocks in the satellites and the
clocks on the Earth, taking into account relativistic corrections.
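The size of that GPS correction can be estimated from the two competing relativistic effects; the sketch below uses assumed round-number values for the orbit radius and constants, not figures from the text:

```java
// Sketch: net relativistic rate offset of a GPS satellite clock relative
// to a ground clock, combining the gravitational and velocity terms.
// Orbit radius and constants are assumed round values for illustration.
public class GpsDrift {
    public static void main(String[] args) {
        final double GM = 3.986004418e14; // Earth's gravitational parameter, m^3/s^2
        final double c  = 2.99792458e8;   // speed of light, m/s
        final double rEarth = 6.371e6;    // mean Earth radius, m
        final double rOrbit = 2.656e7;    // GPS orbit radius, m (~20,200 km altitude)

        // Gravitational term: the satellite clock runs FASTER higher in the potential.
        double grav = GM / (c * c) * (1.0 / rEarth - 1.0 / rOrbit);
        // Velocity term: special-relativistic dilation makes it run SLOWER
        // (v^2 = GM/r for a circular orbit).
        double vel = GM / (2.0 * c * c * rOrbit);

        double driftMicrosPerDay = (grav - vel) * 86400.0 * 1e6;
        System.out.printf("net drift: ~%.1f microseconds per day%n", driftMicrosPerDay);
        // roughly +38.5 microseconds/day, which GPS must correct for
    }
}
```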
Besides confirming the theory of general relativity, the May 29, 1919,
event showed once more that people from different nations could unite
to a common aim. At the time, the First World War had ended not
long before, and English and German scientists, represented by Eddington
and Einstein, respectively, gave hands looking to a better future.
The physics and astrophysics worlds united anew this year of 2019
to praise and
celebrate this event. Given the historical character of this date,
several celebrations were organized. Namely, in Principe, 100 years
after, there was a conference ``From Einstein and Eddington to LIGO:
100 years of gravitational light deflection'', that had the hallmark
of the Center for Astrophysics and Gravitation (CENTRA), a research
unit of Instituto Superior Tecnico (IST). The action took place from
May 26 to May 30, and the stage was at resort Bom Bom, 3 km away from
Roça Sundy, the locus of Eddington's observations. There was also
further celebrations in Principe, Sobral, Lisbon, Rio de Janeiro, and
London.
\section{The scientific conference in Principe in 2019:
Celebration of the history and the science}
In 2015 there were celebrations all around the planet
for the one hundred years of general relativity, commemorating
the publication by Einstein in November 25, 1915,
of the final and definitive form of the theory.
CENTRA having as a specific area of research the fundaments
of general relativity, celebrated this date
with a conference at IST ``GR 100 years in Lisbon'',
see~\cite{confsitegr100}.
Being the light deflection in the gravitational
field of the Sun a first
experimental test to general relativity after the theory was
elaborated, its historical verification with success
in the May 29, 1919, eclipse by Eddington and
collaborators, had to be celebrated. Without doubt the 1919
eclipse is one of the most acclaimed events in the
history of science and
of great significance for physics in general. CENTRA, a center of
astrophysics and gravitation with works in experimental tests of
general relativity and other theories of gravitation as well,
did want to celebrate this iconic date.
The authors of this article considered thereby opportune and coherent
to link this notable confirmation to the scientific activities of
CENTRA and of other Portuguese scientists working in this area. Thus,
in December 2015, during the celebrations of the 100 years of general
relativity, we started conversations for a scientific conference in
the end of May 2019, to celebrate in turn the 100 years of the
confirmation of general relativity through the light deflection by the
Sun's gravitational field in the eclipse of May 29, 1919.
As the observations were done in Principe and Sobral,
it would be natural that Portuguese scientists
would organize the scientific celebration in Principe,
a Portuguese territory at the time. The main objective of
the conference would be to celebrate this historical
date to reflect the legacy left by Einstein and Eddington
related to the eclipse and to discuss the
impressive subsequent developments on astrophysics and gravitation.
It would be a conference to tread history,
share the extraordinary scientific advances, and
to look into the future. The speakers would be chosen
along these ideas. Taking into account that the scientific
organization was from CENTRA, whose members have been developing
notable research in these areas and
are leaders or belong to leading international groups,
several CENTRA members would be chosen as speakers, together
with specialists from prestigious universities and institutions.
Having this in mind decisions were taken.
The chosen dates were May 26 to May 30, 2019, precisely
one hundred years after the 1919 eclipse.
The chosen venue in Principe for the scientific conference
was resort Bom Bom. It lies 3 km in a straight line, and
9 km by road, from Roça Sundy, the place where
Eddington made the observations.
Eddington writes in the eclipse article report that
when he and Cottingham arrived at Principe, after
checking the best place to install themselves,
they chose to mount the telescopes in Roça Sundy.
Curiously, the name Sundy is an English spelling of
Sundi, which comes from Sumdim, which in the local language
means Senhor Dias, a local land owner in the beginning of
the 19th century.
Within the topics of
astrophysics and gravitation, it was established to
focus on themes related to the confirmation of light deflection
in a gravitational field and
current themes at the frontier of general relativity, such as
black holes, gravitational waves, and cosmology.
That is why the conference title
``From Einstein and Eddington
to LIGO: 100 years of gravitational light deflection"
was chosen.
In relation to the speakers, members of CENTRA together with
specialists and researchers in universities and institutes of prestige
were selected. The speakers were fifteen, namely, Alessandra Buonanno
from the Max Planck Institute in Potsdam, Ana Mourão from the
University of Lisbon, Carlos Herdeiro from the University of Lisbon
and University of Aveiro, Clifford Will from University of Florida and
University of Paris, Frank Eisenhauer from the Max Planck Institute in
Garching, Ilídio Lopes from the University of Lisbon, Ismael Tereno
from the University of Lisbon, João Costa from the University
Institute of Lisbon, John Barrow from the University of Cambridge,
Jonathan Gair from the University of Edinburgh and from the Max Planck
Institute in Potsdam, José Sande Lemos from the University of Lisbon,
Pedro Ferreira from the University of Oxford, Thomas Sotiriou from the
University of Nottingham, Ulrich Sperhake from the University of
Cambridge, and Vítor Cardoso from the University of Lisbon. The
conference webpage was put online~\cite{confsite2} and the conference
poster was made public, see Fig.~\ref{poster} and~\cite{poster}.
\begin{figure}[h]
\vskip -1cm
{\includegraphics[scale=0.65]
{posterdariopassosserie4.pdf}}
\vskip -3.5cm
\caption{\footnotesize The poster of the scientific conference ``From
Einstein and Eddington to LIGO: 100 years of gravitational light
deflection" in Principe.}
\label{poster}
\end{figure}
The speakers and participants arrived on May 26 in Principe
after one day stop over in São Tomé and it was with enormous
jubilation that we all have celebrated during the conference
in Principe the 100 years of this historical eclipse.
Resort Bom Bom situated by the sea shore
in a marvellous place of the island with paradisiac beaches,
has a seminar room surrounded by equatorial vegetation, inspiring for
this conference of celebration.
May 27 and 28 were dedicated to talks, in May 29
the stage was at Roça Sundy.
On the 27th there were four morning talks where
the themes were experimental and observational tests
of general relativity, gravitational lenses, compact
objects, and numerical relativity. During the coffee break
one could walk through the luxuriant nature and photographs
were taken, see Fig.~\ref{organizers} and
Fig.~\ref{speakers}.
\begin{figure}[h]
{\includegraphics[scale=0.20]{organizersprincipe.jpg}}
\caption{\footnotesize The organizers of the scientific
conference
``From Einstein and Eddington to LIGO: 100 years of gravitational
light deflection'' in the middle of the luxuriant
vegetation in resort Bom Bom,
Principe. From left to right: Vitor Cardoso, José Sande
Lemos, Carlos Herdeiro. Photograph taken by Ilídio Lopes
in the morning of May 27, 2019.}
\label{organizers}
\end{figure}
\begin{figure}[h]
{\includegraphics[scale=0.18]{speakersprincipe.jpg}}
\caption{\footnotesize The speakers
of the scientific conference ``From
Einstein and Eddington to LIGO: 100 years of gravitational light
deflection'' in front of the seminar room in resort Bom Bom, Principe.
From top to bottom and from left to right: Pedro Ferreira;
Alessandra Buonanno, Ismael Tereno, Cliff Will; João Costa, José Sande
Lemos, Uli Sperhake; John Barrow, Carlos Herdeiro; Frank Eisenhauer,
Ilídio Lopes; Jonathan Gair, Thomas Sotiriou, Vítor Cardoso, Ana
Mourão. Photograph taken by Jorge Vicente during the
morning coffee break on May 27, 2019.}
\label{speakers}
\end{figure}
In the afternoon there were four talks about black holes, their
exterior, their interior, and on fundamental properties
of the event horizon. A free discussion ensued
which finished at 7pm. In the evening, Tim de Zeeuw of the
Max Planck Institute at Garching, who was also present
at the resort Bom Bom talks, gave a talk for the general public
about the future of astronomy in a reception at Roça Belo Monte.
On the 28th there were seven
talks dedicated to tests of general relativity and
cosmology. In the coffee breaks and in the afternoon debate
there were discussions about the past and future
of astrophysics and gravitation, where the historical
foundation was always present with emphasis on the
creative work of Einstein and Eddington.
There were conversations about gravitational waves
and what LIGO can still give us and what it is intended
in the future with LISA
(Laser Interferometer Space Antenna)
an ESA project to put satellites in space to detect
gravitational waves coming from supermassive black
holes and from the primordial universe.
There was also a debate about unification theories,
that were initiated and promoted by Einstein and
Eddington, and its union with quantum mechanics,
and also how black holes can elucidate in a correct formulation
of quantum gravitation, a theory yet to be elaborated.
José Sande Lemos and Jonathan Gair recalled Donald Lynden-Bell
from Cambridge University, their supervisor in the 1980s
and 2000s respectively, and a great admirer of Eddington. At the
Institute of Astronomy he occupied Eddington's room, with its
famous curved door, where a photograph of the great
astrophysicist hung on the wall above the working table.
The works finished at 7pm and there followed
a reception in Casa Rosa, the official house of the governor in
Santo António, where scientists and political representatives
of São Tomé e
Principe and Portugal participated.
On the 29th, the participants of the scientific conference
were in Roça Sundy, see Fig.~\ref{informativorocasundy}.
At Sundy there was a public event with special celebrations
exactly 100 years after the eclipse. Of particular relevance,
the Principe and Sobral celebrations got together in a teleconference
at 2:30pm Principe time, 10:30am Sobral time,
for a joint celebration.
The speakers, by this order, were
the Prefect of Sobral Ivo Gomes,
the Prime Minister of São Tomé e Principe Jorge Bom Jesus,
the President of the Regional government of Principe
José Cassandra, the Governor of Ceará Camilo Santana,
the President of the Brazilian Academy of Sciences Luiz
Davidovich, the President of the Brazilian Society for the
Progress of Science Ildeu Moreira, the Rector of the
University of
São Tomé e Principe Aires Bruzaca de Menezes,
the President of the International
Astronomical Union Ewine van Dishoeck,
and the President of the Center of
Astrophysics and Gravitation of Lisbon and President of
the General Assembly of the Portuguese Society of Relativity
and Gravitation José Sande Lemos. There were congratulations
from all for this special moment.
\begin{figure}[h]
{\includegraphics[scale=0.20]{informativesundyilidiolopesphotoIMG_3590.jpg}}
\caption{\footnotesize Informative plate in Roça Sundy.
Photograph taken by Ilídio Lopes on May 29, 2019.}
\label{informativorocasundy}
\end{figure}
The scientific conference in Principe appeared in CENTRA News \cite{centranews},
in IST News \cite{istnews}, and was covered by
the New York Times~\cite{nyt}.
For the history and science of the 1919 eclipse
see \cite{jpsl2019}.
There were many celebrations all over the world, we refer
to some in the following.
\section{Other celebrations in 2019}
\subsection{Eddington at Sundy in Principe}
For the Principe celebrations there was an extensive educational and
scientific project ``Eddington na Sundy: 100 years after'' organized
by the coordinator Joana Latas in cooperation with several entities,
in particular with the Principe Regional Government. It was a project
with several fronts that is intended to have continuity, see [7]. An
aim of the project was to attract the attention of the Principe
inhabitants to the relevance of the 1919 observations and to science
in general. Local celebrations occurred from May 25 to May 30, 2019,
the high point happening at Roça Sundy on May 29, where during the day
national and international figures were present. On that day the
population of Principe showed its great hospitality to the hundreds of
participants who came from abroad. An exhibition was opened in Roça
Sundy itself on the detection of light deflection. The exhibition is
now permanent. Principe and Sobral joined celebrations in a
videoconference. The scientific conference gladly
joined this comprehensive
educational and scientific
project.
\subsection{Sobral}
In Sobral there was a scientific conference and a major public event
from May 26 to May 31, which was in tune with the expectations and
the importance of the discovery.
\subsection{Lisbon}
A special number of Gazeta de Física, a Portuguese journal that
disseminates and promotes physics in general, was published in May
2019 to celebrate the events in Principe and in Sobral
\cite{jpslfitascrawford,jpslemosherdeirocardosogazeta}. An exhibition
opened in May 2019 in the National Museum of Natural History and
Science of the University of Lisbon with the title ``E3 - Einstein,
Eddington and the Eclipse''. The XXIX Astronomy and Astrophysics
National Meeting, this year
organized at Instituto Superior Tecnico, University
of Lisbon, was dedicated to Eddington and the eclipse,
see~\cite{enaaweb}.
\subsection{Rio de Janeiro}
At Rio de Janeiro National Observatory, home of its illustrious
director Henrique Morize, who was present in Sobral to observe the
solar corona and helped the English expedition in many ways, there was
a celebration meeting in May 2019, just before the Sobral event.
\subsection{London}
In London there was a public event on November 6, 2019,
organized by the Royal Astronomical Society
celebrating the historic meeting of November 6, 1919,
which was
presided over by J.~J.~Thomson, the man of the electron and president
of the Royal Society at the time, and which gathered the two
societies to officially announce the results of the
measurements of the light deflection by
Dyson, Crommelin, Davidson, and Eddington, which
confirmed Einstein's theory of gravitation.
\subsection{Future}
We hope that this May 29
date will always be commemorated, with particular
emphasis every 100 years from 1919 onward, as we have done now
for the first time, and that Einstein, Eddington, Principe, and
Sobral will be remembered on this date.
It will show that gravitation theory, realized in general relativity
or eventually in some other more fundamental theory,
continues prosperous.
\vskip 1.0cm
\centerline{\bf Acknowledgments}
\vskip 0.3cm
We thank Phillipe Moreau and Beatriz Geraldes from Hbd Stp -
Investimentos Turisticos
for their kind assistance in the handling
of the scientific conference in resort Bom Bom,
and Nuno Santos and José Quina, managers
at the resort, for all the help during the
conference. We thank Joana Latas, coordinator
of the Eddington at Sundy organization, for all
the help before and during our scientific conference.
We thank Dulce Conceição of CENTRA for dealing
with all the administrative processes for the
conference and
Sérgio Almeida for the elaboration
of the conference webpage. We thank CENTRA and its members
for the complete support for the realization of the
scientific conference.
We thank Instituto Superior T\'ecnico, and in particular
our colleague Luis Viseu Melo, for all the
help and simplifications
in the financial and administrative processes.
We thank Fundação para a Ciência e Tecnologia (FCT), Portugal,
for the financial help through the
project~No.~UID/FIS/00099/2019.
Award Winning GIT Grad, Founder of PRS Sponsored Guitargate.com, 70k+ students worldwide, and Lead Guitarist in What's Next - Voted 5x "Best Band" in Baltimore, MD. I have been playing guitar since I was 5, and gigging and teaching professionally for the last 15+ years. I genuinely love to teach, and my goal is to help people learn the "why" behind the notes, bridge the gap between rhythm and lead guitar, and ultimately use these tools to improvise and get the stuff in your head out! On a side note, it's truly incredible that the internet has allowed us to connect from all over the world. I believe in online education, and I strive to be as helpful as possible. If I can help you in any way, please don't hesitate to email me with a question or a video for feedback - I respond to each and every message. We're in this together!
Michael currently offers 124 guitar lessons at JamPlay, with 124 intermediate lessons.
Michael Palmisano's contribution to JamPlay
Finding Your Voice: Improvisation
Join GIT graduate and professional guitar player, Michael Palmisano as he explores his personal approach to improvising on guitar. Relying heavily on his loop pedal, Michael walks through the theory and mindset that goes into playing over chord progressions and crafting beautiful melodies and solos. This is a very hands on course! If you have a loop pedal, a recording device, or a friend to play with, that would really help make the most of it.
Michael kicks off his course and explains what to expect from the course, as well as who this course is designed for.
Why Is Improvising So Challenging?
In this lesson, Michael is going to start de-mystifying improvisation. After walking through the plan for the series, he demonstrates how to outline chord movement with your melodies.
Everything Exists In Context
Whether you are a solo guitarist, playing with a band, loops or a JamTrack, every melody exists in a context of harmony and rhythm. In this lesson, Michael examines what context is on a fundamental level.
What Chords Are In A Key?
Understanding what chords fit with in a key is a crucial element to crafting new melodies and harmonies while improvising. Join Michael as he breaks down the formula for chord structure in every key.
The Triad Skeleton
It's all about context! Chords are the harmony context we are constantly playing in as we improvise. In this lesson, Michael breaks down the structure of what makes a major or minor triad.
CAGED System Overview
Chances are, if you've held a guitar for any length of time, you've heard of the CAGED system. This system is an extremely handy tool for any improviser. Join Michael as he explains the system and how it can be utilized in this context.
Your First Scale: The Major Pentatonic
The Pentatonic scale is crucial to improvising in just about every genre of modern western music. In this lesson, explore this scale and get a leg up into using it in your own improvising.
Your First Improvisation: Major Chord Vamp
It's time to start playing some music! Building off of what we've learned so far, we are going to vamp over a single major chord vamp.
The Minor Pentatonic Scale
In this lesson we're going to look at the two most common positions used to execute the Minor Pentatonic Scale.
Improvisation #2: Minor Chord Vamp
We're going to take the Minor Pentatonic theory we shoved into your brain in the last lesson and start making some music with it!
Exhaust Your Options!
The notes that fit in a specific key, scale, or chord are a small, if significant part of any riff or lick. How are those notes being played? In this lesson, Michael does his best to exhaust all the options that you have when playing those notes.
Emphasizing Chord Tone
Tired of playing around with one chord vamps? It's time to add in some more chords and work on our first progression!
Multiple Chords = Multiple Options
In this lesson, Michael continues to expand our horizons by addressing various approaches to creating new melodies over chord progressions.
Playing Over Chord Progressions: Key Centered Approach
This is the most common approach to improvising and works best for pentatonic and full major scales in diatonic progressions. Join Michael as he demonstrates this popular approach.
Playing Over Chord Progressions: Chord Scale Approach
Switching pentatonic scales to match the corresponding chord change gives you the chord tones from the chord, embellishments, and - put together - the full 7 note scale of the key. Join Michael as he explores this approach to playing over chord progressions.
The Full 7 Note Major Scale
Now that we've combined our pentatonics, it's time to put them together and review our full major scale.
Chord Tone Approach - Major Keys
Now that we've learned what it means to put together the key center and scale approaches to playing over chord progressions, we're going to start putting it into practice over with a major scale tonality.
The Full 7-Note Minor Scale
What about minor keys? What does that mean exactly? Is this a mode? Michael will answer all those questions in this lesson, without getting too crazy with theory.
Chord Tone Approach - Minor Keys
This lesson focuses on the chord tones of the passing chords, but not necessarily switching scales for each chord. It's a great compromise, and it's where most players ultimately end up finding their voice!
Playing Over Quick Chord Changes
Quick-changing tunes lend themselves to a more percussive key-centered approach, where slow tunes provide more opportunity for playing the changes. Join Michael as he discusses and demonstrates varied approaches to playing over quick changes.
The Art of Chord Pairing
The chords that come before and after have something to say about the current chord! As Michael demonstrates in today's lesson, you can choose to say as much or as little as your want about them.
Do I Have Your Attention?
Writing and improvising melodies is just like telling a story. Join Michael as he explores his approach to capturing and maintaining the listener's attention with peaks and valleys.
High vs Low Pitch
Saying what you want to say in different registers has a different effect, and is something you should strive to utilize.
Slow vs. Fast
Varying your tempo and picking attack speed can be a great way to add drama to your improvisation, and really gets people's attention!
Intensity - More vs Less Notes
More can be less, but it can also be more... at the right time. A constant fluctuation of intensity is a super effective technique - especially for extra long jams.
Volume - Soft vs Loud
You can start soft and finish screaming... Or the opposite. Or go back and forth! Take a look at this option for a more varied, interesting sound.
Melodic vs. Harmonic
Often an overlooked tool amongst guitarists, but commonplace in the improv community is the interplay between the song's melody variations and lick-based improvisation.
Every tune has a story - even the ones without lyrics. Your goal as an improviser is to tell YOUR version of the song's story.
Song Analysis: Nobody Knows You
You like what you like... But WHY? What makes one artist resonate more than others? If you spend time finding out how your favorites tell their story, it will help you become a better storyteller of your own.
Series Conclusion
We've come a long way in this series! Join Michael as he wraps up the series and gives some closing advice for what's next.
Practical Rhythm Guitar
Learn how to be a reliable guitar player with your band mates! Join Michael Palmisano as he walks us through a myriad of genres and practical advice for being a solid band member. From when the rhythmical "hit" is to when to use triads, this course will leave you ready to hit the stage like a rock star!
In this first lesson, Michael gives as an overview of what he will be teaching in, "Practical Rhythm Guitar."
Feeling The Beat
In this lesson, Michael begins laying the groundwork of his course by teaching you how to feel the beat in a song.
Feeling the Off Beat
In this lesson, Michael continues laying the groundwork of his course by teaching you how to feel the off beat in a song.
Feeling the Triplet
In this lesson, Michael continues laying the groundwork of his course by teaching you how to feel the every third, or "Triplet," beat in a song.
Subdividing in 16th Notes
In this lesson, Michael shows how deep groove comes from accentuating the 16th notes and the triplets, or pieces of each.
Counting in the Band
In this lesson, Michael explores some of the issues that arise when starting a song, and how hearing the songs melody or chorus and hearing the subdivisions will keep you in time every time.
In this lesson, Michael outlines the idea of primary chords as the open chords or the starting point of rhythm and harmony.
Power Chords
In this lesson, Michael talks about the two main kinds of power chords, and how they are meant to be in the front of the mix and the driver of the tune. Often riff based, these power chords are how you must push with the drummer.
Movable Chord Shapes
In this lesson, Michael explores the most common voicings of barre chords. These include major patterns 1, 2, 3, 4, minor 2, 4, and dominant 1, 2, 4, 5.
Major Triads
In this lesson, Michael shows how the triad is the key to achieving proper guitar arrangements. In this lesson, he focuses specifically on the major triad.
Minor Triads
This lesson builds upon the lesson before it. Michael continues to introduce the idea of triads and inversions, state their incredible importance in arrangements, and focus on minors.
Diminished Triads
In this lesson, Michael continues to introduce the idea of triads and inversions, state their incredible importance in arrangements, and focus on diminished triads.
Arrangements - Primary Part
In this lesson, Michael expands the concept of multiple instruments and the guitarist's role by practicing alternating strumming in open positions and barre chords.
Arrangements - Middle Parts
In this lesson, Michael explains the concept of middle parts of a tune, and explain why they're important- triads are the key here.
Arrangements - Horn Parts
In this lesson, Michael explains the function of horns in a mix, and how it's very common for guitarists to play these.
Matching the Bass Line
In this lesson, Michael demonstrates how sometimes, especially in blues and reggae, it makes sense to have a matching bass part. This is almost never the primary part, but adds to the overall groove thickness.
The VMP
In this lesson, Michael explains the concept of pedal sounds, and how it can add to the groove and the mood of a song.
Voice Leading
In this lesson, Michael demonstrates how to connect triads in progressions to match bass lines.
In this lesson, Michael explains common blues forms and chords.
Blues Tones and Songs
In this lesson, Michael explains the difference between clean and dirty blues tones.
In this lesson, Michael explains the common feel of funk.
Funk Tones and Songs
In this lesson, Michael explains how to get that funky feeling with your guitar and what artists and songs to listen to for inspiration.
In this lesson, Michael explains what it takes to get that country feel.
Country Tones and Songs
In this lesson, Michael explains how to get that country feeling with your guitar and what artists and songs to listen to for inspiration.
In this lesson, Michael explains common rock chords and sounds.
Rock Tones and Songs
In this lesson, Michael explains how to get that classic rock sound and which artists and songs to listen to for inspiration.
In this lesson, Michael explains the basics of rhythm in reggae music.
Reggae Tones and Songs
In this lesson, Michael explains how to get a reggae tone and which songs and artists to listen to for inspiration in this genre.
The Ballad
In this lesson, Michael explains common ballad themes.
The Ballad Tones and Songs
In this lesson, Michael explains how to get that classic ballad sound and which artists and songs to listen to for inspiration.
In this lesson, Michael explains common jazz chords and sounds.
Jazz Tones and Songs
In this lesson, Michael explains how to get that jazz feel with your guitar and what artists and songs to listen to for inspiration.
In this lesson, Michael explains the basics of rhythm in soul music.
Your Jam Band Toolkit
The Jam Band genre has turned into the center of extended improv and creativity when it comes to guitar playing. From bands like the Grateful Dead, The Allman Brothers Band, and Phish, we have heard some of the most interesting and amazing long form solos over the course of the last 50 years. You may ask yourself - how do I get my playing to that place? Knowing a bunch of scales and arpeggio exercises doesn't always cut it in this genre. The key to keeping your long solos interesting is learning how to create melodies. Michael Palmisano brings his vast guitar knowledge to the table to teach you subtle yet effective techniques for creating melodies to build your solos around.
Get ready to jam! In this introduction, Michael takes you through some of the key concepts in this series.
Major Scale Formula
First, let's review some basics. The foundation of everything we'll learn in this course is the major scale. After you memorize this simple formula, you'll be able to play the scale from anywhere on the guitar.
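As an aside (my own illustration, not part of the lesson materials), the step formula Michael refers to is commonly written whole-whole-half-whole-whole-whole-half; a few lines of Python can spell out the major scale from any root, using sharp names only for simplicity:

```python
# Generate a major scale from the W-W-H-W-W-W-H step formula.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
STEPS = [2, 2, 1, 2, 2, 2]  # semitones between the seven scale degrees

def major_scale(root):
    i = NOTES.index(root)
    scale = [root]
    for step in STEPS:
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

print(major_scale("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#']
```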
Chords in the Major Scale
Every note, or degree of the major scale has a chord associated with it. In this lesson, Michael shows us another formula for learning these chords, which will be the building blocks for future chord progressions.
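The quality pattern behind that formula is the same in every major key — major, minor, minor, major, major, minor, diminished — so once you can spell a scale, the diatonic triads follow mechanically. A small sketch of the idea (again my own, not course code):

```python
# Pair each degree of a major scale with its diatonic triad quality.
QUALITIES = ["maj", "min", "min", "maj", "maj", "min", "dim"]

def diatonic_chords(scale):
    """`scale` is the seven notes of a major scale, e.g. C major."""
    return [f"{note} {q}" for note, q in zip(scale, QUALITIES)]

print(diatonic_chords(["C", "D", "E", "F", "G", "A", "B"]))
# ['C maj', 'D min', 'E min', 'F maj', 'G maj', 'A min', 'B dim']
```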
Essential Major Triads
Now it's time to start distilling our full chord shapes down to smaller chord shapes or triads, which are more manageable, both audibly and physically.
Essential Minor Triads
We focused on the major triads in the previous lesson, now it's time to look at the minor shaped triads.
Your First Jam with Connecting Triads
In this lesson, Michael looks at connecting some of the triads we've learned in the previous lessons. You'll see how the root, 1st inversion and 2nd inversion triads are all within easy connection range of each other. All of this takes place over your first jam track in this course!
Middle String Set Triads
Now let's move these triads off of the high set of strings (E, B and G) and move them to the middle set of strings (B, G and D). Although the shapes will vary a bit from the high set, you'll see the connection points stay the same. Combine these with the high set, and these will make up most of the chord shapes you'll need to know to play this music!
Middle String Triad Jam
Let's connect the middle set triads in this lesson. Michael leads us over a jam track in which we'll see how to connect the root, 1st and 2nd inversion chord shapes up the neck.
High String Dyads
Distilling down our chord shapes even more now, we go from triads to dyads. This allows us to outline the chords with just the notes that define the function of the chord, in this case, the root and the third. First we'll take a look at inversions on the high string set.
Middle Set Dyads
In lesson 10 Michael continues his study on dyads. This time you're looking at them from the middle string set.
5 and 3 String Set Dyads
We can even use the dyads on lower strings. This can give a rich bassy sound when you want it, and the dyad principle is still the same as in the other lessons!
Your First Major Melody
This lesson will harken back to our major scale we learned in the first lesson. Believe it or not, you can make memorable melodies using only 5 notes in the major scale. Michael demonstrates how in this lesson.
Your First Minor Melody
Let's move to our first minor melody. If you know your relative minors from the chord scale we learned earlier (the minor 6 chord), this will be a piece of cake to understand. You will be able to use the same melody and fingering used in the major melody lesson with the proper adjustment.
Connecting Roots in a Progression
In this lesson, Michael gets you playing over a progression. The concept is simple: look for the root notes around the neck on each chord in the progression. Then, you can start to build other notes off of those root notes. Before you know it, you've got a melody going!
Connecting Thirds and Roots
Now we're going to move on to the next note that makes up a chord - the third. Michael shows us how to identify all the thirds in a string set, so that we can access them whenever we want to. Then, we'll see how we can easily connect them to our roots!
Connecting Fifths, Thirds and Roots
While it doesn't define the function of the chord, the fifth is nonetheless a very integral part of how the chord sounds. Now we'll identify those fifths around the neck, and learn to connect them to our roots and our thirds.
Adding the Missing Note
As you start to see the notes of our chord tones come together on the fretboard, it begs the question, what about the other notes in the major scale? Michael does some detective work in this lesson to learn the identity of our missing note!
Adding Chromatics
Now we begin to add new notes that are not found in our chord tones. These are called chromatic notes. They are in essence, the notes in between our chord tones, and can be used to connect them. You can hear great examples of this all the time in Jerry Garcia's playing.
Combining Melodies and Licks
We've taken a look at what will be the building blocks of our solos, chord tones and their functions. We've added chromatics, and have a nice tool kit to draw from. Michael now leads us to one of his favorite techniques, combining melodies we create from the tools in our tool kit, and licks that we already know.
Pro Tip: Your Six is Your Four
Consider this sort of a guitar hack. The sixth note of the scale you're in implies the 4 chord. How does that work you ask? Michael breaks it all down in this lesson!
Pro Tip: The Five Chord
The five chord is a chord that demands resolution. That's because it contains a leading tone that wants to go back to the root. In this lesson, Michael analyzes that tension, what the leading tone is, and how it relates to all that we've learned so far.
Adding Melodies to the Bass
As a rhythm player, mimicking the melody on the bass strings of your guitar can be a very effective technique in the Jam Band genre. Michael gives us some examples of how this concept is executed.
Adding Melodies to Triads
We've added melodies on the low end of the guitar, now let's look at adding them around the triads that we play on the higher end of the guitar. Michael uses our now familiar chord shapes and explores putting melodies in the spaces around them.
Adding Melodies to Dyads
Now we are going to add melodies to our dyads. These melodies, when connected to dyads, bring more of a lead guitar sound to your playing. Michael shows us some simple ways to integrate this cool technique into our playing.
Essential Progressions Part 1
It's time to put the tools in our tool kit to work on some progressions. Michael will play ideas over the track, then you will have a chance to use some of his ideas to create your own solos and rhythms. The first progression is a track in the style of Althea.
This track contains the classic progression of I-b7-IV. Connecting chord tones and using dyads are just a couple of techniques Michael uses in this lesson.
Now we move to the Am-D7 progression. Start with big chords, then distill them down to our triads and dyads, then we'll begin to create melodies from our chord tone knowledge.
Now we look at the classic sounding track that we have used a few times in the course already. The chord progression is simply a B major chord to an A major chord. Be sure to use all the techniques we've covered: full chords, triads, dyads then create melodies.
This progression is reminiscent of Franklin's Tower. Again, use your full chords then incrementally break them down into triads, dyads, then play melodies that will connect the chord tones. Good luck!
A Final Challenge
This last progression will present the challenge of throwing in a diminished chord. Michael uses all the techniques we've learned so far, but also specifically shows us how and what to play over the diminished chord when it comes around.
30 Essential Jam Band Licks
Delve into 30 of the most influential licks in the Jam Band genre. These licks were inspired by the greatest bands and greatest players. Combine this series with Michael Palmisano's "Your Jam Band Toolkit" course to complete your jam band learning.
Michael Palmisano gives an overview of the techniques and concepts he will cover in this course.
Chromatic Hammer-Ons to the Third
This lick features a Jerry Garcia inspired chromatic run, and uses hammer-ons for added effect. Incorporate this upbeat, happy lick into your solos.
Mixin' it Up
We examine a cool lick that features a B Mixolydian sound, and incorporates smooth slides and soulful vibrato. This lick can easily be moved between different chord positions for added versatility.
Pushing the Pedals
This B Mixolydian lick has a country flavor inspired by the pedal steel. It doesn't have a flat 7, which helps create a bright, major vibe.
Arpeggiated Blue Sky
This lick features a fast arpeggiated run that has a strong melodic structure and can easily be used over any Mixolydian progression.
Moving to the Top
It's time to learn a sequenced three note chromatic line in the style of the legendary Jerry Garcia. This lick is peppy, fast and perfect for playing over any Mixolydian progression.
Fun Mixolydian Slides
This lick makes use of slick melodic lines that make it one of Michael's favorites. With its combination of tactful anchors and exuberant slides, it is not only physically fun to play, but also features a bright, energetic sound.
Walkin' On Down
Licks and melodic runs are great, but to make things truly interesting you need to change things up. This rhythmic piece can act as the perfect counterbalancing force in your playing.
Connecting Through Confusion
This lick is a beautiful melodic run that combines both technique and feel to create a line that truly speaks.
Country Style Bending Lick
This quick lick has a distinctly country vibe and features soulful bends and vibrato.
Ascending Pentatonic Lick
This happy sounding lick ascends the neck using the major pentatonic scale, ultimately concluding with some solid vibrato.
Chicken Pickin Dyads
This fun country style lick makes liberal use of chicken pickin' and dyads.
Walk Up the Double Stop
This bright lick ascends the guitar neck using simple, yet effective double stops.
This bassline run helps to create tension while simultaneously adding meat to the low end of the progression.
Texture Groove Lick
Add some texture to your chord progressions using this palm-muted rhythmic pattern.
Dorian Slide
This fun melody based lick makes liberal use of slides and vibrato to convey a deep sense of emotion.
Sleepwalk From C to E
This mysterious sounding lick slides up the scale, and then back down to the tonic. Afterwards it switches to an accented rhythm section to add variety and soul.
Soulful Dyads
Join Michael as he explores a lick that makes liberal use of dyads. This added little bit of soul and mystery really sets this lick apart.
Slow and Simple
This deceivingly simple lick sounds and feels quite complicated, but in reality is only three notes accentuated by big bends and soulful vibrato.
Saturday Night Streets
This lick starts out with a simple melody and ends by outlining the chords using simple shapes. It's slow, slick and filled with delicious vibrato.
Distracted Focus
Combining one part melody and one part arpeggio, this lick really showcases the goodness that can happen when you combine the two.
Funky Things
This fun and funky lick uses elements of the classic blues box pentatonic scale while adding a few extra melodic notes. Add in some quick bends, fast vibrato and you have a recipe for a great sound.
Forever Ascending
Ascending up the neck is fun, and that's what this lick is all about. This is a great way to learn to move from open position up the neck.
Chromatic Adventure
This fast chromatic lick moves up the neck and across the fretboard, before ending in a slow, vibrato filled line.
Pump the Bass
This bassy, double stop infused rhythm lick is perfect for breaking your solos out of the top half of the neck.
Keepin' it Simple
This slow, simple lick gently ascends the neck with tasteful slides, before descending back down and culminating in calming vibrato.
Country Road Bends
This country-style lick aims to emulate the pedal steel. It moves up and down the neck, makes use of tight bends, and uses a measured vibrato to round it all out.
Summer Nights and Fireflies
Using measured timing, slow notes, controlled slides and warm vibrato, this lick will have you dreaming of warm summer nights under the stars.
Green Grass and Blue Skies
This singing, melodic line doesn't contain all that many notes, but achieves a warm, soulful sound by letting the vibrato speak.
This lick uses double stops that move up and down the neck to create the sounds of home.
Flying High Pentatonic
Played on the upper registers of the guitar, this lick uses the pentatonic scale, vibrato and tasteful hammer-ons and pull-offs to create its high-flying sound.
\section{Introduction}
In reinforcement learning an agent seeks to learn a high-reward policy
for selecting actions in a stochastic world without
prior knowledge of the world dynamics model and/or reward
function. In this paper we consider the setting in which
the agent is provided with an input set of potential policies,
and the agent's objective is to perform as close as possible
to the (unknown) best policy in the set. This scenario
could arise when the general domain involves a finite set
of types of RL tasks (such as different user models),
each with known best policies, and the
agent is now in one of the task types but doesn't know which one.
Note that this situation could occur both in discrete state and
action spaces, and in continuous state and/or action spaces:
a robot may be traversing one of a finite set of different
terrain types, but its sensors don't allow it to identify the
terrain type prior to acting. Another example is when the
agent is provided with a set of domain expert defined policies,
such as stock market trading strategies. Since the agent
has no prior information about which policy might perform
best in its current environment, this remains a challenging
RL problem.
Prior research has considered the related case when
an agent is provided with a fixed set of input (transition
and reward) models, and the current domain is an (initially
unknown) member of this
set~\cite{Dyagilev2008EWRL,DiukICML2009,BrunskillAAMAS2012}.
This actually provides the agent with more information than
the scenario we consider (given a model we can extract a
policy, but the reverse is not generally true), but more
significantly, we find substantial theoretical and
computational advantages from taking a model-free approach.
Our work is also closely related to the idea of policy
reuse~\cite{FernandezAAMAS2006}, where an agent tries to leverage
prior policies it found for past tasks to improve performance
on a new task; however, despite encouraging empirical
performance, this work does not provide any formal guarantees.
Most similar to our work is Talvitie and Singh's~\cite{talvitieIJCAI2007}
AtEase algorithm which also learns to select among an input set of
policies; however, in addition to algorithmic differences,
we provide a much more rigorous
theoretical analysis that holds for a more general
setting.
We contribute the Reinforcement Learning with Policy Advice (RLPA) algorithm.
RLPA is a model-free algorithm that, given an
input set of policies, takes an optimism-under-uncertainty
approach of adaptively selecting the policy that may have the
highest reward for the current task. We prove that the regret
of our algorithm relative to the (unknown) best policy in the
set scales with the square root of the time horizon,
linearly with the size of the provided policy set, and
is independent of the size of the state and
action space. The computational complexity of our algorithm
is also independent of the number of states and actions.
This suggests our approach may have significant benefits
in large domains over alternative approaches that typically scale with
the size of the state and action space, and our
preliminary simulation experiments provide empirical
support of this impact.
\section{Preliminaries}
A Markov decision process (MDP) $M$ is defined as a tuple $\langle \mathcal{S}, \mathcal A, P, r\rangle$ where $\mathcal{S}$ is the set of states, $\mathcal A$ is the set of actions, $P:\mathcal{S}\times\mathcal A\rightarrow \mathcal P(\mathcal{S})$ is the transition kernel mapping each state-action pair to a distribution over states, and $r:\mathcal{S}\times\mathcal A \rightarrow \mathcal P([0,1])$ is the stochastic reward function mapping state-action pairs to a distribution over rewards bounded in the $[0,1]$ interval.\footnote{The
extension to larger bounded regions $[0,d]$ is trivial and just
introduces an additional $d$ multiplier to the resulting regret bounds.}
A policy $\pi$ is a mapping from states to actions.
Two states $s_i$ and $s_j$ communicate with each other under policy $\pi$ if
the probability of transitioning between $s_i$ and $s_j$ under $\pi$ is
greater than zero. A state $s$ is recurrent under policy $\pi$ if the probability
of reentering state $s$ under $\pi$ is 1.
A recurrent class is a set of
recurrent states that all communicate with each other and no other states. Finally, a Markov process is unichain if its transition matrix consists of a single recurrent class with (possibly) some transient states \cite[Chap. 8]{puterman1994markov}.
We define the performance of
$\pi$ in a state $s$ as its expected average reward
\begin{equation}\label{eq:avg.reward}
\mu^{\pi}(s)=\lim_{T\rightarrow\infty} \frac{1}{T} \mathbb E\bigg[\sum\nolimits_{t=1}^T r(s_t,\pi(s_t))\bigg|s_0=s\bigg],
\end{equation}
where $T$ is the number of time steps and the expectation is
taken over the stochastic transitions and rewards.
If $\pi$ induces a unichain Markov process on $M$, then $\mu^\pi(s)$
is constant over all the states $s\in\mathcal{S}$, and we can
define the bias function $\lambda^\pi$ such that
\begin{align}\label{eq:bias}
\lambda^\pi(s) + \mu^\pi = \mathbb E\big[r(s,\pi(s)) + \lambda^\pi(s')\big].
\end{align}
Its corresponding span is
$sp(\lambda^\pi) = \max_{s} \lambda^\pi(s) - \min_{s} \lambda^\pi(s)$. The bias $\lambda^\pi(s)$ can be seen as the total difference between the reward accumulated starting from state $s$ and the average reward.
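To make Eqs.~\ref{eq:avg.reward} and \ref{eq:bias} concrete, the following small numerical sketch (not from the paper; the two-state chain and its numbers are hypothetical) computes the average reward $\mu^\pi$, the bias $\lambda^\pi$, and the span $sp(\lambda^\pi)$ for the Markov reward process induced by a fixed policy:

```python
# Hypothetical 2-state unichain example: average reward mu^pi (Eq. 1)
# and bias lambda^pi (Eq. 2) of the process induced by a fixed policy.

# transition probabilities under pi: p = P(s0 -> s0), q = P(s1 -> s0)
p, q = 0.9, 0.5
r0, r1 = 1.0, 0.0            # expected rewards r(s, pi(s))

# stationary distribution of the 2-state chain
pi0 = q / (1 - p + q)        # solves pi0*(1-p) = pi1*q with pi0+pi1 = 1
pi1 = 1 - pi0
mu = pi0 * r0 + pi1 * r1     # constant average reward (unichain case)

# bias from Eq. (2), pinned down by lambda(s0) = 0:
# lambda(s1) + mu = r1 + q*lambda(s0) + (1-q)*lambda(s1)
lam0 = 0.0
lam1 = (r1 - mu) / q
span = max(lam0, lam1) - min(lam0, lam1)   # sp(lambda^pi)
```

Here $\mu^\pi = 5/6$ and $sp(\lambda^\pi) = 5/3$; the latter is the kind of quantity that the constant $H$ bounds for the best policy in Assumption~\ref{asm:best.policy} below.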
In reinforcement learning~\cite{Sutton98} an agent does
not know the transition $P$ and/or reward $r$ model in advance. Its
goal is typically to find a policy $\pi$ that maximizes its obtained reward.
In this paper, we consider reinforcement learning in an MDP $M$
where the learning algorithm is
provided with an input set of $m$ deterministic
policies $\Pi=\{\pi_1,\ldots,\pi_m\}$.
Such an input set of policies could arise
in multiple situations, including: the policies may represent
near-optimal policies for a set of $m$
MDPs $\{M_1,\ldots,M_m\}$ which may be related to the current MDP $M$;
the policies may be the result of different approximation schemes (i.e., approximate policy iteration with different approximation spaces); or they may be provided by $m$ advisors. Our objective is to perform almost as well as
the best policy in the input set $\Pi$ on the new
MDP $M$ (with unknown $P$ and/or $r$).
Our results require the following mild assumption:
\begin{asm}\label{asm:best.policy}
There exists a policy $\pi^+\in\Pi$, which induces a unichain Markov process on the MDP $M$, such that the average reward $\mu^+= \mu^{\pi^+} \geq \mu^\pi(s)$ for any state $s\in\mathcal{S}$ and any policy $\pi\in\Pi$.
We also assume that $sp(\lambda^{\pi^+})\leq H$, where $H$ is a finite constant.\footnote{One can easily prove that the upper bound $H$ always exists for any unichain Markov reward process (see \cite[Chap. 8]{puterman1994markov}).}
\end{asm}
This assumption trivially holds when the optimal policy $\pi^*$ is in the set $\Pi$. Also, whenever all the policies in $\Pi$ induce unichain Markov processes, the existence of $\pi^+$ is guaranteed.\footnote{Note that Assumption \ref{asm:best.policy} in general is a weaker assumption than assuming MDP $M$ is ergodic or unichain,
which would require that the induced Markov chains under \emph{all} policies be
recurrent or unichain, respectively: we only require that the best policy
in the input set must induce a unichain Markov process.}
A popular measure of the performance of a reinforcement learning algorithm
over $T$ steps is its regret relative to executing the
optimal policy $\pi^*$ in $M$.
We evaluate the regret relative to the best policy $\pi^+$ in the
input set $\Pi$,
\begin{align}\label{eq:regret}
\Delta(s) = T\mu^{+}-\sum\nolimits_{t=1}^T r_t,
\end{align}
where $r_t\sim r(\cdot|s_{t},a_{t})$ and $s_0=s$. We notice that this definition of regret differs from the standard definition of regret by an (approximation) error $T(\mu^*-\mu^+)$ due to the possible sub-optimality of the policies in $\Pi$
relative to the optimal policy for MDP $M$. Further discussion on this definition is provided in Sec.~\ref{s:conclusions}.
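As a toy illustration (not part of the paper), the regret of Eq.~\ref{eq:regret} can be computed directly from a trace of observed rewards once $\mu^+$ is known:

```python
# Toy sketch of Eq. (3): regret of an observed reward trace against the
# (in practice unknown) best-in-set average reward mu_plus.

def regret(rewards, mu_plus):
    T = len(rewards)
    return T * mu_plus - sum(rewards)

# hypothetical trace: 4 steps of rewards while the best policy in Pi
# has average reward 0.8
delta_regret = regret([0.4, 0.6, 0.5, 0.5], 0.8)  # about 1.2
```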
\section{Algorithm}\label{s:algorithm}
\begin{algorithm}[t!]
\caption{Reinforcement Learning with Policy Advice (RLPA) }
\label{alg:B.MDP}
\begin{algorithmic}[1]
\Require Set of policies $\Pi$, confidence $\delta$, span function $f$
\State Initialize $t=0$, $i=0$
\State Initialize $n(\pi)=1$, $\widehat{\mu}(\pi)=0$, $R(\pi)=0$ and $K(\pi)=1$ for all $\pi\in\Pi$
\While {$t \leq T$}
\State Initialize $t_i = 0, \;T_i=2^i, \;\Pi_i=\Pi,\; \widehat H=f(T_i)$
\State $i=i+1$
\While { $t_i \leq T_i$ \& $\Pi_i\neq\emptyset$ } \textbf{\textit{(run trial)}}
\State $c(\pi) = (\widehat H+1) \sqrt{48\frac{\log (2t/\delta)}{n(\pi)}}+\widehat H \frac{K(\pi)}{n(\pi)}$
\State $B(\pi) = \widehat{\mu}(\pi) + c(\pi)$
\State $\widetilde{\pi}=\arg\max_{\pi\in\Pi_i} B(\pi)$
\State $v(\widetilde\pi)=1$
\WhileNoDo{$t_i\leq T_i$ \& $v(\widetilde{\pi})\!<\! n(\widetilde{\pi})$ \&}
\State $\widehat{\mu}(\widetilde{\pi})-\frac{R(\widetilde \pi)}{n( \widetilde \pi)+v( \widetilde \pi)} \!\leq\!\! c(\widetilde{\pi}) \!+\! (\widehat H\!+\!1)\sqrt{48\frac {\log(2t/\delta)}{n( \widetilde \pi)+v(\widetilde \pi)}} + \widehat H \frac{K(\widetilde\pi)}{n(\widetilde\pi)+v(\widetilde \pi)}$ \algorithmicdo
\State \textbf{\textit{(run episode)}}
\State $t=t+1$, $t_i = t_i + 1$
\State Take action $\widetilde{\pi}(s_t)$, observe $s_{t+1}$ and $r_{t+1}$
\State $v(\widetilde \pi)=v(\widetilde \pi)+1$ , $R(\widetilde \pi)= R(\widetilde \pi)+r_{t+1}$
\EndWhile
\State $K(\widetilde\pi)=K(\widetilde\pi)+1$
\If {$\widehat{\mu}(\widetilde{\pi})-\frac{R(\widetilde \pi)}{n(\widetilde \pi)+v( \widetilde \pi)} > c(\widetilde{\pi}) + (\widehat H\!+\!1)\sqrt{48\frac {\log(2t/\delta)}{n( \widetilde \pi)+v(\widetilde \pi)}}+\widehat H \frac{K(\widetilde\pi)}{n(\widetilde\pi)+v(\widetilde \pi)}$}
\State $\Pi_i=\Pi_i-\{\widetilde\pi\}$
\EndIf
\State $n(\widetilde \pi)=n(\widetilde \pi)+v(\widetilde \pi)$ , $\widehat{\mu}(\widetilde{\pi}) = \frac{R(\widetilde{\pi})}{n(\widetilde \pi)}$
\EndWhile
\EndWhile
\end{algorithmic}
\end{algorithm}
In this section we introduce the Reinforcement Learning with Policy Advice (RLPA) algorithm (Alg.~\ref{alg:B.MDP}).
Intuitively, the algorithm seeks to identify and use the policy in the
input set $\Pi$ that yields the highest average reward on the
current MDP $M$. As the average reward of each $\pi \in \Pi$ on
$M$, $\mu^{\pi}$, is initially unknown, the algorithm proceeds by estimating
these quantities by executing the different $\pi$ on the current
MDP. More concretely, RLPA executes a series of trials, each of which consists of a series of episodes.
Within each trial the algorithm selects the policies in $\Pi$ with the objective of effectively balancing between the exploration of all the policies
in $\Pi$ and the exploitation of the most promising ones.
Our procedure for doing this falls within the popular class
of ``optimism in the face of uncertainty'' methods. To do this,
at the start of each episode, we
define an upper bound on the possible average reward of
each policy (Line 8): this bound is computed as a
combination of the average reward observed so far for this
policy, $\hat{\mu}(\pi)$, the number of time steps this
policy has been executed, $n(\pi)$, and $\widehat{H}$, which
represents a guess of the span of the best policy, $H^+$.
We then select the policy with the
maximum upper bound $\widetilde{\pi}$ (Line 9) to run for this episode.
Unlike in multi-armed bandit settings where a
selected arm is pulled for only one step, here the
MDP policy is run for up to $n(\pi)$ steps,
i.e., until its total number of execution steps is at most doubled.
If $\widehat{H} \geq H^+$ then the confidence
bounds computed on Line 8 are valid confidence intervals
for the true best policy $\pi^+$; however, they may
fail to hold for another policy $\pi$ whose span
$sp(\lambda^\pi) > \widehat{H}$. Therefore, when these
confidence bounds fail to hold (the condition specified
on Line 12), we can cut off execution of the episode,
since, if $\widehat{H} \geq H^+$, the executed policy is
(with high probability) not the best policy $\pi^+$.\footnote{See Sec. \ref{ss:gap.independent} for further discussion on the necessity of the condition on Line 12.}
In this case, we can eliminate the current policy
$\widetilde{\pi}$ from the set of policies considered in this
trial (see Line 20). After an episode terminates,
the parameters of the current policy $\widetilde{\pi}$
(the number of steps $n(\pi)$ and average reward $\widehat{\mu}(\pi)$) are updated,
new upper bounds on the policies are computed, and the next
episode proceeds. As the average reward estimates converge,
the better policies will be chosen more often.
Note that since we do not know $H^+$ in advance, we
must estimate it online: otherwise, if
$\widehat{H}$ is not a valid upper bound for the span $H^+$ (see Assumption~\ref{asm:best.policy}), a trial might eliminate
the best policy $\pi^+$, thus incurring a significant regret.
We address this by successively
doubling the amount of time $T_i$ each trial is run, and defining
a $\widehat{H}$ that is a function $f$ of the current trial length. See
Sec.~\ref{ss:gap.independent} for a more detailed discussion on the choice of $f$. This procedure guarantees the algorithm will
eventually find an upper bound on the span $H^+$
and perform trials with very small regret in high probability. Finally, RLPA is an anytime algorithm since it does not need to know the time horizon $T$ in advance.
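To make the control flow concrete, below is a heavily simplified, self-contained Python sketch of a single RLPA trial. This is an illustration under our own assumptions, not the authors' implementation: each policy is reduced to a black-box reward sampler, states are abstracted away, and the consistency test of Line 12 and the policy elimination of Lines 19--21 are omitted. It shows the optimistic selection via the $B$-values and the run-until-samples-double episode rule.

```python
import math
import random

def b_value(mu_hat, n, K, H_hat, t, delta):
    # B(pi) = mu_hat(pi) + c(pi), with c(pi) as on Line 7 of Alg. 1
    c = (H_hat + 1) * math.sqrt(48 * math.log(2 * t / delta) / n) + H_hat * K / n
    return mu_hat + c

def rlpa_trial(sample_reward, m, T_i, H_hat, delta):
    """Run one trial of length T_i over m black-box policies.

    sample_reward(j) returns one reward in [0, 1] from executing
    policy j for a single step (states are abstracted away here)."""
    n = [1] * m          # n(pi): steps executed so far (initialized to 1)
    R = [0.0] * m        # R(pi): cumulative reward
    K = [1] * m          # K(pi): episode counter
    mu_hat = [0.0] * m   # empirical average reward
    t = 1
    while t <= T_i:
        # optimistic policy selection (Line 9)
        j = max(range(m),
                key=lambda i: b_value(mu_hat[i], n[i], K[i], H_hat, t, delta))
        v = 0
        # run the episode until the sample count of policy j doubles
        # (or the trial ends); the consistency test of Line 12 is omitted
        while t <= T_i and v < n[j]:
            R[j] += sample_reward(j)
            v += 1
            t += 1
        K[j] += 1
        n[j] += v
        mu_hat[j] = R[j] / n[j]
    return mu_hat, n

# hypothetical Bernoulli "policies" with mean rewards 0.3 and 0.7
rng = random.Random(0)
mu_hat, n = rlpa_trial(lambda j: float(rng.random() < (0.3, 0.7)[j]),
                       m=2, T_i=2000, H_hat=2.0, delta=0.05)
```

With roughly a thousand samples per policy, the empirical averages concentrate near the true means 0.3 and 0.7; in the full algorithm the trial length $T_i = 2^i$ and the guess $\widehat{H} = f(T_i)$ also grow across trials.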
\section{Regret Analysis}\label{s:regret}
In this section we derive a regret analysis of RLPA and we compare its performance to existing RL regret minimization algorithms. We first derive preliminary results used in the proofs of the two main theorems.
We begin by proving a general high-probability bound on the difference between the average reward $\mu^{\pi}$ and the empirical estimate $\widehat{\mu}(\pi)$
of a policy $\pi$ (throughout this discussion we mean the average
reward of a policy $\pi$ on the current MDP $M$).
Let $K(\pi)$ be the number of episodes $\pi$ has been run, each of them of length $v_k(\pi)$ ($k=1,\ldots,K(\pi)$). The empirical average $\widehat{\mu}(\pi)$ is defined as
\begin{align}\label{eq:empirical.average}
\widehat{\mu}(\pi) = \frac{1}{n(\pi)} \sum\nolimits_{k=1}^{K(\pi)} \sum\nolimits_{t=1}^{v_k(\pi)} r_t^k,
\end{align}
where $r_t^k \sim r(\cdot | s_t^k, \pi(s_t^k))$ is a random sample of the reward observed by taking the action suggested by $\pi$ and $n(\pi) = \sum_k v_k(\pi)$ is the total count of samples. Notice that in each episode $k$, the first state $s_1^k$ does not necessarily correspond to the next state of the last step $v_{k-1}(\pi)$ of the previous episode.
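As a quick illustrative sketch (hypothetical data, not from the paper), the pooled estimate of Eq.~\ref{eq:empirical.average} simply averages the rewards of all episodes of a policy:

```python
# Sketch of Eq. (5): empirical average reward of a policy, pooled over
# K(pi) episodes of possibly different lengths v_k(pi).

def empirical_average(episodes):
    """episodes: list of per-episode reward lists [r_1^k, ..., r_{v_k}^k]."""
    n = sum(len(ep) for ep in episodes)      # n(pi): total sample count
    total = sum(sum(ep) for ep in episodes)  # sum over k and t of r_t^k
    return total / n

# K(pi) = 2 episodes with v_1 = 2 and v_2 = 3 steps, so n(pi) = 5
mu_hat = empirical_average([[1.0, 0.0], [1.0, 1.0, 0.0]])  # 3/5 = 0.6
```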
\begin{lemma} \label{lem:conc:Markov}
Assume that a policy $\pi$ induces on the MDP $M$ a single recurrent class with some additional transient states, i.e., $\mu^{\pi}(s)=\mu^{\pi}$ for all $s\in\mathcal{S}$. Then the difference between the average reward and its empirical estimate (Eq.~\ref{eq:empirical.average}) is
\begin{equation*}
|\widehat{\mu}(\pi)- \mu^{\pi}| \leq 2(H^{\pi}+1)\sqrt{ \dfrac{2\log(2/\delta)}{n(\pi)}} + H^\pi \dfrac{K(\pi)}{n(\pi)},
\end{equation*}
with probability $\geq 1-\delta$, where $H^\pi = sp(\lambda^\pi)$ (see Eq.~\ref{eq:bias}).
\end{lemma}
\begin{small}
\begin{proof}
Let $r_{\pi}(s_t^k)= \mathbb E (r_t^k|s_t^k,\pi(s_t^k))$, $\epsilon_r(t,k)=r_t^k-r_{\pi}(s_t^k)$, and $P^{\pi}$ be the state-transition kernel under policy $\pi$ (i.e. for finite state and action spaces, $P^{\pi}$ is the $|S| \times |S|$ matrix where the $ij$-th entry is $p(s_j|s_i,\pi(s_i))$).
Then we have
\begin{align*}
\widehat{\mu}(\pi)- \mu^{\pi}&= \frac{1}{n(\pi)} \bigg(\sum_{k=1}^{K(\pi)}\sum_{t=1}^{v_k(\pi)} (r_t^k-\mu^{\pi}) \bigg) =\frac{1}{n(\pi)} \bigg(\sum_{k=1}^{K(\pi)}\sum_{t=1}^{v_k(\pi)} (\epsilon_r(t,k)+r_{\pi}(s_t^k)-\mu^{\pi})\bigg) \\
&=\frac{1}{n(\pi)} \bigg( \sum_{k=1}^{K(\pi)}\sum_{t=1}^{v_k(\pi)} (\epsilon_r(t,k)+\lambda^{\pi}(s_t^k)-P^{\pi}\lambda^{\pi}(s_t^k)) \bigg),
\end{align*}
where the second line follows from Eq.~\ref{eq:bias}.
Let $\epsilon_{\lambda}(t,k)=\lambda^{\pi}(s_{t+1}^k)-P^{\pi}\lambda^{\pi}(s_t^k)$. Then we have
\begin{align*}
\widehat{\mu}(\pi)- \mu^{\pi}&= \frac{1}{n(\pi)} \bigg( \sum_{k=1}^{K(\pi)}\sum_{t=1}^{v_k(\pi)} (\epsilon_r(t,k)+\lambda^\pi(s_{t+1}^k) - \lambda^\pi(s_{t+1}^k) + \lambda^{\pi}(s_t^k)-P^{\pi}\lambda^{\pi}(s_t^k)) \bigg) \\
&\leq\frac{1}{n(\pi)} \bigg(\sum_{k=1}^{K(\pi)}(H^{\pi}+\sum_{t=1}^{v_k(\pi)} \epsilon_r(t,k)+\sum_{t=1}^{v_k(\pi)-1}\epsilon_{\lambda}(t,k)) \bigg),
\end{align*}
where we bounded the telescoping sequence
$\sum_t (\lambda^{\pi}(s_t^k) - \lambda^\pi(s_{t+1}^k)) \leq sp(\lambda^\pi) = H^\pi$.
The sequences of random variables $\{\epsilon_r\}$ and $\{\epsilon_{\lambda}\}$, as well as their sums, are martingale difference sequences.
Therefore we can apply Azuma's inequality and obtain the bound
\begin{align*}
\widehat{\mu}(\pi)- \mu^{\pi}&\leq\dfrac{ K(\pi) H^{\pi}+2\sqrt{2 n(\pi) \log(1/\delta)} + 2H^{\pi}\sqrt{ 2(n(\pi)-K(\pi))\log(1/\delta)}}{n(\pi)}
\\
&\leq H^\pi \dfrac{K(\pi)}{n(\pi)}+2(H^{\pi}+1)\sqrt{ \dfrac{2\log(1/\delta)}{n(\pi)}},
\end{align*}
with probability $\geq 1-\delta$, where in the first inequality we bounded the error terms $\epsilon_r$, each of which is bounded in $[-1,1]$, and $\epsilon_\lambda$, bounded in $[-H^\pi,H^\pi]$. The other side of the inequality follows exactly the same steps. \qed
\end{proof}
\end{small}
In the algorithm $H^\pi$ is not known and at each trial $i$ the confidence bounds are built using the guess on the span $\widehat{H} = f(T_i)$, where $f$ is an increasing function.
For the algorithm to perform well, it must not
discard the best policy $\pi^+$ (Line 20). The following lemma guarantees that after a certain number of steps, with high probability the policy $\pi^+$ is not discarded in any trial.
\begin{lemma}\label{lem:inc.pi}
For any trial started after $T \geq T^+ = f^{-1}(H^{+})$, the probability that policy $\pi^+$ is excluded from $\Pi_A$ at any time is less than $(\delta/T)^{6}$.
\end{lemma}
\begin{small}
\begin{proof}
Let $i$ be the first trial such that $T_i \geq f^{-1}(H^{+})$, which implies that $\widehat{H}=f(T_i) \geq H^+$. The corresponding step $T$ is at most the sum of the length of all the trials before $i$, i.e., $T \leq \sum_{j=1}^{i-1} 2^j \leq 2^i$, thus leading to the condition $T \geq T^+ = f^{-1}(H^{+})$.
After $T \geq T^+$ the conditions in Lem.~\ref{lem:conc:Markov} (with Assumption~\ref{asm:best.policy}) are satisfied for $\pi^+$. Therefore
the confidence intervals hold with probability
at least $1-\delta$ and we have for $\widehat{\mu}(\pi^+)$
\begin{align*}
\widehat{\mu}(\pi^+)- \mu^+ &\leq 2(H^{+} +1)\sqrt{ \dfrac{2\log(1/\delta)}{n(\pi^+)}} + H^+ \dfrac{K(\pi^+)}{n(\pi^+)}
\\&\leq 2(\widehat{H}+1)\sqrt{ \dfrac{2\log(1/\delta)}{n(\pi^+)}} + \widehat{H} \dfrac{K(\pi^+)}{n(\pi^+)},
\end{align*}
where $n(\pi^+)$ is number of steps when policy $\pi^+$ has been selected until $T$. Using a similar argument as in the proof of
Lem.~\ref{lem:conc:Markov}, we can derive
\begin{equation*}
\mu^+-\frac{R(\pi^+)}{n(\pi^+) + v(\pi^+)} \leq 2(\widehat{H}+1)\sqrt{ \dfrac{2\log(1/\delta)}{n(\pi^+) + v(\pi^+)}} + \widehat H \frac{K(\pi^+)}{n(\pi^+)+v(\pi^+)},
\end{equation*}
with probability at least $1-\delta$.
Bringing together these two conditions, and applying the union bound,
we have that the condition on Line 12 holds with probability
at least $1-2 \delta$ and thus $\pi^+$ is never discarded.
More precisely Algo.~\ref{alg:B.MDP} uses slightly larger confidence intervals (notably $\sqrt{48\log(2t/\delta)}$ instead of $2\sqrt{2\log(1/\delta)}$), which guarantees that $\pi^+$ is discarded with at most a probability of $(\delta/T)^{6}$. \qed
\end{proof}
\end{small}
We also need the $B$-values (Line 8) to be valid upper confidence bounds on the average reward of the best policy $\mu^+$.
\begin{lemma}\label{lem:optimism}
For any trial started after $T \geq T^+ = f^{-1}(H^+)$, the $B$-value of $\widetilde{\pi}$ is an upper bound on $\mu^+$
with probability $\geq 1-(\delta/T)^6$.
\end{lemma}
\begin{small}
\begin{proof}
Lem.~\ref{lem:inc.pi} guarantees that the policy $\pi^+$ is in $\Pi_A$ except with probability at most $(\delta/T)^{6}$. This combined with Lem.~\ref{lem:conc:Markov} and the fact that $f(T)>H^+$ implies that the $B$-value $B(\pi^+) = \widehat{\mu}(\pi^+)+c(\pi^+)$ is a high-probability upper bound on $\mu^+$ and, since $\widetilde{\pi}$ is the policy with the maximum $B$-value, the result follows. \qed
\end{proof}
\end{small}
Finally, we bound the total number of episodes a policy could be selected.
\begin{lemma} \label{lem:episodes}
After $T \geq T^+ = f^{-1}(H^+)$ steps of Algo.~\ref{alg:B.MDP},
let $K(\pi)$ be the total number of episodes $\pi$ has been selected and $n(\pi)$ the corresponding total number of samples, then
\begin{equation*}
K(\pi) \leq \log_2( f^{-1}(H^+))+\log_2(T)+\log_2(n(\pi)),
\end{equation*}
with probability $\geq 1-(\delta/T)^6$.
\end{lemma}
\begin{small}
\begin{proof}
Let $n_k(\pi)$ be the total number of samples at the beginning of episode $k$ (i.e., $n_k(\pi) = \sum_{k'=1}^{k-1} v_{k'}(\pi)$). In each trial of Algo.~\ref{alg:B.MDP}, an episode is terminated
when the number of samples is doubled (i.e., $n_{k+1}(\pi) = 2n_k(\pi)$), or when the consistency condition (last condition on Line 12) is violated and the policy is discarded or the trial is terminated (i.e., $n_{k+1}(\pi) \geq n_k(\pi)$). We denote by $\overline{K}(\pi)$ the total number of episodes truncated before the number of samples is doubled; then $n(\pi)\geq 2^{K(\pi)-\overline{K}(\pi)}$. Since an episode terminates before the number of samples is doubled only when either the trial terminates or the policy is discarded, in each trial this can happen at most once per policy. Thus we can bound $\overline{K}(\pi)$ by the number of trials. A trial can either terminate because its maximum length $T_i$ is reached or when all the policies are discarded (Line 6). From Lem.~\ref{lem:inc.pi}, we have that after $T \geq f^{-1}(H^+)$, $\pi^+$ is never discarded w.h.p. and a trial only terminates when $t_i>T_i$. Since $T_i = 2^i$, it follows that the number of trials is bounded by $\overline{K}(\pi) \leq \log_2({f}^{-1}(H^+))+\log_2(T)$. So, we have $n(\pi)\geq 2^{K(\pi)- \log_2({f}^{-1}(H^+))-\log_2(T)}$, which implies the statement of the lemma. \qed
\end{proof}
\end{small}
Notice that if we plug this result in the statement of Lem.~\ref{lem:conc:Markov}, we have that the second term converges to zero faster than the
first term which decreases as $O(1/\sqrt{n(\pi)})$, thus in principle it could be possible to use alternative episode stopping criteria, such as $v(\pi) \leq \sqrt{n(\pi)}$. But while this would not significantly affect the convergence rate of $\widehat{\mu}(\pi)$, it may worsen the global regret performance in Thm.~\ref{thm:gap.independent}.
\subsection{Gap-Independent Bound}\label{ss:gap.independent}
We are now ready to derive the first regret bound for RLPA.
\begin{theorem}\label{thm:gap.independent}
Under Assumption~\ref{asm:best.policy} for any $T \geq T^+=f^{-1}(H^+)$ the regret of Algo.~\ref{alg:B.MDP} is bounded as
\begin{align*}
\Delta(s) \leq 24(f(T)+1)\sqrt{3Tm(\log(T/\delta))}+\sqrt{T}+6f(T)m(\log_2(T^+)+2\log_2(T)),
\end{align*}
with probability at least $1-\delta$ for any initial state $s\in\mathcal{S}$.
\end{theorem}
\begin{small}
\begin{proof}
We begin by bounding the regret from executing each policy $\pi$. We consider the $k(\pi)$-th episode when policy $\pi$ has been selected (i.e., $\pi$ is the optimistic policy $\widetilde{\pi}$) and we study its corresponding
total regret $\Delta_{\pi}$. We denote by $n_k(\pi)$ the number of steps of policy $\pi$ at the beginning of episode $k$ and
$v_k(\pi)$ the number of steps in episode $k$.
Also, at time step $T$,
for each policy $\pi$ let $K(\pi)$ denote the total
number of episodes in which $\pi$ was selected,
and let $v(\pi)$ and $n(\pi)$ denote the final values of
$v_k(\pi)$ and $n_k(\pi)$, respectively.
Similarly, let $B(\pi)$, $c(\pi)$, $R(\pi)$ and $\widehat{\mu}(\pi)$
be the latest values of these variables at time step $T$
for each policy $\pi\in\Pi$. Let $\mathcal E=\{\forall t=f^{-1}(H^+),\dots,T,\pi^+ \in \Pi_{A} \enspace \& \enspace B(\widetilde{\pi}) \geq \mu^+ \}$ be the event
under which $\pi^+$ is never removed from the set of
policies $\Pi_A$, and where the upper bound of the
optimistic policy $\widetilde{\pi}$, $B(\widetilde{\pi})$, is always as large
as the true average reward of the best policy $\mu^+$.
On the event $\mathcal E$, $\Delta_{\pi}$ can be bounded as
\begin{align*}
\Delta_{\pi}&= \sum_{k=1}^{ K(\pi)}\sum_{t=1}^{ v_k( \pi)} (\mu^+-r_t ) \overset{(1)}{\leq} \sum_{k=1}^{ K(\pi)}\sum_{t=1}^{ v_k( \pi)} ( B(\pi)- r_t )\leq (n(\pi)+v(\pi)) (\widehat{\mu}(\pi)+c(\pi)) - R(\pi)
\\&\overset{(2)}{\leq} (n(\pi)+v(\pi))\left( 3(f(T)+1)\sqrt{48\frac {\log(T/\delta)}{n(\pi)}}+3f(T) \frac{K(\pi)}{n(\pi)}\right)
\\
&\overset{(3)}{\leq} 24(f(T)+1)\sqrt{3 n( \pi)\log (T/\delta)} +6f(T)K(\pi),
\end{align*}
where in (1) we rely on the fact that $\pi$ is only executed when
it is the optimistic policy, and $B(\pi)$ is optimistic
with respect to $\mu^+$ according to Lem.~\ref{lem:optimism}.
(2) immediately follows from the stopping condition at Line 12 and the definition of $c(\pi)$. (3) follows from the condition on doubling the samples (Line 12) which guarantees $v(\pi) \leq n( \pi)$.
We now bound the total regret $\Delta$ by summing over all the policies.
\begin{align*}
\Delta &= \sum_{\pi\in\Pi}24(f(T)+1)\sqrt{3n( \pi)\log(T/\delta)} +6 f(T) \sum_{\pi\in\Pi} K(\pi)
\\
&\overset{(1)}{\leq}
24 (f(T)+1) \sqrt{3m \sum_{\pi\in\Pi}n( \pi)\log(T/\delta)} +6f(T) \sum_{\pi\in\Pi} K(\pi)
\\& \overset{(2)}{\leq}
24 (f(T)+1) \sqrt{3m T \log (T/\delta)} +6f(T) m (\log_2( f^{-1}(H^+))+2\log_2(T)),
\end{align*}
where in $(1)$ we use Cauchy-Schwarz inequality and (2) follows from $\sum_{\pi}n(\pi)\leq T$, Lem.~\ref{lem:episodes}, and
$\log_2(n(\pi)) \leq \log_2(T)$.
Since $T$ is an unknown time horizon, we need to provide a bound which holds with high probability uniformly over all the possible values of $T$. Thus we need to deal with the case when $\mathcal E$ does not hold. Based on Lem.~\ref{lem:conc:Markov} and by following similar lines to~\cite{UCRLAuer}, we can prove that the total regret of the episodes in which the best policy is discarded is bounded by $\sqrt{T}$ with probability at least $1-\delta/(12T^{5/4})$.
Due to space limitations, we omit the details, but we can then
prove the final result by
combining the regret in both cases (when $\mathcal E$ holds or
does not hold)
and taking union bound on all possible values of $T$. \qed
\end{proof}
\end{small}
A significant advantage of RLPA over generic RL algorithms
(such as UCRL2) is that the regret of RLPA is independent of
the size of the state and action spaces: in contrast,
the regret of UCRL2 scales as $O(S\sqrt{AT})$.
This advantage is obtained by exploiting the prior information that $\Pi$ contains good policies, which allows the algorithm to focus on testing their performance to identify the best, instead of building an estimate of the current MDP over the whole state-action space as in UCRL2. It is also
informative to compare this result to other methods using some form of prior knowledge. In~\cite{maillard2013optimal} the objective is to learn the optimal policy along with a state representation which satisfies the Markov property. The algorithm receives as input a set of possible state representation models and under the assumption that one of them is Markovian, the algorithm is shown to have a sub-linear regret. Nonetheless, the algorithm inherits the regret of UCRL itself and still displays a $O(S\sqrt{A})$ dependency on states and actions. In~\cite{Dyagilev2008EWRL} the Parameter Elimination (PEL) algorithm is provided with a set of MDPs. The algorithm is analyzed in the PAC-MDP framework and under the assumption that the true model actually belongs to the set of MDPs, it is shown to have a performance which does not depend on the size of the state-action space and only a $O(\sqrt{m})$ dependency on the number of MDPs $m$.\footnote{Notice that PAC bounds are always squared w.r.t. regret bounds, thus the original $m$ dependency in~\cite{Dyagilev2008EWRL} becomes $O(\sqrt{m})$ when compared to a regret bound.} In our setting, although no model is provided and no assumption on the optimality of $\pi^*$ is made, RLPA achieves the same dependency on $m$.
The span $sp(\lambda^\pi)$ of a policy is known to be a critical parameter determining how well and fast the average reward of a policy can be estimated using samples (see e.g.,~\cite{bartlett2009regal}). In Thm.~\ref{thm:gap.independent} we show that only the span $H^+$ of the best policy $\pi^+$ affects the performance of RLPA even when other policies have much larger spans. Although this result may seem surprising (the algorithm estimates the average reward for all the policies), it follows from the use of the third condition on Line 12, where an episode is terminated, and a policy is discarded, whenever the empirical estimates are not consistent with the guessed confidence interval. Let us consider the case when $\widehat{H} > H^+$ but $\widehat{H} < sp(\lambda^\pi)$ for a policy which is selected as the optimistic policy $\widetilde{\pi}$. Since the confidence intervals built for $\pi$ are not correct (see Lem.~\ref{lem:conc:Markov}), $\widetilde{\pi}$ could be selected for a long while before a different policy is chosen. On the other hand, the condition on the consistency of the observed rewards would discard $\pi$ (with high probability), thus increasing the chances of the best policy (whose confidence intervals are correct) to be selected. We also note that
$H^+$ appears as a constant in the regret through $\log_2(f^{-1}(H^+))$ and this suggests that the optimal choice of $f$ is $f(T) = \log(T)$, which would lead to a bound of order (up to constants and logarithmic terms) $\widetilde O(\sqrt{Tm} + m)$.
\subsection{Gap-Dependent Bound}
Similar to~\cite{UCRLAuer}, we can derive an alternative bound for RLPA where the dependency on $T$ becomes logarithmic and the gap between the average rewards of the best and second-best policies appears. We first need to introduce two assumptions.
\begin{asm}[Average Reward]
\label{asm:avg.reward}
Each policy $\pi\in\Pi$ induces on the MDP $M$ a single recurrent class with some additional transient states, i.e., $\mu^{\pi}(s)=\mu^{\pi}$ for all $s\in\mathcal{S}$. This implies that $H^\pi = sp(\lambda^\pi) < +\infty$.
\end{asm}
\begin{asm}[Minimum Gap]
\label{asm:gap}
Define the gap between the average reward of the best policy $\pi^+$ and the average reward of any other policy as $\Gamma(\pi,s)=\mu^{+}-\mu^{\pi}(s)$ for all $s\in\mathcal{S}$. We then assume that for all $\pi\in \Pi-\{\pi^{+}\}$ and $s\in\mathcal{S}$, $\Gamma(\pi,s)$ is uniformly bounded from below by a positive constant $\Gamma_{\min}>0$.
\end{asm}
\begin{theorem}[Gap Dependent Bounds]
\label{thm:gap.dependent}
Let Assumptions \ref{asm:avg.reward} and \ref{asm:gap} hold. Run Algo.~\ref{alg:B.MDP} with the choice of $\delta=\sqrt[3]{1/T}$ (the stopping time $T$ is assumed to be known here). Assume that for all $\pi\in \Pi$ we have $H^{\pi}\leq H_{\max}$. Then the expected regret of Algo.~\ref{alg:B.MDP}, after $T\geq T^+ = f^{-1}(H^{+})$ steps, is bounded as
\begin{align}\label{eq:gap.dependent}
\mathbb E(\Delta(s))= O\bigg( m \frac{(f(T)+H_{\max})(\log_2(mT)+\log_2(T^+))}{\Gamma_{\min}} \bigg),
\end{align}
for any initial state $s\in\mathcal{S}$.
\end{theorem}
\begin{small}
\begin{proof} \textbf{(sketch)}
Unlike for the proof of Thm.~\ref{thm:gap.independent}, here we need a more refined control on the number of steps of each policy as a function of the gaps $\Gamma(\pi,s)$. We first notice that Assumption~\ref{asm:avg.reward} allows us to define $\Gamma(\pi) = \Gamma(\pi,s) = \mu^+ - \mu^\pi$ for any state $s\in\mathcal{S}$ and any policy $\pi\in\Pi$. We focus on the episode at time $t$ in which an optimistic policy $\widetilde{\pi}\neq\pi^+$ is selected for the $k$-th time, and we denote by $n_k(\widetilde{\pi})$ the number of steps of $\widetilde{\pi}$ before this episode and by $v_k(\widetilde{\pi})$ the number of steps during it. The cumulative reward $R_k(\widetilde{\pi})$ during episode $k$ is obtained as the sum of $\widehat{\mu}_k(\widetilde{\pi})n_k(\widetilde{\pi})$ (the previous cumulative reward) and the $v_k(\widetilde{\pi})$ rewards received since the beginning of the episode. Let $\mathcal E=\{\forall t=f^{-1}(H^+),\dots,T,\; \pi^{+} \in \Pi_{A} \enspace \& \enspace B(\widetilde{\pi}) \geq \mu^+ \}$ be the event under which $\pi^+$ is never removed from the set of active policies $\Pi_A$ after $f^{-1}(H^+)$ steps (see Lem.~\ref{lem:inc.pi}), and the upper bound $B(\widetilde{\pi})$ of the optimistic policy $\widetilde{\pi}$ is always at least as large as the true average reward $\mu^+$ of the best policy. On the event $\mathcal E$ we have
\begin{align*}
&3(\widehat{H}+1)\sqrt{48\frac {\log(t/\delta)}{n_k(\widetilde{\pi})}}+3\frac{K(\widetilde\pi)}{n_k(\widetilde{\pi})} \overset{(1)}{\geq} B(\widetilde{\pi})-\frac{R_k(\widetilde{\pi})}{n_k(\widetilde{\pi})+v_k(\widetilde{\pi})} \\
\overset{(2)}{\geq} &\mu^+ - \frac{R_k(\widetilde{\pi})}{n_k(\widetilde{\pi})+v_k(\widetilde{\pi})}\geq \mu^+ - \mu^{\widetilde{\pi}} + \frac{1}{n_k(\widetilde{\pi})+v_k(\widetilde{\pi})} \sum_{t=1}^{n_k(\widetilde{\pi})+v_k(\widetilde{\pi})}(\mu^{\widetilde{\pi}}-r_{t})
\\
\overset{(3)}{\geq}&\Gamma_{\min}+\frac{1}{n_k(\widetilde{\pi})+v_k(\widetilde{\pi})} \sum_{t=1}^{n_k(\widetilde{\pi})+v_k(\widetilde{\pi})}(\mu^{\widetilde{\pi}}-r_{t})\overset{(4)}{\geq}\Gamma_{\min}- H^{\widetilde \pi} \sqrt{48\frac{\log (t/\delta)}{n_k(\widetilde{\pi})}}-H^{\widetilde \pi}\frac{K(\widetilde\pi)}{n_k(\widetilde{\pi})},
\end{align*}
with probability $1-(\delta/t)^6$. Inequality $(1)$ is enforced by the episode stopping condition on Line~12 and the definition of $B(\pi)$, $(2)$ is guaranteed by Lem.~\ref{lem:optimism}, $(3)$ relies on the definition of the gap and Assumption~\ref{asm:gap}, while $(4)$ is a direct application of Lem.~\ref{lem:conc:Markov}. Rearranging the terms, and applying Lem.~\ref{lem:episodes}, we obtain
\begin{equation*}
n_k(\widetilde{\pi}) \Gamma_{\min}\leq (3\widehat{H}+3+H^{\widetilde \pi}) \sqrt{n_k(\widetilde\pi)} \sqrt{48\log(t/\delta)}+4H^{\widetilde \pi}(2\log_2(t)+\log_2( f^{-1}(H^{+}) )).
\end{equation*}
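This is a quadratic inequality in $\sqrt{n_k(\widetilde{\pi})}$, and the next step is an instance of the elementary bound: for $x \geq 0$ and $a,b,\Gamma > 0$,
\begin{equation*}
\Gamma x^2 \leq a x + b \;\Longrightarrow\; x \leq \frac{a + \sqrt{a^2 + 4\Gamma b}}{2\Gamma} \leq \frac{a}{\Gamma} + \sqrt{\frac{b}{\Gamma}},
\end{equation*}
applied here with $x = \sqrt{n_k(\widetilde{\pi})}$, $a = (3\widehat{H}+3+H^{\widetilde \pi})\sqrt{48\log(t/\delta)}$ and $b = 4H^{\widetilde \pi}(2\log_2(t)+\log_2(f^{-1}(H^{+})))$.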
By solving the inequality w.r.t. $n_k(\widetilde{\pi})$ we obtain
\begin{equation}
\label{eq:n.bound}
\sqrt{n_k(\widetilde\pi)}\leq \frac{(3\widehat H+3+ H^{\widetilde \pi} )\sqrt{48\log(t/\delta)}+2\sqrt{H^{\widetilde \pi}\Gamma_{\min}(2\log_2(t)+\log_2( f^{-1}(H^{+})))}}{\Gamma_{\min}},
\end{equation}
w.p. $1-(\delta/t)^6$. This implies that on the event $\mathcal E$, after $t$ steps, RLPA acted according to a suboptimal policy $\pi$ for no more than $O(\log(t)/\Gamma_{\min}^2)$ steps. The rest of the proof follows similar steps as in Thm.~\ref{thm:gap.independent} to bound the regret of all the suboptimal policies in high probability. The expected regret of $\pi^+$ is bounded by $H^+$ and standard arguments similar to~\cite{UCRLAuer} are used to move from high-probability to expectation bounds. \qed
\end{proof}
\end{small}
Note that although the bound in Thm.~\ref{thm:gap.independent} is stated in high probability, it is easy to turn it into a bound in expectation with almost identical dependencies on the main characteristics of the problem and compare it to the bound of Thm.~\ref{thm:gap.dependent}.
The major difference is that the bound in Eq.~\ref{eq:gap.dependent} shows a $O(\log(T)/\Gamma_{\min})$ dependency on $T$ instead of $O(\sqrt{T})$. This suggests that whenever there is a big margin between the best policy and the other policies in $\Pi$, the algorithm is able to accordingly reduce the number of times suboptimal policies are selected, thus achieving a better dependency on $T$. On the other hand, the bound also shows that whenever the policies in $\Pi$ are very similar, it might take the algorithm a long time to find the best policy, although the regret cannot be larger than $O(\sqrt{T})$ as shown in Thm.~\ref{thm:gap.independent}.
We also note that while Assumption~\ref{asm:gap} is needed to allow the algorithm to ``discard'' suboptimal policies with only a logarithmic number of steps, Assumption~\ref{asm:avg.reward} is more technical and can be relaxed. It is possible to instead only require that each policy $\pi\in\Pi$ has a bounded span, $H^\pi < \infty$, which is a milder condition than requiring a constant average reward over states (i.e., $\mu^\pi(s) = \mu^\pi$).
\section{Computational Complexity}\label{s:computation}
As shown in Algo.~\ref{alg:B.MDP}, RLPA runs over multiple trials and episodes in which policies are selected and run. The largest computational cost in RLPA is incurred at the start of each episode, when the $B$-values of all the policies currently active in $\Pi_A$ are computed and the most optimistic one is selected. This is an $O(m)$ operation. The total number of episodes can be upper bounded by $2\log_2(T)+\log_2(f^{-1} (H^+))$ (see Lem.~\ref{lem:episodes}). This means the overall computational cost of RLPA is $O(m (\log_2(T)+\log_2(f^{-1} (H^+))))$.
Note there is no explicit dependence on the size of the state and action space. In contrast, UCRL2 has a similar number of trials, but requires solving extended value iteration to compute the optimistic MDP policy. Extended value iteration requires $O(|S|^2 |A| \log (|S|))$ computation per iteration: if $D$ is the number of iterations required to complete extended value iteration, then the resulting cost is $O(D|S|^2 |A| \log (|S|))$.
Therefore UCRL2, like many generic
RL approaches, will suffer a computational complexity
that scales quadratically with the number of states,
in contrast to RLPA, which depends linearly on the
number of input policies and is independent of the
size of the state and action space.
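To make the $O(m)$ per-episode cost concrete, the selection step can be sketched as follows. This is a minimal sketch: the exact form of the confidence bonus is our paraphrase of the $B$-value, not the paper's precise definition, and the `stats` data structure is hypothetical.

```python
import math

def select_optimistic_policy(stats, t, delta, c=48.0):
    """O(m) selection step: compute a B-value for each active policy and
    return the most optimistic one.  `stats` maps each policy to a tuple
    (mu_hat, H_hat, n, K): empirical average reward, estimated span,
    number of steps, and number of episodes run so far.  The bonus form
    below is an assumed sketch of RLPA's confidence bound."""
    best_pi, best_b = None, -math.inf
    for pi, (mu_hat, H_hat, n, K) in stats.items():
        if n == 0:
            return pi, math.inf  # untested policies are maximally optimistic
        bonus = (H_hat + 1) * math.sqrt(c * math.log(t / delta) / n) + H_hat * K / n
        b = mu_hat + bonus
        if b > best_b:
            best_pi, best_b = pi, b
    return best_pi, best_b
```

A single pass over the $m$ active policies suffices, which is where the linear dependency on the size of the policy set (and the independence from $|S|$ and $|A|$) comes from.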
\section{Experiments}
In this section we provide some preliminary empirical evidence of the benefit
of our proposed approach. We compare our approach with two other baselines.
As mentioned previously,
UCRL2~\cite{UCRLAuer} is a well known algorithm for generic RL problems
that enjoys strong theoretical guarantees in terms of high probability regret bounds with the optimal rate of $O(\sqrt{T})$. Unlike our approach,
UCRL2 does not make use of any policy advice, and its regret
scales with the number of states and actions as $O(|\mathcal{S}|\sqrt{|\mathcal A|})$. To provide a more
fair comparison, we also introduce a natural variant of UCRL2,
Upper Confidence with Models (UCWM), which takes as input a
set of MDP models $\mathcal{M}$ which is assumed to contain the actual model $M$.
Like UCRL2, UCWM computes confidence intervals over the
task's model parameters, but then selects the optimistic
policy among the optimal policies for the subset of models
in $\mathcal{M}$ consistent with the confidence interval.
This may result in significantly tighter upper bounds on the optimal value function compared to UCRL2, and may also accelerate the learning process. If the set of possible models shrinks to one, then UCWM will seamlessly transition to following the optimal policy for the identified model.
UCWM requires as input a set of MDP models, whereas our RLPA
approach requires only input policies.
We consider a square grid world with
$4$ actions: up ($a_1$), down ($a_2$), right ($a_3$) and left ($a_4$) for every state. A \emph{good} action succeeds with probability $0.85$, and goes in one of the other directions with probability $0.05$ (unless that would cause it to go into a wall) and
a \emph{bad} action stays in the same place with probability $0.85$ and goes in one of the $4$ other directions with probability $0.0375$.
We construct four variants of this grid world
$\mathcal M=\{M_1,M_2,M_3,M_4\}$.
In model $1$ ($M_1$) good actions are $1$ and $4$,
in model $2$ ($M_2$) good actions are $1$ and $2$,
in model $3$ good actions are $2$ and $3$,
and in model $4$ good actions are $3$ and $4$.
All other actions in each MDP are bad actions.
The reward in all MDPs is the same and is $-1$ for all
states except for the four corners which are: 0.7 (upper left),
0.8 (upper right), 0.9 (lower left) and 0.99 (lower right).
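The four grid-world variants can be constructed programmatically. The sketch below builds the transition kernel for one variant; the state encoding and the convention that blocked moves leave the state unchanged are our assumptions, not taken from the paper.

```python
import numpy as np

def gridworld_kernel(side, good_actions):
    """Transition kernel P[s, a, s'] for one grid-world variant.
    Actions: 0=up, 1=down, 2=right, 3=left.  A good action moves as
    intended w.p. 0.85 and in each other direction w.p. 0.05; a bad
    action stays put w.p. 0.85 and moves in each direction w.p. 0.0375.
    Moves into a wall leave the state unchanged (our convention)."""
    S = side * side
    moves = {0: (-1, 0), 1: (1, 0), 2: (0, 1), 3: (0, -1)}
    P = np.zeros((S, 4, S))
    for s in range(S):
        r, c = divmod(s, side)
        for a in range(4):
            if a in good_actions:
                probs = {a: 0.85}
                probs.update({b: 0.05 for b in range(4) if b != a})
            else:
                probs = {b: 0.0375 for b in range(4)}
                P[s, a, s] += 0.85  # bad action: stay put most of the time
            for b, p in probs.items():
                dr, dc = moves[b]
                nr, nc = r + dr, c + dc
                if 0 <= nr < side and 0 <= nc < side:
                    P[s, a, nr * side + nc] += p
                else:
                    P[s, a, s] += p  # bumping into a wall: stay
    return P
```

Each of the models $M_1,\dots,M_4$ is obtained by passing the appropriate pair of good actions (e.g., right and left for $M_4$).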
UCWM receives as input the MDP models and
RLPA receives as input the optimal policies of $\mathcal{M}$.
We evaluate the performance of each algorithm in terms of
the per-step regret,
$\hat\Delta=\Delta/T$ (see Eq.~\ref{eq:regret}). Each run is $T=100000$
steps and we average the performance on $100$ runs. The agent is randomly placed at one of the states of the grid at the beginning of each round. We assume that the true MDP model is $M_4$. Notice that in this case $\pi^*\in\Pi$, thus $\mu^+=\mu^*$ and the regret compares to the optimal average reward. The identity of the true MDP is not known by the agent. For RLPA we set $f(t)=\log(t)$.\footnote{See Sec. \ref{ss:gap.independent} for the rationale behind this choice.} We construct grid worlds of various
sizes and compare the resulting performance of the three algorithms.
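For clarity, the reported quantity can be computed from a simulated reward trace as follows (the function name is ours; `mu_star` stands for the optimal average reward $\mu^*$):

```python
import numpy as np

def per_step_regret(rewards, mu_star):
    """Per-step regret for a run of T steps: (T*mu_star - sum of
    collected rewards) / T, i.e. the average shortfall per step."""
    T = len(rewards)
    return (T * mu_star - float(np.sum(rewards))) / T
```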
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.8\textwidth]{reg_state.eps}
\end{center}
\vspace{-.3in}
\caption{Per-step regret versus number of states.}
\vspace{-0.1in}
\label{fig:grid.perstate}
\end{figure}
\begin{figure}[t]
\begin{centering}
\subfigure[Avg. per-step regret vs time step.]{\label{fig:grid.perstep}
\includegraphics[width=0.5\textwidth]{reg_temp.eps}}
\subfigure[Running time versus $|S|$.]{\label{fig:grid.comp}
\includegraphics[width=0.5\textwidth]{comp_state.eps}}
\end{centering}
\vspace{-0.25in}
\end{figure}
Fig.~\ref{fig:grid.perstate} shows the per-step regret of the algorithms as a function of the number of states.
As predicted by the theoretical bounds, the per-step
regret $\widehat\Delta$ of UCRL2
significantly increases as the number of states increases,
whereas the average regret of our RLPA is essentially
independent of the state space size\footnote{
The RLPA regret bounds depend on the bias of the
optimal policy which may be indirectly a function
of the structure and size of the domain.}.
Although UCWM has a lower regret than RLPA
for a small number of states, it quickly loses its advantage
as the number of states grows. UCRL2's per-step regret
plateaus after a small number of states since it is
effectively reaching the maximum possible regret given
the available time horizon.
To demonstrate the performance of each approach for a
single task, Fig.~\ref{fig:grid.perstep} shows
how the per-step regret changes with different time horizons for a grid-world
with $64$ states.
RLPA demonstrates a superior regret throughout the run with a decrease
that is faster than both UCRL2 and UCWM.
The slight periodic increases in regret of RLPA are when a new trial is
started, and all policies are again considered.
We also note that the slow rate of decrease for all three algorithms
is due to confidence intervals dimensioned according to the theoretical results which are often over-conservative, since
they are designed to hold in the worst-case scenarios.
Finally, Fig.~\ref{fig:grid.comp} shows the average running time of one trial of the algorithm as a function of the number of states. As
expected, RLPA's running time is independent of the
size of the state space,
whereas the running time of the other algorithms increases.
Though a simple domain, these empirical results support our earlier
analysis, demonstrating RLPA exhibits a regret and computational
performance that is essentially independent of the size of the
domain state space. This is a significant advantage over UCRL2, as we might expect because RLPA can efficiently leverage input policy advice. Interestingly, we obtain a significant improvement also over the more competitive baseline UCWM.
\section{Related Work}
The setting we consider relates to the multi-armed bandit literature,
where an agent seeks to optimize its reward by uncovering the arm
with the best expected reward. More specifically, our setting
relates to restless~\cite{ortnerALT2012} and rested~\cite{tekinIEEE2012}
bandits, where each arm's distribution is generated by an
(unknown) Markov chain that either transitions at every step,
or only when the arm is pulled, respectively. Unlike
either restless or rested bandits, in our case each ``arm'' is
itself an MDP policy, where different actions may be chosen.
However, the most significant distinction may be that in our setting there is an underlying state that couples the rewards obtained across the policies (the selected action depends on both the policy/arm selected, and the state),
in contrast to the rested and restless
bandits where the Markov chains of each arm evolve independently.
Prior research has demonstrated a significant improvement
in learning in a discrete state and action
RL task whose Markov decision process
model parameters are constrained to lie in a finite set.
In this case, an objective of maximizing the expected
sum of rewards can be framed as planning in a finite-state
partially observable Markov decision process~\cite{PoupartICML2006}: if
the parameter set is not too large, off-the-shelf POMDP
planners can be used to yield
significant performance improvements over state-of-the-art
RL approaches~\cite{BrunskillAAMAS2012}. Other work~\cite{Dyagilev2008EWRL}
on this setting has proved that the sample complexity of
learning to act well scales independently of the size
of the state and action space, and linearly with the
size of the parameter set. These approaches focus on
leveraging information about the model space in the
context of Bayesian RL or PAC-style RL, in contrast
to our model-free approach that focuses on regret.
There also exists a wealth of literature on learning
with expert advice (e.g.~\cite{CesaACM1997}). The majority
of this work lies in supervised learning. Prior work
by Diuk et al.~\cite{DiukICML2009} leverages a set
of experts where each expert predicts a probabilistic
concept (such as a state transition) to provide
particularly efficient KWIK RL. In contrast, our
approach leverages input policies, rather than models.
Probabilistic policy reuse~\cite{FernandezAAMAS2006}
also adaptively selects among a prior set of provided policies,
but may also choose to create and follow a new policy.
The authors present promising empirical results but
no theoretical guarantees are provided. However,
we discuss this interesting issue further in the future work section.
The most closely related work is by Talvitie and
Singh~\cite{talvitieIJCAI2007}, who also consider identifying
the best policy from a set of input provided policies.
Talvitie and Singh's approach is a special case of a
more general framework for leveraging experts in
sequential decision making environments where the
outcomes can depend on the full history of states
and actions~\cite{pucciNIPS2004}: however, this more
general setting provides bounds in terms of an
abstract quantity, whereas Talvitie and Singh provide
bounds in terms of the bounds on mixing times of an MDP.
There are several similarities between our
algorithm and the work of Talvitie and Singh,
though in contrast to their approach
we take an optimism under uncertainty approach, leveraging
confidence bounds over the potential average reward of each
policy in the current task. However, the provided bound in their
paper is not a regret bound and no precise expression for the bound is stated, rendering a careful comparison of the theoretical bounds infeasible. In contrast,
we provide a much more rigorous theoretical analysis,
and do so for a more general setting (for example,
our results do not require the MDP to be ergodic).
Their algorithm also involves several parameters whose values
must be correctly set for the bounds to hold, but precise
expressions for these parameters were not provided, making
it hard to perform an empirical comparison.
\section{Future Work and Conclusion}
\label{s:conclusions}
In defining RLPA we preferred to provide a simple
algorithm which allowed us to provide a rigorous theoretical analysis.
Nonetheless, we expect the current version of the algorithm
can be easily improved over multiple dimensions. The immediate
possibility is to perform off-policy learning across
the policies: whenever reward information is received for a particular
state and action, this could be used to
update the average reward estimate $\widehat{\mu}(\pi)$ for all policies
that would have suggested the same action for the given state.
As it has been shown in other scenarios, we expect this could
improve the empirical performance of RLPA. However,
the implications for the theoretical results are less clear.
Indeed, updating the estimate $\widehat{\mu}(\pi)$ of a policy $\pi$
whenever a ``compatible'' reward is observed would correspond to a significant increase in the number of episodes $K(\pi)$ (see Eq.~\ref{eq:empirical.average}). As a result, the convergence rate of $\widehat{\mu}(\pi)$ might get worse and
could potentially degrade up to the point where $\widehat{\mu}(\pi)$ does not even converge to the actual average reward $\mu^\pi$ (see Lem.~\ref{lem:conc:Markov} when $K(\pi) \simeq n(\pi)$).
We intend to further
investigate this in the future.
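As a concrete illustration of this idea, the sharing rule could be sketched as follows. The data structures and function name are hypothetical; this is the proposed extension, not the analyzed algorithm.

```python
def off_policy_update(mu_hat, n, policies, s, a, r):
    """Hypothetical off-policy variant of the average-reward update:
    the reward r observed for action a in state s is credited to every
    policy that would have chosen a in s, not only the executed one.
    mu_hat and n map policy names to running means and sample counts."""
    for name, pi in policies.items():
        if pi(s) == a:  # policy is "compatible" with the observed (s, a)
            n[name] += 1
            mu_hat[name] += (r - mu_hat[name]) / n[name]  # incremental mean
    return mu_hat, n
```

Note that crediting one sample to many policies effectively increases the number of update events per policy, which is precisely the source of the theoretical concern discussed above.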
Another very interesting direction of future work is to
extend RLPA to leverage policy advice when useful, but
still maintain generic RL guarantees if the input policy
space is a poor fit to the current problem. More concretely, if $\pi^+$ is not the actual optimal policy of the MDP, RLPA currently suffers an additional linear regret w.r.t. the optimal policy of order $T(\mu^*-\mu^+)$.
If $T$ is very large and $\pi^+$ is highly suboptimal, the total regret of RLPA may be worse than that of UCRL, which always eventually learns the optimal policy. This opens the question of whether it is possible to design an algorithm that enjoys the small regret-to-best of RLPA when $T$ is small and $\pi^+$ is nearly optimal, while retaining the regret-to-optimal guarantees of UCRL.
To conclude, we have presented RLPA, a new RL algorithm
that leverages an input set of policies. We prove the regret
of RLPA relative to the best policy scales sub-linearly with
the time horizon, and that both this regret and the
computational complexity of RLPA are independent of
the size of the state and action space. This suggests that
RLPA may offer significant advantages in large domains
where some prior \emph{good} policies are available.
\section{Introduction}
Blazars are the most extreme active galactic nuclei (AGNs). Their broadband emission, from radio through $\gamma$-ray, is dominated by nonthermal radiation produced by a relativistic plasma jet aligned with the line of sight \citep{1978bllo.conf..328B}. Their spectral energy distributions (SEDs) show two broad components in the $\log\nu-\log\nu L_{\nu}$ diagram. The lower component peaks at infrared (IR) to X-ray bands and is believed to be the synchrotron emission of relativistic electrons within the jet. The higher component peaks at the $\gamma$-ray band and is thought to be the inverse Compton (IC) emission of the same electron population. Models are classified according to the origin of the IC seed photons: synchrotron-self Compton \citep[SSC, seed photons from the synchrotron radiation, see][]{1981ApJ...243..700K, 1985ApJ...298..114M, 1989ApJ...340..181G, 1992ApJ...397L...5M, 1985ApJ...298..128B} and external Compton \citep[EC, seed photons from an external region, see][]{1992A&A...256L..27D, 1993ApJ...416..458D, 1995ApJ...441...79B, 1994ApJ...421..153S, 2000ApJ...545..107B, 2002ApJ...577...78S}. Blazars are often divided into two subclasses, BL Lacertae objects (BL Lacs) and flat spectrum radio quasars (FSRQs). FSRQs have strong emission lines, while BL Lacs have only very weak emission lines or lack them entirely \citep[equivalent width $<5${\AA}, e.g.,][]{1997A&A...325..109S}.
\citet{1998MNRAS.299..433F} presented a unifying view of the SEDs of blazars, in which both the synchrotron peak luminosity (hereafter $L_{s}\equiv(\nu L_{\nu})_{s}^{p}$) and the Compton dominance (the ratio between Compton and synchrotron luminosities, $CD\equiv L_{C}/L_{s}$) decrease with increasing synchrotron peak frequency (hereafter $\nu_{s}$). \citet{1998MNRAS.301..451G} modeled the broadband SEDs of 51 $\gamma$-ray loud blazars, and showed that in powerful blazars the radiative energy density is large. The effective IC cooling yields lower electron energies and a larger $CD$, and the lower energy electrons emit at lower frequencies. An inverse correlation between $\gamma_{p}$ and $U_{tot}'$ is further derived, where $\gamma_{p}$ is the energy of the electrons emitting at the synchrotron peak, and $U_{tot}'$ is the sum of the magnetic and radiative energy densities within the Thomson regime. In the following works \citep[e.g.,][]{2002A&A...386..833G, 2009MNRAS.399.2041G, 2010MNRAS.402..497G, 2008MNRAS.385..283C}, the $\gamma_{p}$-$U_{tot}'$ inverse correlation is confirmed. The $\nu_{s}-L_{s}$ and/or $\gamma_{p}-U_{tot}'$ anticorrelations are often referred to as the blazar sequence. Large numbers of blazars have been detected by \emph{Fermi}/LAT and compiled as the LAT Bright AGN Sample \citep[LBAS,][]{2009ApJ...700..597A} and the First LAT AGN Catalog \citep[1LAC,][]{2010ApJ...715..429A}. Both LBAS and 1LAC show correlations between the $\gamma$-ray luminosity ($L_{\gamma}$) and photon indices ($\Gamma_{\gamma}$). The photon indices correlate with peak frequencies and the $\gamma$-ray luminosity can roughly represent the peak luminosity \citep[see,][]{2010ApJ...715..429A, 2009arXiv0912.2040A}. Therefore, these results seem to support the blazar sequence.
Many contrary arguments have also been reported \citep{2001ASPC..227..116G, 2004MNRAS.348..937C, 2005MNRAS.356..225A, 2006A&A...445..441N, 2007Ap&SS.309...63P}. They mainly focus on three points. Firstly, many low peak frequency, low power blazars are found, which destroys any significant correlation between $\log\nu_{s}$ and $\log L_{s}$. Secondly, several high peak frequency FSRQs have been reported, in contrast with the correlation mentioned above. The SED properties of these sources were mainly determined from composite spectral indices\footnote{The composite spectral index, $\alpha_{12}$, is usually used to measure the overall trend of the broadband spectrum when more detailed spectral information is lacking. It is defined by $f_{\nu_{1}}/f_{\nu_{2}}=(\nu_{1}/\nu_{2})^{-\alpha_{12}}$, where $f_{\nu_{1,2}}$ are the flux densities at frequencies $\nu_{1,2}$ \citep{1985ApJ...298..630L}.} rather than from broadband SEDs, which introduces uncertainties into the result \citep[see][]{2007Ap&SS.309...63P}. \citet{2008IJMPD..17.1457M} re-studied these FSRQs and found that they do follow the $\log\nu_{s}-\log L_{s}$ sequence. Thirdly, the blazar sequence predicts that blazars with higher peak frequency (mainly BL Lacs) should be more numerous than blazars with lower peak frequency; however, this prediction had not been confirmed. As indicated by \citet{2008MNRAS.387.1669G}, the reason may be that the samples considered are flux limited, introducing a bias against low luminosity/high peak frequency blazars. The \emph{Fermi}/LAT sensitivity is better than that of \emph{EGRET}, especially for harder spectra \citep{2009ApJ...700..597A, 2010ApJ...715..429A}. Very recently, an interesting finding is that the fraction of BL Lacs among $\gamma$-ray blazars increased from \emph{EGRET} to \emph{Fermi}/LAT \citep[see][]{1999ApJS..123...79H, 2009ApJ...700..597A, 2010ApJ...715..429A}. In 1LAC \citep{2010ApJ...715..429A}, the number of BL Lacs is even larger than the number of FSRQs.
The first objection mentioned above is the strongest evidence against the blazar sequence. \citet{2008MNRAS.387.1669G} suggested two possibilities to account for it. The first explanation is that those low $\nu_{s}$, low $L_{s}$ sources may be misaligned; the weak beaming effect would shift blazars toward low peak frequency and low observed luminosity. The second explanation is that sources with low luminosity and low $\nu_{s}$ may be associated with black holes of smaller mass. The jets of these sources will dissipate energy within the broad line region (BLR); the electrons then cool efficiently and emit at low frequency \citep{2008MNRAS.387.1669G}.
The blazar sequence constrains our understanding of jet physics. It relates to jet energy dissipation, particle acceleration, the emission region properties and environments, etc. In this paper, we collect black hole masses and use the quasi-simultaneous broadband SEDs of \emph{Fermi} bright blazars \citep{2009ApJ...700..597A, 2009arXiv0912.2040A} and the SEDs of four \emph{Fermi} detected narrow-line Seyfert 1 galaxies \citep[NLS1,][]{2009ApJ...707L.142A} to study the blazar sequence. In addition, we also study the EC/SSC models.
In Section 2, we describe the sample. Section 3 discusses the relations between our results and the blazar sequence. Section 4 discusses the inverse Compton (IC) models. We summarize and discuss our findings in Section 5. A cosmology with H$_{0}=70$ km s$^{-1}$Mpc$^{-1}$, $\Omega_{m}=0.3$ and $\Omega_{\Lambda}=0.7$ is adopted throughout the paper.
\section{The Sample}
The first three months of \emph{Fermi}-LAT operation revealed more than 100 blazars ($>10\sigma$), named the \emph{Fermi} LAT Bright AGN Sample \citep[LBAS,][]{2009ApJ...700..597A}. \citet{2009arXiv0912.2040A} presented quasi-simultaneous SEDs for 48 LBAS blazars, whose data were collected from radio through $\gamma$-ray within those three months of operation. The IC and synchrotron peak frequencies/fluxes were estimated by fitting the two components with a third-degree polynomial, $\nu F_{\nu}=a\cdot\nu^{3}+b\cdot\nu^{2}+c\cdot\nu+d$. Of these 48 sources, 43 have measured redshifts. The peak luminosities and frequencies (in the AGN frame) of these blazars can be calculated through $L_{s,C}=4\pi d_{L}^{2}\left(\nu f_{\nu}\right)_{s,C}^{p}$ and $\nu_{s,C}=\left(1+z\right)\nu_{s,C}^{p\_obs}$, where $d_{L}$ is the luminosity distance. The results are listed in table \ref{blazar}. Column (1) provides the LAT name of the source. Columns (2) and (3) give the synchrotron peak frequency and luminosity. Columns (4) and (5) give the IC peak frequency and luminosity. The redshift, $\gamma$-ray photon index $\Gamma_{\gamma}$, $\gamma$-ray luminosity $L_{\gamma}$ and the optical classification are listed in columns (6), (7), (8) and (9), respectively. Columns (10) and (11) are the black hole masses and the references. For columns (7), (8), (10) and (11) see below.
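As an illustrative sketch (our own, with hypothetical fit coefficients, not values from the paper), the peak extraction and the transformations above can be written as follows, assuming the cubic is fit in $\log\nu-\log\nu F_{\nu}$ space:

```python
import numpy as np

# Flat LambdaCDM luminosity distance for the cosmology adopted in the paper
# (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7).
def lum_distance_cm(z, H0=70.0, Om=0.3, n=2000):
    c_km_s = 2.99792458e5
    zz = np.linspace(0.0, z, n + 1)
    f = 1.0 / np.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
    integral = np.sum(0.5 * (f[:-1] + f[1:])) * (zz[1] - zz[0])  # trapezoid rule
    dl_mpc = (1.0 + z) * (c_km_s / H0) * integral
    return dl_mpc * 3.0857e24  # Mpc -> cm

# Peak of a cubic y = a x^3 + b x^2 + c x + d, with x = log10(nu) and
# y = log10(nu F_nu); returns the real stationary point with the largest y.
def peak_from_cubic(a, b, c, d):
    roots = np.roots([3.0 * a, 2.0 * b, c])   # zeros of dy/dx
    x = roots[np.isreal(roots)].real
    y = a * x**3 + b * x**2 + c * x + d
    i = np.argmax(y)
    return x[i], y[i]

# Rest-frame peak: nu = (1+z) nu_obs, L = 4 pi d_L^2 (nu f_nu)_peak.
def rest_frame_peak(a, b, c, d, z):
    xp, yp = peak_from_cubic(a, b, c, d)
    return (1.0 + z) * 10.0**xp, 4.0 * np.pi * lum_distance_cm(z) ** 2 * 10.0**yp
```

The sketch assumes the observed peak flux $(\nu f_{\nu})^{p}$ is read off directly from the fitted maximum; a degenerate cubic with no real stationary maximum would require handling not shown here.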
\section{Implications on the Blazar Sequence}
As discussed above, both LBAS and 1LAC show correlations between the $\gamma$-ray photon index $\Gamma_{\gamma}$ and the $\gamma$-ray luminosity $L_{\gamma}$. Because the spectral index correlates with the synchrotron peak frequency \citep[see e.g.,][]{2010ApJ...715..429A}, the correlation between $\Gamma_{\gamma}$ and $L_{\gamma}$ can be taken as evidence supporting the blazar sequence (but see the discussion below). Here we use the peak frequency directly to test the sequence.
Figure \ref{nu_lum} shows the correlation between the peak frequency ($\nu_{s}$) and luminosity ($L_{s}$). Squares denote those 43 sources (the open circles are NLS1s, see below). It can be seen that the luminosity statistically decreases with increasing peak frequency. The solid line shows the best fit (excluding the NLS1s), which gives $L_{s}\propto\nu_{s}^{-0.44\pm0.11}$ and Pearson's $prob$-value (the significance level at which the null hypothesis of zero correlation is disproved) $p=2.06\times10^{-4}$. This is consistent with the studies using $\Gamma_{\gamma}$ and $L_{\gamma}$ \citep[e.g.,][]{2009MNRAS.396L.105G, 2009ApJ...700..597A, 2010ApJ...715..429A} and supports the blazar sequence. However, figure \ref{nu_lum} also shows, in addition to the statistical inverse correlation, some sources with low $\nu_{s}$ and low $L_{s}$, which give the $\log\nu_{s}-\log L_{s}$ plane a wedge shape. This feature has appeared in previous studies, where it yields a less significant correlation between $\log\nu_{s}$ and $\log L_{s}$ and has been taken as evidence against the blazar sequence \citep[e.g.,][]{2001ASPC..227..116G, 2004MNRAS.348..937C, 2005MNRAS.356..225A, 2006A&A...445..441N, 2007Ap&SS.309...63P}.
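The log-log power-law fit described above reduces to a linear regression; the sketch below (our own illustration, with synthetic data standing in for the 43 measured pairs) shows the procedure, since a power law $L_{s}\propto\nu_{s}^{s}$ is a straight line of slope $s$ in $\log\nu_{s}-\log L_{s}$ space:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical (log nu_s, log L_s) pairs standing in for the 43 blazars;
# the real values come from table 1 of the paper.
rng = np.random.default_rng(0)
log_nu = rng.uniform(12.5, 17.0, 43)                     # log10 of nu_s [Hz]
log_L = 52.0 - 0.44 * log_nu + rng.normal(0.0, 0.6, 43)  # log10 of L_s [erg/s]

# Slope, its standard error, and the two-sided p-value of zero correlation.
fit = linregress(log_nu, log_L)
print(f"slope = {fit.slope:.2f} +/- {fit.stderr:.2f}, p = {fit.pvalue:.3g}")
```

With real data the `pvalue` field plays the role of the Pearson $prob$-value quoted in the text.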
Additionally, we present the correlation between the Compton dominance ($CD$) and luminosity ($L_{s}$), which gives $p=0.00307$ (see figure \ref{lum_cd}). This result is consistent with another statement of the blazar sequence, namely an inverse correlation between luminosity and Compton dominance. This is the first time quasi-simultaneous broadband data have been used to confirm this statement. From figures \ref{nu_lum} and \ref{lum_cd}, it is expected that low $\nu_{s}$, low $L_{s}$ sources would have lower $CD$. We also examined the $\nu_{s}$ vs. $CD$ plane (figure not supplied here), which is likewise wedge-shaped. \citet{2008MNRAS.387.1669G} suggested that those low $\nu_{s}$, low $L_{s}$ blazars may be misaligned or have smaller black holes.
If those sources have relatively larger viewing angles, they appear with lower luminosity and lower peak frequency. The Compton and synchrotron peak frequencies depend on the beaming effect in the same way. Therefore, the ratio of the Compton to synchrotron peak frequencies, $r_{Cs}\equiv\nu_{C}/\nu_{s}$, should be independent of the viewing angle, as should the Compton dominance $CD\equiv L_{C}/L_{s}$. The luminosity is proportional to $\delta^{4}$ and the frequency is proportional to $\delta$, where $\delta\equiv1/\left\{\Gamma\left(1-\beta\cos\theta\right)\right\}$ is the beaming factor, $\Gamma=1/\sqrt{1-\beta^{2}}$ is the Lorentz factor, $\beta\equiv\upsilon/c$ is the velocity in units of the speed of light and $\theta$ is the viewing angle. Therefore, it is expected that $r_{Cs}$ and $CD$ will be independent of the parameter $L_{s}\nu_{s}^{1/4}$ if the difference really relies on the beaming effect. Hence, we present the correlation between the parameter $L_{s}\nu_{s}^{1/4}$ and $r_{Cs}$ in figure \ref{beaming_mass_r}. Figure \ref{beaming_mass_cd} shows the correlation between $L_{s}\nu_{s}^{1/4}$ and $CD$. From figure \ref{beaming_mass_r}, we can see that one blazar, 0FGL J1719.3+1746, has an extreme ratio $r_{Cs}$ (the triangle at the top left corner). From the SED of 0FGL J1719.3+1746 \citep[see][]{2009arXiv0912.2040A}, we can see that its IC peak frequency is overestimated. Excluding 0FGL J1719.3+1746, the parameter $L_{s}\nu_{s}^{1/4}$ is correlated with the ratio $r_{Cs}$, albeit with large scatter ($p=0.0218$). A similar result is derived for $L_{s}\nu_{s}^{1/4}$ vs. $CD$ ($p=0.0286$, see figure \ref{beaming_mass_cd}). This does not support the idea that low $\nu_{s}$, low $L_{s}$ sources are misaligned.
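The beaming-factor argument can be sketched numerically (our own illustration; the Lorentz factor and intrinsic peak values below are arbitrary assumptions, not from the paper). Ratios of two quantities carrying the same $\delta$ scaling, such as $CD$ and $r_{Cs}$, cancel the beaming entirely, while $L_{s}\nu_{s}^{1/4}\propto\delta^{17/4}$ tracks it strongly:

```python
import numpy as np

def doppler(Gamma, theta):
    """Beaming factor delta = 1 / [Gamma (1 - beta cos theta)]."""
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    return 1.0 / (Gamma * (1.0 - beta * np.cos(theta)))

# Illustrative comoving-frame peak values (arbitrary numbers).
L_int, nu_int = 1.0e44, 1.0e13
Gamma = 15.0
for theta_deg in (2.0, 5.0, 10.0):
    d = doppler(Gamma, np.radians(theta_deg))
    L_obs = L_int * d**4   # luminosity boosted as delta^4
    nu_obs = nu_int * d    # frequency boosted as delta
    # CD = L_C/L_s and r_Cs = nu_C/nu_s cancel the delta factors completely,
    # while L_s * nu_s^(1/4) scales as delta^(17/4) and so tracks the beaming.
    print(theta_deg, d, L_obs * nu_obs**0.25)
```

Increasing $\theta$ at fixed $\Gamma$ lowers $\delta$, moving a source down the beaming track without changing $CD$ or $r_{Cs}$.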
As suggested by \citet{2008MNRAS.387.1669G}, those low $\nu_{s}$, low $L_{s}$ blazars may instead have smaller black holes, and their jets will dissipate energy within the BLR. This causes efficient cooling of the electrons and yields low peak frequency and low power. A low black hole mass also produces a lower Compton dominance \citep[see][]{2008MNRAS.387.1669G}. To check whether the black hole masses account for those low $\nu_{s}$, low $L_{s}$ blazars, we collect black hole masses from previous works.
Many authors have derived the black hole masses of blazars in different ways \citep[e.g.,][]{2010MNRAS.402..497G, 2002MNRAS.331..111C, 2009RAA.....9.1192C, 2010arXiv1011.5879D, 2003MNRAS.343..505F, 2004ApJ...602..103F, 2003MNRAS.340..632L, 2008MNRAS.385..119W, 2003ApJ...583..134B, 2003ApJ...595..624F, 2001MNRAS.327.1111G, 2006ApJ...637..669L, 2005MNRAS.361..919P, 2004ApJ...615L...9W, 2005ApJ...631..762W, 2002A&A...389..742W, 2004AJ....127...53X, 2005AJ....130.2506X}. From all the papers known to us, we collect black hole masses for 30 of these 43 blazars. Some blazars were studied by several authors, and different masses were derived. To reduce the uncertainty, we preferentially select masses taken from a single paper and derived with a uniform method.
The result is presented in table \ref{blazar}. Columns (10) and (11) are for black hole masses and the references.
The best fit in figure \ref{nu_lum} gives $L_{s}\propto\nu_{s}^{-0.44\pm0.11}$. Therefore, the correlation between the parameter $L_{s}\nu_{s}^{0.44}$ and the black hole masses can be used to check whether these low $\nu_{s}$, low $L_{s}$ blazars have lower masses. Figure \ref{low_mass} presents the result; the best fit indicates $p=0.0344$. Despite the scatter, our result supports the idea that low $\nu_{s}$, low $L_{s}$ blazars have smaller black holes \citep[see][]{2008MNRAS.387.1669G}. In order to find more evidence, we use the broadband SEDs of 4 radio-loud narrow-line Seyfert 1 galaxies (NLS1s) detected by \emph{Fermi}/LAT \citep{2009ApJ...707L.142A} to check the above result. NLS1s are thought to have smaller black holes \citep[e.g.,][and references therein]{2008ApJ...685..801Y}. These 4 radio-loud NLS1s are believed to have central mechanisms similar to those in blazars \citep[see][]{2009ApJ...699..976A, 2009ApJ...707..727A, 2009ApJ...707L.142A}. Therefore, if our above result is correct, these NLS1s should lie in the low $\nu_{s}$, low $L_{s}$ region. We collect the broadband SEDs of these four NLS1s \citep[from \emph{NED}\footnote{http://nedwww.ipac.caltech.edu/} and][]{2009ApJ...707L.142A}. For simplicity, we use a second-order polynomial to fit the synchrotron component in the $\log\nu-\log\nu L_{\nu}$ diagram. The peak frequencies and luminosities are presented in table \ref{NLS1}. We plot these in figure \ref{nu_lum} as open circles. It can be seen that these four sources do have low $\nu_{s}$ and low $L_{s}$ values, supporting our result.
\section{Implications on Inverse Compton Models}
From the discussion in the above section, we know that both the ratio $r_{Cs}$ and the Compton dominance $CD$ correlate with the parameter $L_{s}\nu_{s}^{1/4}$. This indicates that $r_{Cs}$ and $CD$ may correlate with each other, although it is not immediately clear what such a correlation would imply. Figure \ref{EC3} shows the $r_{Cs}$ vs. $CD$ plane. The best fit gives $p=0.00375$ (excluding the blazar 0FGL J1719.3+1746). This is a new result. We will discuss its implications for the emission models (i.e., SSC vs. EC). Whatever conclusion is derived, it holds only statistically. From the following discussion, it will be seen that the EC model predicts this correlation naturally, while the SSC model cannot.
Within a one-zone spherical model, if a single electron population produces the broadband SED of a blazar, the synchrotron peak frequency ($\nu_{s}$) corresponds to a peak electron energy \citep[$\gamma_{p}$ in the $\gamma-\gamma^{3}N_{\gamma}$ diagram,][]{1998ApJ...509..608T},
\begin{equation}\label{eq_syn_fre}
\nu_{s}=\frac{4}{3}\nu_{L}\gamma_{p}^{2}\delta,
\end{equation}
where $\nu_{L}=eB/(2\pi m_{e}c)$ is the Larmor frequency. If the external radiation is prominent at frequency $\nu_{ext}$, the EC component peaks at \citep[inverse Compton scatter within Thomson regime,][]{1970RvMP...42..237B, 1990MNRAS.245..453C, 1998ApJ...509..608T, 2008MNRAS.387.1669G},
\begin{equation}\label{eq_ec_fre}
\nu_{EC}^{p}=\frac{4}{3}\nu_{ext}\gamma_{p}^{2}\Gamma\delta,
\end{equation}
where $\Gamma$ is the jet Lorentz factor. If the EC emission is dominant, the EC and synchrotron luminosities follow \citep{1996MNRAS.280...67G, 1998ApJ...509..608T, 2008MNRAS.387.1669G},
\begin{equation}\label{eq_syn_ec_lum}
\frac{L_{EC}}{L_{sy}}=\frac{U_{ext}'}{U_{B}}\simeq\frac{17}{12}\frac{\Gamma^{2}U_{ext}}{U_{B}},
\end{equation}
where $U_{ext}$ is energy density of external photons in the rest frame of the source, $U_{ext}'\simeq(17/12)\Gamma^{2}U_{ext}$ is that measured in the jet comoving frame, and $U_{B}\equiv B^{2}/8\pi$ is the magnetic field energy density.
Combining equations \ref{eq_syn_fre}-\ref{eq_syn_ec_lum} yields,
\begin{equation}\label{eq_cd_rcs}
\frac{L_{EC}}{L_{sy}}\simeq\frac{17e^{2}}{6\pi m_{e}^{2}c^{2}}\frac{U_{ext}}{\nu_{ext}^{2}}\left(\frac{\nu_{EC}^{p}}{\nu_{s}}\right)^{2}.
\end{equation}
Thus we expect $L_{EC}/L_{sy}\propto\left(\nu_{EC}^{p}/\nu_{s}\right)^{2}$ if the external radiation field is constant.
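The algebra combining equations \ref{eq_syn_fre}-\ref{eq_syn_ec_lum} into equation \ref{eq_cd_rcs} can be verified symbolically; the following sketch (our own consistency check, using sympy) reproduces the cancellation of $\gamma_{p}$, $\delta$ and $B$:

```python
import sympy as sp

e, m_e, c, B, nu_ext, U_ext, g_p, Gam, dlt = sp.symbols(
    'e m_e c B nu_ext U_ext gamma_p Gamma delta', positive=True)

nu_L = e * B / (2 * sp.pi * m_e * c)                     # Larmor frequency
nu_s = sp.Rational(4, 3) * nu_L * g_p**2 * dlt           # eq. (1)
nu_EC = sp.Rational(4, 3) * nu_ext * g_p**2 * Gam * dlt  # eq. (2)
U_B = B**2 / (8 * sp.pi)
lum_ratio = sp.Rational(17, 12) * Gam**2 * U_ext / U_B   # eq. (3)

# eq. (4) as written in the text:
predicted = (sp.Rational(17, 6) * e**2 / (sp.pi * m_e**2 * c**2)
             * U_ext / nu_ext**2 * (nu_EC / nu_s)**2)

diff = sp.simplify(lum_ratio - predicted)
print(diff)  # 0: eqs. (1)-(3) do combine into eq. (4)
```

In the ratio $\nu_{EC}^{p}/\nu_{s}$ the factors $\gamma_{p}^{2}\delta$ cancel, leaving $\nu_{ext}\Gamma/\nu_{L}$, and substituting $\Gamma$ into equation \ref{eq_syn_ec_lum} gives equation \ref{eq_cd_rcs} exactly.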
For SSC, the IC emission depends on the synchrotron emission itself, so no such simple relation between $CD$ and $r_{Cs}$ can be derived.
As suggested by \citet{1998MNRAS.301..451G} \citep[see also][]{1999A&A...341...74H, 2006ApJ...646....8F, 2002A&A...386..833G, 2008MNRAS.385..283C}, the external photons of most blazars are contributed by the BLR, and the BLR radiation field can be taken as approximately uniform, with $U_{BLR}\simeq2.65\times10^{-2}{\rm erg\ cm}^{-3}$ and $\nu_{BLR}\simeq2\times10^{15}$Hz \citep[see][]{2008MNRAS.387.1669G}. In this case, $CD=L_{C}/L_{s}\simeq L_{EC}/L_{sy}$ correlates with $r_{Cs}$. The statistical correlation between $CD$ and $r_{Cs}$ shown in figure \ref{EC3} may therefore suggest that most blazars are EC dominated. However, this is only a qualitative result, because the slope of the best fit ($s\approx0.4$) is not equal to the predicted slope $s=2$. On the other hand, it is interesting to note that if we use the relation $L_{C}/L_{s}\propto\left(\nu_{C}/\nu_{s}\right)^{2}$ to fit the data, the best-fit value $\left(U_{ext}/\nu_{ext}^{2}\right)_{fit}$ does not depart significantly from the BLR value: $\left(U_{ext}/\nu_{ext}^{2}\right)_{fit}\simeq3.2\left(U_{BLR}/\nu_{BLR}^{2}\right)$ (corresponding to the dashed line in figure \ref{EC3}). \citet{2009MNRAS.399.2041G} and \citet{2010MNRAS.402..497G} modeled the SEDs of the \emph{Fermi} bright blazars in detail and suggested that most blazars are EC dominated \citep[see also][]{2009ApJ...704...38S}. Our result is consistent with theirs.
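As a quick numerical check (our own illustration, not part of the paper's fitting), evaluating the normalization of equation \ref{eq_cd_rcs} with the BLR values quoted above gives Compton dominances of order unity for typical FSRQ-like peak-frequency ratios:

```python
import math

# CGS constants
e_cgs = 4.8032e-10    # electron charge [esu]
m_e = 9.1094e-28      # electron mass [g]
c = 2.9979e10         # speed of light [cm/s]

# BLR radiation field adopted in the text
U_blr = 2.65e-2       # erg cm^-3
nu_blr = 2.0e15       # Hz

K = 17.0 * e_cgs**2 / (6.0 * math.pi * m_e**2 * c**2)  # prefactor of eq. (4)

def cd_predicted(r_cs, U_ext=U_blr, nu_ext=nu_blr):
    """Compton dominance predicted by eq. (4) for a given nu_C/nu_s ratio."""
    return K * (U_ext / nu_ext**2) * r_cs**2

print(cd_predicted(1.0e9))  # of order unity for r_Cs ~ 1e9
```

For example, a source with $\nu_{s}\sim10^{13}$ Hz and $\nu_{C}\sim10^{22}$ Hz ($r_{Cs}=10^{9}$) lands near $CD\sim2$ on this relation.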
\section{Discussion}
Because the sample is small, FSRQs and BL Lacs are combined as a single class in our study. Although they can be divided by various criteria \citep[e.g., the Eddington ratio $\dot{m}\sim0.01$, see][and references therein]{2009MNRAS.396L.105G,2009ApJ...694L.107X}, their properties vary continuously. In discussing \emph{Fermi} detected blazars, the terms Low Synchrotron Peaked (LSP), Intermediate Synchrotron Peaked (ISP) and High Synchrotron Peaked (HSP) blazars are sometimes used instead of FSRQs and BL Lacs \citep[e.g.,][]{2009arXiv0912.2040A, 2010ApJ...715..429A}. Throughout this paper we consider them as a single class. With a larger sample, the different subclasses could be studied separately in detail.
Our result for the $\log\nu_{s}$ vs. $\log L_{s}$ plane is similar to that of, e.g., \citet{2007Ap&SS.309...63P}. Although that study is based on large radio- or X-ray-selected samples, while ours is based on a $\gamma$-ray-selected sample, both contain blazars with low $\nu_{s}$ and low $L_{s}$. In the former study, the absence of $\gamma$-ray data does not allow the IC component to be determined, so the properties of the Compton dominance ($CD$) cannot be studied. Their studies and our results indicate that no blazars with both high $\nu_{s}$ and high $L_{s}$ have been detected up to now.
\citet{2009MNRAS.396L.105G} \citep[see also][]{2009ApJ...700..597A} studied the \emph{Fermi} bright blazars and showed the presence of an inverse correlation between $L_{\gamma}$ and $\Gamma_{\gamma}$. As they suggested, lowering the $\gamma$-ray flux threshold will detect blazars with steeper spectral indices and lower luminosities. We note an interesting point: the $\log L_{\gamma}$ vs. $\Gamma_{\gamma}$ plane shows a nearly clean inverse correlation \citep[see][]{2009MNRAS.396L.105G, 2009ApJ...700..597A}, whereas the $\log\nu_{s}$ vs. $\log L_{s}$ plane in this paper (see figure \ref{nu_lum}) shows, in addition to the inverse correlation, some low $\nu_{s}$, low $L_{s}$ blazars. Therefore, one should be careful when stating that the photon index correlates with the peak frequency and that the $\gamma$-ray luminosity traces the peak luminosity. To check this, we calculate the $\gamma$-ray luminosities of those 43 blazars. The formulae we use are similar to those of \citet{2009MNRAS.396L.105G}. The values are presented in Table \ref{blazar} (see columns (7) and (8)). We plot $\log L_{\gamma}$ vs. $\Gamma_{\gamma}$ in figure \ref{lum_gamma}. The plane is similar to that in \citet{2009MNRAS.396L.105G}, showing a clear correlation ($p=2.71\times10^{-5}$).
Our results suggest that it is not the beaming effect but the black hole mass that accounts for the properties of the low $\nu_{s}$, low $L_{s}$ blazars. In drawing this conclusion, some caveats should be noted. From figures \ref{beaming_mass_r} and \ref{beaming_mass_cd}, it can be seen that neither correlation is tight; both show large scatter. This means that the beaming effect may also play a role, although it does not determine the nature of the low $\nu_{s}$, low $L_{s}$ sources. Many radio galaxies have been detected by \emph{Fermi}/LAT. Within the unified model of radio-loud AGNs, radio galaxies are the parent population of blazars but with large viewing angles. Figure 24 of \citet{2010ApJ...715..429A} presents the correlation between the $\gamma$-ray photon spectral index and the $\gamma$-ray luminosity, including the radio galaxies. It can be seen that radio galaxies have lower luminosities and, on average, softer spectra than blazars. This is qualitatively consistent with the hypothesis that misaligned sources have lower luminosity and lower peak frequency. Black hole masses are collected for 30 of the 43 blazars. These blazars show a significant correlation between the luminosity $L_{s}$ and the black hole mass ($p=3.75\times10^{-4}$, figure \ref{lum_mass}), and also present an inverse correlation between the peak frequency $\nu_{s}$ and the black hole mass ($p=3.44\times10^{-3}$, figure \ref{nu_mass}). This indicates that the high peak frequency blazars have lower black hole masses. Through the correlation between the black hole mass and $L_{s}\nu_{s}^{0.44}$, we showed that the low $\nu_{s}$, low $L_{s}$ blazars may have smaller black hole masses. The slope ($s=0.44$) is derived from the best fit. As we showed, the $\log\nu_{s}-\log L_{s}$ plane is rather wedge-shaped, and the upper boundary of the wedge seems steeper than $s=0.44$ (see figure \ref{nu_lum}).
On the other hand, if we linearly fit the $\log\nu_{s}-\log L_{s}$ plane excluding the low $\nu_{s}$, low $L_{s}$ blazars, the fitted slope will be steeper than $s=0.44$. We therefore choose a steeper slope ($s=0.6$) and correlate the parameter $L_{s}\nu_{s}^{0.6}$ with the black hole mass. The result shows a very poor correlation ($p=0.2$). Therefore, it seems that lower black hole masses can account for these low $\nu_{s}$, low $L_{s}$ blazars, but their nature cannot be definitely determined. To check the results, a larger sample is needed. 1LAC \citep{2010ApJ...715..429A} supplies a huge amount of data, which can help determine the properties of the IC component. The multi-band SEDs can be derived from ground- and space-based observatories, and the black hole masses can be derived using a uniform method. The quasi-simultaneous SED information for such a sample would probably be less complete than for ours, but its richness will yield interesting results.
If those blazars really have smaller black holes, this does not support the $\nu_{s}-L_{s}$ inverse correlation of the sequence, but it is still consistent with the $\gamma_{b}-U_{tot}$ inverse correlation. Here, we call $\nu_{s}-L_{s}$ the phenomenological sequence and $\gamma_{b}-U_{tot}$ the theoretical sequence \citep[see][]{2008MNRAS.387.1669G}. As suggested by \citet{2008MNRAS.387.1669G}, blazars with smaller black holes can have their jet energy dissipated within the BLR. Following the theoretical sequence, the high-energy electrons in the jet then suffer stronger cooling, resulting in a smaller $\gamma_{b}$, a lower synchrotron peak frequency and a lower luminosity. So, our result can be regarded as a departure from the phenomenological sequence, but consistent with the theoretical sequence. The $\gamma_{b}-U_{tot}$ relation has different slopes in different studies, ranging from $1/2$ to $1$ \citep[see][]{2008MNRAS.385..283C, 1998MNRAS.301..451G, 2002A&A...386..833G, 2009MNRAS.399.2041G, 2010MNRAS.402..497G}. The physical reason for this relation is not clear. \citet{2002A&A...386..833G} suggested that $\gamma_{b}\propto U_{tot}'^{-1}$ implies a constant cooling time at the peak frequency, which may correspond to a constant light crossing time, while the relation $\gamma_{b}\propto U_{tot}'^{-1/2}$ may denote a constant heating rate \citep[see][]{1999AN....320..232G}.
The correlation between the ratio $r_{Cs}$ and $CD$ is a new result. These two parameters are independent of redshift and beaming effects. They may be related to the jet conditions and radiative processes. Within the leptonic model, the relation between the IC and synchrotron components indicates the relative importance of EC to SSC, at least statistically. Here we have offered an explanation: it may be the result of EC dominance. This is consistent with detailed SED modeling \citep[see][]{2009MNRAS.399.2041G, 2010MNRAS.402..497G}. Some blazars show long-term outbursts. For a given blazar, the emission regions in different outburst/quiescent states may be surrounded by a similar external radiation field, e.g., BLR photons. In this case, the EC and synchrotron emissions will follow equation \ref{eq_cd_rcs}. For some extreme blazars, e.g., 3C 279, SEDs at different outburst/quiescent states combined with equation \ref{eq_cd_rcs} would yield interesting results. The caveat is that the equation is derived from a one-zone spherical model. Enlarging the sample to check the above correlation is of course needed.
In summary, we have presented the $\log\nu_{s}-\log L_{s}$ plane for bright \emph{Fermi} blazars. The plane shows a statistical inverse correlation, but some low $\nu_{s}$, low $L_{s}$ blazars appear. These blazars may be characterized by relatively smaller black hole masses rather than by weaker beaming.
The ratio $r_{Cs}$ correlates with the Compton dominance $CD$. This may indicate that in most blazars the high-energy emission is dominated by the external Compton process.
\acknowledgments
We thank the anonymous referee for insightful comments and constructive suggestions. We are grateful to Xinwu Cao, Yi Liu, Hongtao Liu and Fan Li for helpful discussions. We acknowledge support from the National Natural Science Foundation of China (Grant Nos. 10903025, 10778702, 10973034 and 10833002) and the 973 Program (Grant No. 2009CB824800).
\newpage
Last week, the Vermont School Boards Insurance Trust (VSBIT) and SchoolDude hosted a facility best practices seminar in Berlin, VT for Vermont schools. The day was a huge success, with districts from across the state in attendance to learn how to improve preventive maintenance, lower facility costs and network with neighboring districts.
VSBIT and SchoolDude work closely together to help Vermont districts operate their facilities more efficiently and at lower cost. VSBIT offers cost-effective risk management services, including professional development opportunities and consulting in physical plant management, deferred maintenance, playground safety and energy management. As the leader in education operations, SchoolDude's solutions fit well with the services VSBIT offers.
Our joint seminar included presentations from VSBIT, SchoolDude and select Vermont school districts to help attendees develop a systematic approach to facilities management, understand how an effective preventive maintenance program can begin tackling the deferred maintenance backlog, and pick up tips for implementing a comprehensive maintenance program.
Read the top 4 tips for implementing a PM program in your department!
And for more information on the seminar or SchoolDude and VSBIT's relationship, you can contact us here.
Which has a greater impact on student achievement -school funding or condition of school facilities?
How does your school's physical environment affect students?
Q: Pictures cut off in WordPress / thumbnails not fitting
As you can see in the picture, when I upload my product images to WordPress, the heads of the models are cut off and I cannot see the full image in the thumbnails. Does anyone have a solution for this issue?
A: You can generate thumbnails without cropping: register a custom thumbnail size in functions.php and then use it in your templates, like this:
in functions.php:
add_action( 'after_setup_theme', 'wpdocs_theme_setup' );
function wpdocs_theme_setup() {
    add_image_size( 'my-thumb', 50 ); // 50 pixels wide, unconstrained height, soft proportional resize (no crop)
}
then display it in your template with:
the_post_thumbnail( 'my-thumb' );
Note that the new size only applies to images uploaded afterwards; you may need to regenerate thumbnails for existing images.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 6,593
|
\section{INTRODUCTION}
Collisionless shocks in supernova remnants (SNRs) are believed to produce the
majority of Galactic cosmic rays (CRs), at least up to the so-called ``knee'' near
$10^{15}$~eV \citep[see][for a recent review]{Hillas2005}. While there is little
doubt from the synchrotron\ interpretation of radio observations that young SNRs produce GeV
electrons, and this is probably true for TeV electrons as well from the
interpretation of nonthermal X-rays, there is as yet no unambiguous direct evidence
that SNRs produce relativistic\ ions.
This is somewhat paradoxical considering that the observed electron to proton ratio
in CRs is $\sim 0.01$ and virtually all models of diffusive particle acceleration in
collisionless shocks, the most cited mechanism for producing CRs, predict that ions
receive far more energy than electrons \citep[see, for example,][and references
therein]{BaringEtal99}. Relativistic electrons, of course, radiate far more
efficiently than do ions, leaving open the possibility that a large majority of the
energy in relativistic\ particles in SNRs lies in hard to see ions.
In this paper, we model SNR evolution coupled with the efficient production of CRs
\citep[our so-called CR-hydro model, e.g.,][]{EDB2004} and make a number of
predictions for the synchrotron\ emission from electrons which will be influenced by the
presence of otherwise unseen relativistic\ ions.
For a recent summary of observations and models of synchrotron\ emission
in SNRs, see \citet{CassamEtal2005} which addresses many of the
issues discussed here using a self-similar approach.
In order to power CRs, the shocks in SNRs must be capable of
placing $\sim 10$\% of the supernova (SN) explosion energy into
relativistic\ ions over their lifetime \citep[e.g.,][]{Drury83,BE87}. In
fact, the strong shocks in young SNRs may be far more efficient
than this \citep[e.g.,][]{Ellison2000,HRD2000,DEB2000} and place
enough energy in relativistic\ particles so that nonlinear feedback
effects modify the shock structure, the evolution of the remnant,
and the radiative properties \citep[e.g.,][]{BEK96,DEB2000}.
As we show below, structural changes produced by DSA translate into changes in synchrotron\
emission that are large enough to be investigated with modern, high spatial
resolution, radio and X-ray observatories.
In particular, we calculate the synchrotron\ emission profiles for typical shell-type Ia and
II supernova parameters and show how these profiles provide important constraints on
the underlying particle acceleration mechanism and magnetic field structure.
Particle acceleration
influences the SNR {evolution}
because relativistic\ particles
produce less pressure for a given energy density than do non-relativistic\
particles.\footnote{This follows since the ratio of specific heats, $\Gamma$,
decreases as particles become relativistic\ and the pressure $P=(\Gamma -1)e$, where $e$ is
the energy density.}
Therefore, when relativistic\ particles are produced and/or energetic particles escape from
the shock system, the shocked gas becomes more compressible,
i.e., it acts as if it has a softer equation of state and the remnant hydrodynamics
are modified.
The softer effective equation of state means that compression ratios well in excess
of four can be produced in non-radiative, collisionless shocks
\citep[e.g.,][]{BE87,JE91}, and since the energy going into relativistic\ particles is drawn
from the shock-heated thermal population, the temperature of the shocked gas can be
much less than that expected from test-particle\ shock acceleration
\citep[e.g.,][]{Ellison2000,DEB2000}.
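The test-particle baseline that efficient DSA exceeds follows from the Rankine-Hugoniot jump conditions; the sketch below (our own illustration) shows how a softer equation of state raises the strong-shock compression limit from 4 (non-relativistic gas, $\Gamma=5/3$) toward 7 (fully relativistic gas, $\Gamma=4/3$), with nonlinear acceleration and particle escape pushing it higher still:

```python
def compression_ratio(mach, gamma):
    """Rankine-Hugoniot density jump across a hydrodynamic shock of sonic
    Mach number `mach` for a gas with adiabatic index `gamma`."""
    return (gamma + 1.0) * mach**2 / ((gamma - 1.0) * mach**2 + 2.0)

# Strong-shock limit r -> (gamma + 1)/(gamma - 1):
# 4 for a non-relativistic gas (5/3), 7 for a fully relativistic gas (4/3).
for g in (5.0 / 3.0, 4.0 / 3.0):
    print(g, compression_ratio(1.0e6, g))
```

This simple single-fluid formula does not capture the nonlinear shock modification or escape terms of the CR-hydro model; it only illustrates why a more compressible shocked gas yields larger ratios.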
In addition to modifying the
{evolution}
and the temperature of the shocked gas,
changes in the compression of the fluid should result in changes in the compression
of the magnetic field implying that synchrotron\ emission from relativistic\ electrons will
vary strongly with the efficiency of DSA and the orientation and strength of the
magnetic field.
Perhaps the most important morphological aspect of this CR-hydro coupling is that the
ratio of the forward shock radius, $R_\mathrm{FS}$, to the radius of the contact discontinuity, $R_\mathrm{CD}$, may be
much less than in the test-particle\ case \citep[see][]{DEB2000,EDB2004}.
If, as is generally believed, shocks put far more energy into accelerated ions than
electrons, it is the efficient production of cosmic ray {\it ions} that reduces
$R_\mathrm{FS}/R_\mathrm{CD}$ from test-particle\ values. However, since the interaction region between the forward
shock (FS) and the contact discontinuity\ (CD) can sometimes be estimated or determined with modern
X-ray telescopes (SN 1006 is an example where the CD is not seen), radiating
electrons can reveal the presence of these otherwise unseen relativistic\ ions.
Another clear morphological prediction from efficient DSA discussed below is that
radial profiles of X-ray emission will be strongly peaked and form sheet-like
structures at the FS. This effect comes largely from the large shock compression
ratios which compress the magnetic field behind the FS and result in severe radiative
losses for electrons producing X-rays. Without efficient particle acceleration, the
radial profiles of X-rays will be smoother and more closely resemble those for radio
emission.
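The very different behavior of the radio and X-ray profiles follows from the synchrotron critical frequency, $\nu_c \sim 3\gamma^2 e B/(4\pi m_e c)$: for a given field, GHz and keV photons trace electrons of vastly different energies, and only the latter cool rapidly. The sketch below makes this mapping explicit (Gaussian cgs units; the 30\,$\mu$G field is an assumed illustrative value, not one of the model fields).

```python
import math

# Lorentz factor of electrons whose synchrotron critical frequency
# nu_c ~ 3 gamma^2 e B / (4 pi m_e c) equals a given observing frequency
# (Gaussian cgs; order-unity pitch-angle factors ignored). The field value
# is an illustrative assumption, not output of the CR-hydro model.

E_CHARGE = 4.803e-10   # esu
M_E = 9.109e-28        # g
C = 2.998e10           # cm/s

def gamma_for_frequency(nu_hz, b_gauss):
    coef = 3.0 * E_CHARGE * b_gauss / (4.0 * math.pi * M_E * C)
    return math.sqrt(nu_hz / coef)

b = 3.0e-5  # 30 micro-Gauss
print(gamma_for_frequency(1.0e9, b))    # 1 GHz radio: gamma of a few 10^3
print(gamma_for_frequency(2.4e17, b))   # ~1 keV X-rays: gamma of a few 10^7
```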
In addition to the radio and X-ray profiles in the interaction region between the CD
and the FS, we calculate the emission in the FS precursor. We show that the structure
of the X-ray precursor depends strongly on assumptions made for the magnetic field
compressibility. If the magnetic field is compressed substantially at the FS, as is
likely, the ratio of X-ray intensity immediately upstream of the shock to that at the
FS drops dramatically. In this case, line-of-sight projection effects produce
profiles that are fully consistent with the extremely short scale heights seen in SN
1006 by \citet{BambaEtal2003} or \citet{LongEtal2003}, even though TeV electrons with
long diffusion lengths are present.
We conclude for this particular remnant, as did \citet{BKV2003}, that CR ions are
being efficiently produced and their presence is revealed by radiating electrons.
We note that the strong magnetic fields we describe at the FS are produced by
compression not from magnetic field amplification resulting from cosmic-ray streaming
instabilities, such as predicted by \citet{BL2001}. Magnetic field amplification at
the FS is not included in our model.
\section{CR-HYDRO MODEL}
We calculate the hydrodynamic evolution of a SNR coupled to efficient DSA with a
radially symmetric model described in detail in \citet{EDB2004} and references
therein. We do not consider CR production at the reverse shock since we assume the
magnetic field in the ejecta is the frozen-in field from the SN progenitor and, as
such, will be too small to produce significant particle acceleration or non-thermal
emission without large enhancement factors \citep[see][for a discussion of efficient
DSA at reverse SNR shocks]{EDB2005}.
Any realistic model of a SNR will have several parameters for both the environment
and the physical processes controlling the evolution and particle acceleration. Here,
we concentrate on changes in the SNR evolution and emission produced by CR
production, and choose two fairly distinct models as prototypes, one with parameters
typical of type Ia SNe\ and the other with parameters likely those of type II
SNe.
These models differ by the initial density profile in the ejecta\footnote{Since we
do not consider acceleration at the reverse shock, the different ejecta composition in
type Ia and type II SNe\ is not important. For a discussion of how composition might
influence DSA, see \citet{EDB2005}.}
and the density and magnetic field profiles in the ambient medium.
Within these models, we investigate the effects of varying the CR production
efficiency and the magnetic field structure.
\subsection{Type Ia prototype}
For our type Ia prototype, we assume the density profile of the ejecta material is
exponential \citep{DC98}, the total ejecta mass is $\Mej=1.4\,\Msun$, the explosion
energy is $\EnSN=10^{51}$ erg, and a uniform ambient medium density, $n_p$, with a
temperature of $T_0 = 10^4$ K. Here $n_p$ is the proton number density, and we
assume there is an additional 10\% contribution of helium nuclei.
We assume the magnetic field in the interstellar medium (ISM), $B_0$, is also
constant and take $B_0=10^{-5}$ G as a default value. We typically view our type Ia
models at an age $\tSNR=400$ yr, similar to the age of Tycho's SNR, when the shock
speed is roughly $4000$\,km s$^{-1}$.
\subsection{Type II prototype}
For our type II prototype, we assume the initial density profile of the ejecta
material is a power law in radius, $\rhoEj \propto r^{-n}$, with a constant density
plateau region at small radii \citep[e.g.,][]{Arnett88}. We take $n=9$ in all of our
type II models. For the total ejecta mass we take $\Mej=2\,\Msun$,
and the explosion
energy is set to $\EnSN=3\xx{51}$ erg \citep{LH03, CO03}. The density of the pre-SN
wind is taken as $\rhoWind = A r^{-2}$, where $A=dM/dt/(4 \, \pi \, v_\mathrm{w})$, $dM/dt$
is the mass loss rate, and $v_\mathrm{w}$ is the wind speed (both assumed constant). We
use typical values $v_\mathrm{w}=20$ km s$^{-1}$, $dM/dt = 2\xx{-5}\,\Msun$ yr$^{-1}$
\citep{CO03}, and take a constant wind temperature $\Twind=10^4$ K.
Following \cite{CL94}, we assume the unshocked magnetic field in the
pre-SN wind is
\begin{equation} \label{Bwind}
B_{0}(r) = \left( \sigma_\mathrm{w} \, v_\mathrm{w} \, dM/dt \right)^{1/2} /
\, r \ ,
\end{equation}
or
\begin{eqnarray}
B_{0}(r) &=& 2.6 \left( \frac{\sigma_\mathrm{w}}{0.1} \right)^{1/2}
\left( \frac{v_\mathrm{w}}{10 \, \mathrm{km/s}} \right)^{1/2}\,
\\ \nonumber & \times & \left( \frac{dM/dt}{10^{-5} \, \mathrm{M}_{\odot}\, \mathrm{/
yr}}
\right)^{1/2} \, \left( \frac{r}{1 \, \mathrm{pc}} \right)^{-1}\,
\mu\mathrm{G}
\ ,
\end{eqnarray}
where $\sigma_\mathrm{w}$ is the constant ratio of magnetic field energy density to
kinetic energy density in the wind.
This expression assumes that the magnetic field is frozen in the constant stellar
wind and is only valid in the equatorial plane for distances $r$, much greater than
the radius of the pre-SN star.\footnote{Equation~(\ref{Bwind}) only applies if
the forward shock has not reached the stellar wind termination shock. We assume the
forward shock is within the bubble in all of the examples discussed here.}
Off the plane, $B(r)$ will fall off more rapidly than $1/r$,
but we ignore this effect in our spherically symmetric models. The
value of $\sigma_\mathrm{w}$ for stars other than the sun is not well known
but, for concreteness, we take $\sigma_\mathrm{w}=0.1$.
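As a numerical check of equation~(\ref{Bwind}) and the coefficient in the scaled expression that follows it, the sketch below evaluates $B_0(r)$ in cgs units at the fiducial values quoted there ($\sigma_\mathrm{w}=0.1$, $v_\mathrm{w}=10$ km s$^{-1}$, $dM/dt=10^{-5}\,\Msun$ yr$^{-1}$, $r=1$ pc).

```python
import math

# Numerical check of the wind magnetic field B_0(r) = sqrt(sigma_w * v_w *
# dM/dt) / r in Gaussian cgs units, evaluated at the fiducial scalings
# quoted in the text; this should reproduce the ~2.6 micro-Gauss
# coefficient of the scaled expression.

MSUN = 1.989e33   # g
YR = 3.156e7      # s
PC = 3.086e18     # cm

def wind_field(r_cm, sigma_w, v_w_cms, mdot_g_per_s):
    return math.sqrt(sigma_w * v_w_cms * mdot_g_per_s) / r_cm  # Gauss

mdot = 1.0e-5 * MSUN / YR     # 1e-5 Msun/yr in g/s
b0 = wind_field(PC, 0.1, 10.0e5, mdot)
print(b0 * 1.0e6)  # ~2.6 micro-Gauss
```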
We typically view our type II models at $\tSNR=400$ yr, to match our type Ia models
and for comparison to SNR Cassiopeia A (Cas A), when the shock speed is roughly
$6000$\,km s$^{-1}$.
\subsection{Acceleration model}
For the diffusive shock acceleration process, we use the algebraic model of
\citet{BE99} and \citet{EBB2000} where the injection efficiency is parameterized and
the superthermal spectrum, $f(p)$, is a broken power law, $f_\mathrm{PL}(p)$, with an exponential
turnover at high momenta, $f(p) \propto f_\mathrm{PL}(p) \exp{(-p/p_\mathrm{max})}$.
The algebraic model solves the nonlinear\ DSA problem at each time step of the hydro
simulation given the shock speed, shock radius, ambient density and temperature, and
ambient magnetic field determined in the simulation. With the accelerated
distribution, an effective ratio of specific heats is calculated and used in the
hydrodynamic equations, completing the coupling between the two \citep[see][for a
full discussion]{EDB2004}.
The injection parameter,
$\etainj$, is the fraction of total protons injected into the DSA process and values
$\etainj \gtrsim 10^{-4}$ typically yield efficient particle acceleration rates where
$10 \%$ to $99\%$ of the available energy flux goes into relativistic\ protons.\footnote{Note
the difference between the fraction of protons injected into the acceleration
process, $\etainj$, and the acceleration efficiency. The acceleration efficiency is
the fraction of total energy flux going into relativistic\ particles including all ions and
electrons. Given $\etainj$ and the other shock parameters, the electron spectrum is
determined with two additional parameters, the electron to proton ratio at relativistic\
energies, $(e/p)_{\mathrm{rel}}$, and the electron to proton temperature ratio immediately behind
the shock, $T_e/T_p$ \citep[see][for a full discussion]{EBB2000}.}
The maximum momentum, $p_\mathrm{max}$, is determined by setting the acceleration time equal
to the SNR age $\tSNR$ or, by setting the diffusion length of the highest energy
particles equal to some fraction, $f_\mathrm{sk}$, of the shock radius $R_{\mathrm{sk}}$, whichever gives
the lowest $p_\mathrm{max}$ \citep[see, for example,][]{BaringEtal99}.
In all of the models presented here we take $f_\mathrm{sk}=0.05$. We assume Bohm diffusion so
that the scattering mean free path, $\lambda$, is on the order of the gyroradius,
$r_g$, i.e., $\lambda$ = $\eta_\mathrm{mfp} r_g$ with $\eta_\mathrm{mfp}= 1$ and $r_g=pc/(qB)$. Here,
$p$ and $q$ are the particle momentum and charge, respectively, $B$ is the magnetic
field at the acceleration site, and $c$ is the speed of light.
Note that while our estimate of $p_\mathrm{max}$ requires a specific assumption for the mean
free path, the acceleration model itself only assumes that the scattering mean free
path is a strongly increasing function of momentum. In the absence of radiative
losses, the maximum kinetic energy particles receive in DSA depends only on the
particle charge and $p_\mathrm{max}$ is the same for protons and electrons as long as both
are relativistic.
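For the size-limited case, the scalings above give a closed-form estimate: with Bohm diffusion ($\eta_\mathrm{mfp}=1$) and $v \simeq c$, setting $\kappa/V_{\mathrm{sk}} = f_\mathrm{sk} R_{\mathrm{sk}}$ yields $p_\mathrm{max} c = 3 q B f_\mathrm{sk} R_{\mathrm{sk}} V_{\mathrm{sk}}/c$. The sketch below evaluates this relation; the shock radius and speed used are illustrative values only, not output of the CR-hydro model.

```python
# Size-limited maximum energy estimate: with Bohm diffusion (eta_mfp = 1)
# and v ~ c, kappa = r_g * v / 3 ~ p c^2 / (3 q B), and setting the
# diffusion length kappa / V_sk equal to f_sk * R_sk gives
# E_max = p_max * c = 3 q B f_sk R_sk V_sk / c. The radius and speed
# below are illustrative assumptions, not model output.

E_CHARGE = 4.803e-10   # esu
C = 2.998e10           # cm/s
PC = 3.086e18          # cm
ERG_TO_TEV = 0.62415   # 1 erg ~ 0.624 TeV

def e_max_size_limited(b_gauss, f_sk, r_sk_cm, v_sk_cms):
    return 3.0 * E_CHARGE * b_gauss * f_sk * r_sk_cm * v_sk_cms / C  # erg

e_max = e_max_size_limited(1.0e-5, 0.05, 2.5 * PC, 4.0e8)
print(e_max * ERG_TO_TEV)  # a few tens of TeV for these values
```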
\subsection{Synchrotron emission and losses}
As the forward shock overtakes fresh ambient medium material, the shock accelerates
these particles and produces a nonthermal distribution as described in detail in
\citet{EDB2004} and \citet{EDB2005}.\footnote{We ignore pre-existing CRs and inject
and accelerate only thermal particles overtaken by the shock.}
Once the particle distribution is produced in a shell of material at the shock, it is
assumed to remain in that shell as the shell convects and evolves behind the shock.
During the evolution, particles experience adiabatic and synchrotron\ losses and these
losses are calculated as in \citet{Reynolds98}.
In calculating the synchrotron\ emission and losses, we evolve the magnetic field as
described, for example, in \citet{RC81} or \citet{Reynolds98}. Consider a fluid
element which is now at position $r$ with density $\rho(r)$.
At an earlier time, this fluid element was shocked at a position $r_i$ where the
density immediately behind the shock was $\rho_2$. The radial and tangential
components of the field immediately behind the shock at $r_i$, were $B_{2r}$ and
$B_{2t}$, respectively.
If the magnetic flux is frozen in the fluid, the field at the downstream position,
$r$, is given by
\begin{equation} \label{Bevo}
B^2(r) = B^2_{2r} \left ( \frac{r_i}{r} \right )^4 +
B^2_{2t} \left ( \frac{\rho(r)}{\rho_2} \right )^2
\left ( \frac{r}{r_i} \right )^2
\ .
\end{equation}
For the magnetic field configuration across the shock, we assume either that
$B_2=B_0$, as in a parallel shock, or that the field is fully turbulent upstream and,
following \citet{VBKR2002}, set the immediate downstream magnetic field
\begin{equation} \label{B_comp}
B_2= \sqrt{1/3 + 2 \Rtot^2/3}~B_0
\ ,
\end{equation}
where $\Rtot$ is the shock compression ratio.\footnote{Here and
elsewhere the subscript 0 (2) indicates values immediately ahead of (behind) the
shock.}
Note that $B_2$ does not include any amplification effects such as described by
\citet{BL2001}.
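Equations~(\ref{Bevo}) and (\ref{B_comp}) are straightforward to evaluate; the illustrative sketch below implements both and recovers the familiar $B_2/B_0=\sqrt{11}\simeq 3.3$ for a fully turbulent upstream field at a test-particle compression of $\Rtot=4$.

```python
import math

# Direct evaluation of the turbulent-field jump condition and the
# flux-frozen downstream field evolution from the text (illustrative
# sketch only; all arguments are dimensionless example values).

def b_shocked(b0, r_tot):
    """B_2 for a fully turbulent upstream field at compression r_tot."""
    return b0 * math.sqrt(1.0 / 3.0 + 2.0 * r_tot**2 / 3.0)

def b_evolved(b2r, b2t, r_i, r, rho_over_rho2):
    """Field of a fluid element shocked at r_i, now at r with density
    ratio rho(r)/rho_2, assuming frozen-in magnetic flux."""
    return math.sqrt(b2r**2 * (r_i / r)**4 +
                     b2t**2 * rho_over_rho2**2 * (r / r_i)**2)

print(b_shocked(1.0, 4.0))  # sqrt(11) ~ 3.32 for a test-particle shock
# A purely radial field decays as (r_i/r)^2 under expansion:
print(b_evolved(1.0, 0.0, 1.0, 2.0, 1.0))  # 0.25
```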
Using $B(r)$ obtained in eq.~(\ref{Bevo}), the evolution of the electron distribution
under combined adiabatic and synchrotron\ losses is calculated and, at the end of the
simulation, the synchrotron\ emission in each shell is determined as in
\citet{BaringEtal99}.\footnote{In calculating electron losses, we include inverse-Compton\
losses off the cosmic microwave background radiation as described in
\citet{BaringEtal99}. For protons, radiative losses are unimportant for typical SNR
magnetic fields.}
In Fig.~\ref{fp_No_loss} we show electron momentum phase-space distribution
functions, $f(p)$, for a type Ia SNR model discussed more fully in
Section~\ref{results} below. In each panel, the dashed curve is the distribution
calculated immediately after production at the age indicated (i.e., at $t_\mathrm{shock}$) and
the solid curve is this distribution at the end of the simulation (i.e., at
$\tSNR=1000$ yr) after experiencing adiabatic and synchrotron\ losses. In the top two
panels, the dot-dashed curves show the electron distribution at $\tSNR=1000$ yr when
only adiabatic losses are included.
The shock accelerated distribution, before losses, is a broken power law above a
thermal distribution with an exponential cutoff at the maximum momentum \citep[i.e.,
eq.~12 in][with $\alpha=1$]{EBB2000}.
Adiabatic losses affect all particles
(shifting the entire distribution to lower momenta, i.e., $p
\propto \rho^{1/3}$), while synchrotron\ losses
mainly influence the
highest energy electrons. For the parameters of this model, the
highest momentum electrons accelerated at early times are strongly
depleted and a distinct synchrotron\ bump is observed just below the
sharp maximum momentum cutoff.
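This depletion reflects the standard synchrotron cooling time, $t_\mathrm{syn} = 6\pi m_e c/(\sigma_T B^2 \gamma)$. The sketch below (with an assumed 100\,$\mu$G compressed field, chosen for illustration rather than taken from the models) shows that X-ray emitting electrons cool in decades while radio-emitting electrons are effectively loss-free over the SNR lifetime.

```python
import math

# Standard synchrotron cooling time t_syn = 6 pi m_e c / (sigma_T B^2 gamma)
# in Gaussian cgs. The 100 micro-Gauss field is an assumed illustrative
# value for a strongly compressed downstream field, not a model result.

SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
M_E = 9.109e-28       # electron mass, g
C = 2.998e10          # speed of light, cm/s
YR = 3.156e7          # s

def t_syn_yr(gamma, b_gauss):
    return 6.0 * math.pi * M_E * C / (SIGMA_T * b_gauss**2 * gamma) / YR

b = 1.0e-4  # 100 micro-Gauss
print(t_syn_yr(3.0e3, b))  # radio-emitting electrons: >> SNR age
print(t_syn_yr(4.0e7, b))  # X-ray-emitting electrons: tens of years
```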
The heavy-weight dotted curve in the bottom panel of Fig.~\ref{fp_No_loss} is the
electron distribution at the end of the simulation summed over the interaction region
between the contact discontinuity\ and the forward shock. For comparison we show with the light-weight
dotted curve the summed proton distribution at the end of the simulation. For this
example, the electron to proton ratio at relativistic\ energies, $(e/p)_{\mathrm{rel}}$, is set to 0.01,
similar to that of Galactic cosmic rays, and the electron to proton temperature ratio
immediately behind the shock, $T_e/T_p$, is set to 1 \citep[see][for fuller
discussion of these parameters]{EBB2000}. The difference between the electron and
proton spectra in the bottom panel of Fig.~\ref{fp_No_loss} illustrates how DSA
typically puts far more energy into protons than electrons.
\subsection{Upstream precursor}
The algebraic acceleration model of \citet{BE99} does not explicitly include the
geometry of the shock precursor. However, we can estimate the precursor upstream of
the forward shock in the following way.
At any particular time,
the proton distribution in the outermost shell, $f_\mathrm{p}(p)$, produces the precursor.
We assume that the protons of momentum $p$ in this distribution ``feel'' a flow speed
$u(z)$ and magnetic field $B(z)$, where $z$ is the diffusion length, $L_D(p)$, measured
upstream from the FS. The diffusion length $L_D(p) = \kappa(p)/u(z)$, where
$\kappa=\lambda v/3$ is the diffusion coefficient, $v$ is the particle speed, and
$u(z)$ is the flow speed at $z$ measured in a frame at rest with the shock.
We use information from $f_\mathrm{p}(p)$ to estimate $u(z)$ and $B(z)$ and obtain
$L_D(p)$. Because of shock smoothing, the compression ratio in the FS that produced
$f_\mathrm{p}(p)$ ranges from the subshock compression, $\Rsub$, felt by protons with the
superthermal injection momentum $p_\mathrm{inj}$, to the overall compression, $\Rtot$, felt by
protons with $p_\mathrm{max}$. Intermediate values of compression, $r(p)$, felt by protons or
electrons with momentum $p$ between $p_\mathrm{inj}$ and $p_\mathrm{max}$, can be estimated by
interpolating $r(p)$ linearly in $\log{(pv)}$, i.e.,
\begin{equation} \label{eff_comp}
r(p) = \Rsub + G(p) \; (\Rtot - \Rsub) \ ,
\end{equation}
where $p v$ is proportional to the diffusion length and
\begin{equation}
G(p) = \frac{\log{(pv)} - \log{(pv)_\mathrm{inj}}}
{\log{(pv)_\mathrm{max}} - \log{(pv)_\mathrm{inj}}}
\ .
\end{equation}
Here $(pv)_\mathrm{max}= p_\mathrm{max} \, c$, $(pv)_\mathrm{inj}= p_\mathrm{inj} \, v_\mathrm{inj}$, and
$v_\mathrm{inj}$ is the particle speed corresponding to $p_\mathrm{inj}$. Note that since $p_\mathrm{inj}$ and
$\etainj$ combine to give a single free injection parameter, the specific value of
$p_\mathrm{inj}$ is unimportant for the results discussed here \citep[see][for recent work on
injection in a semi-analytic, nonlinear DSA model]{BGV2005}.
With equation~(\ref{eff_comp}), we estimate the flow speed felt by a particle with
momentum $p$ as
\begin{equation}
u(z) = V_{\mathrm{sk}} \frac{r(p)}{\Rtot}
\ ,
\end{equation}
and the magnetic field felt by this particle is either
\begin{equation} \label{noBcomp}
B(z) = B_0
\end{equation}
or
\begin{equation} \label{Bcomp}
B(z) = B_0 \sqrt{ \frac{1}{3} + \frac{2}{3} \left ( \frac{\Rtot}{r(p)} \right )^2}
\ ,
\end{equation}
depending on whether the magnetic field is compressed in the precursor (as in
Eq.~\ref{Bcomp}) or not. Here $V_{\mathrm{sk}}$ is the forward shock speed in the rest frame of
the SN.
Given $u(z)$ and $B(z)$, the diffusion length of an electron can be determined and,
in a fashion similar to \citet{Reynolds98}, we assume that
electrons of momentum $p$ are distributed upstream from the shock
such that
\begin{equation}
f_\mathrm{e}(p,z) = f_\mathrm{e}(p,0) \exp{ \{ -z[1/L_D(p) + 1/(f_\mathrm{sk} R_\mathrm{FS})] \} }\ ,
\label{fp_prec}
\end{equation}
where $f_\mathrm{e}(p,0)$ is the electron distribution in the outermost shell ($z=0$) at the end
of the simulation and $f_\mathrm{sk} R_\mathrm{FS}$ sets the distance ahead of the shock where
particles freely leave the system. The electron distribution, $f_\mathrm{e}(p,0)$, contains the
effects of synchrotron\ and inverse-Compton\ losses which occur during acceleration.
The above relations are approximations in that they ignore the precise form for the
smooth precursor flow speed. However, we have verified that the precursor emission is
relatively insensitive to this smoothing and that our approximations
adequately describe the spatial dependencies important for
predicting the synchrotron\ precursor. Typical results are shown in
Fig.~\ref{precursor} where the solid curves are for compressed $B$
and the dotted curves are for uncompressed $B$.
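The prescription above is compact enough to state algorithmically. The sketch below implements the $\log{(pv)}$ interpolation for $r(p)$ and the exponential upstream fall-off of equation~(\ref{fp_prec}); all numerical values are illustrative, not taken from the models in the text.

```python
import math

# Sketch of the precursor prescription: the momentum-dependent compression
# r(p) is interpolated linearly in log(p v) between the subshock and total
# compressions, and electrons fall off exponentially upstream over the
# diffusion length L_D = kappa / u, with free escape at f_sk * R_FS.
# All numbers are illustrative examples.

def r_of_p(pv, pv_inj, pv_max, r_sub, r_tot):
    g = (math.log(pv) - math.log(pv_inj)) / (math.log(pv_max) - math.log(pv_inj))
    return r_sub + g * (r_tot - r_sub)

def precursor_fraction(z, kappa, u, f_sk, r_fs):
    """f_e(p, z) / f_e(p, 0) for diffusion length L_D = kappa / u."""
    l_d = kappa / u
    return math.exp(-z * (1.0 / l_d + 1.0 / (f_sk * r_fs)))

r_sub, r_tot = 3.0, 8.0
print(r_of_p(1.0e-3, 1.0e-3, 1.0, r_sub, r_tot))  # R_sub at injection
print(r_of_p(1.0, 1.0e-3, 1.0, r_sub, r_tot))     # R_tot at p_max
```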
\section{RESULTS} \label{results}
\subsection{Radial emission}
Using the parameters for our type Ia prototype, we plot in Fig.~\ref{Ia_stack} the
synchrotron\ emission as a function of radius for one radio (1-1.4 GHz; solid curves) and
two X-ray bands (0.1-1 keV dashed curves; 1-10 keV dotted curves). We present four
models, two with $\etainj=10^{-3}$, which produces very efficient DSA with nearly
100\% of the energy flux crossing the shock going into relativistic\ particles, and two with
$\etainj=10^{-5}$, which yields essentially a test-particle\ result with less than 1\% of the
energy flux going into CRs and where the influence of shock accelerated protons on
the hydrodynamics is small. For each $\etainj$ we show a case with a compressed field
(labelled B comp.) and one with uncompressed field either in the shock or the
precursor (labelled $B_2=B_0$). In the compressed field case, we assume, as in
\citet{BKV2002}, that the magnetic field is fully turbulent upstream of the shock and
is compressed in the precursor as described by equation~(\ref{Bcomp}).
The curves are normalized to each energy band's flux at the forward
shock.\footnote{The results of the CR-hydro model, at early times, depend on the
initial conditions which, unavoidably, are somewhat arbitrary. The initial
conditions, in turn, influence the emission at the CD seen in Figs.~\ref{Ia_stack}
and \ref{II_stack_10yr}. For all of the results presented here, the simulation is
started at a time $t_0=10$\,yr with the initial ejecta speed varying linearly
with radius from zero to a maximum speed $V_{\mathrm{max}}^{\mathrm{ej}}=0.1c$. The initial maximum radius of
the ejecta is set by $V_{\mathrm{max}}^{\mathrm{ej}}$ and $t_0$ and the early stages of the simulation,
and therefore the synchrotron\ emission at the CD, depend on $V_{\mathrm{max}}^{\mathrm{ej}}$ and $t_0$. Of
course, the later evolution of the SNR is nearly independent of the starting
conditions, as long as the total kinetic energy and ejecta mass stay the same. Since
the X-ray emission is dominated by losses at the CD, it is only the radio emission at
the CD that depends strongly on $V_{\mathrm{max}}^{\mathrm{ej}}$ and $t_0$. For a full discussion of the
start up conditions for the CR-hydro model, see \citet{EDB2004}.}
\newlistroman
Fig.~\ref{II_stack_10yr} shows similar results for our type II prototype where, as in
Fig.~\ref{Ia_stack}, the emission is viewed at $\tSNR=400$ yr.
Comparing these figures, we note the following:
\listromanDE The two SN types have very similar profiles, at least for the
parameters used here.
One noticeable difference occurs for the $\etainj =10^{-5}$ cases where the
type II radio profiles are flatter than the type Ia profiles.
Later, in association with Fig.~\ref{Ia_II_inj}, we show in more detail
that changes in $\etainj$ and other parameters influence the
SN types rather differently and may offer help in distinguishing
the types.
\listromanDE In the interaction region between the contact discontinuity and the
forward shock, the X-ray synchrotron\ falls off more rapidly than the radio emission.
As mentioned in discussing Fig.~\ref{fp_No_loss}, the electrons producing the
radio emission suffer only adiabatic losses, while the higher energy electrons
producing the X-rays suffer adiabatic losses combined with synchrotron\ and inverse-Compton\ losses.
In Fig.~\ref{Xray_No_syn} we show profiles for the 1--10 keV band with no losses
(solid curve), with just adiabatic losses (dashed curve), and with adiabatic plus
radiative losses (dotted curve).
Since, for typical SNR parameters, the nonthermal X-ray emission comes from the
exponential part of the electron spectrum, the X-ray emission will be extremely
sensitive to changes in the spectrum coming from any type of loss mechanism.
\listromanDE The radio emission can have a secondary peak at the
CD, while the X-ray emission, with synchrotron\ losses, always drops
precipitously at the CD. As just mentioned, the radio emission at
the CD is sensitive to the starting conditions of the hydro model
but, in any case, the secondary peak is less noticeable
in projection as we show below.
\listromanDE With efficient DSA and a compressed magnetic field (top panels of
Figs.~\ref{Ia_stack} and \ref{II_stack_10yr}), the X-ray fall-off is extremely rapid
and the X-ray emission can appear as an extremely thin sheet at the FS.
\listromanDE The precursor emission outside the FS falls off slowly if the magnetic field is
not compressed at the shock, but drops sharply immediately upstream of the shock when
$B$ is compressed, with or without efficient DSA (top two panels in
Figs.~\ref{Ia_stack} and \ref{II_stack_10yr}). The sharp drop due to the compressed
field will make the X-ray precursor faint and difficult to detect compared to the
emission at the FS. Without compression, the precursor should be observable,
providing an important diagnostic for the magnetic
field structure. Note that the radio precursor has an extremely short upstream
diffusion length for all cases and will not be detectable if the diffusive length
scale is anywhere near as small as we predict.
\listromanDE Comparing the $\etainj=10^{-3}$ panels against the $\etainj=10^{-5}$
panels in Figs.~\ref{Ia_stack} and \ref{II_stack_10yr} shows that the distance
between the CD and the FS is nearly a factor of two greater in the test-particle case
than with efficient DSA. Since the outer edge of the shocked ejecta indicates the
position of the CD, $R_\mathrm{FS}/R_\mathrm{CD}$ is measurable in several young SNRs with {\it Chandra} and
{\it XMM-Newton}, making this morphological difference a powerful diagnostic for
efficient DSA.
In Fig.~\ref{B_Rtot} we show the magnetic field structure, at $\tSNR=400$\,yr, in the
transition region between the CD and FS for our two prototypes with compressed $B$
and $\etainj=10^{-3}$. The numbers at specific points on the curves indicate the
compression ratio, $\Rtot$, at the FS at the time that particular parcel of gas was
shocked. It is notable that $\Rtot \gg 4$ in all cases. The difference in $\Rtot$
between the two models comes about mainly from the lower magnetic field in the pre-SN
wind for the type II model which results in larger compression ratios.
The end of the curves, marked with an open circle, show the immediate upstream,
unshocked magnetic field, $B_0$, at $\tSNR$. For type Ia, $B_0=10$\,$\mu$G\ and is
independent of time, while for type II, $B_0(r)$ falls off with radius as in
equation~(\ref{Bwind}) and at $\tSNR=400$\,yr is $\simeq 1.5$\,$\mu$G. A thorough
discussion of the influence magnetic field strength has on $\Rtot$ is given in
\citet{EDB2005}.
\subsection{Line-of-sight projections}
In Fig.~\ref{Ia_LOS} we show line-of-sight projections for some of the results shown
in Fig.~\ref{Ia_stack}. Even in projection, it is clear that the radio emission
falls off less rapidly behind the FS than the X-ray emission. Projection has little
effect on the upstream precursor so the large differences seen in Fig.~\ref{Ia_stack}
with and without magnetic field compression are similar in projection.
The decrease in $R_\mathrm{FS}/R_\mathrm{CD}$ for efficient particle acceleration is less obvious in
projection but, since the CD generally shows up via thermal X-ray emission, $R_\mathrm{FS}/R_\mathrm{CD}$
remains an important diagnostic for the presence of efficient CR ion acceleration.
Line-of-sight projections of the results shown in Fig.~\ref{II_stack_10yr} are
similar.
An important feature that is in the line-of-sight projections and not in the radial
profiles is the offset of radio and X-ray peaks at the FS. In Fig.~\ref{Ia_LOS_FS},
the projections for the type Ia models of Fig.~\ref{Ia_stack} with compressed
magnetic fields are plotted as a fraction of the FS radius. With or without efficient
DSA, the radio peak (solid curve) occurs inside the X-ray peaks. Behavior such as
this is observed in several SNRs including G347 \citep[][]{Lazendic2004}, Kepler
\citep[][]{DeLaneyEtal2002}, Tycho \citep[][]{DecourchelleEtal2001}, and Cas A
\citep[][]{LongEtal2003}. We note, however, that there is another radio peak
coincident with the X-ray peak in Tycho \citep[e.g.,][]{DickelEtal91}.
For the efficient acceleration case (top panel), the two X-ray
peaks are also well separated.
Note also that because of projection effects, the maximum emission occurs inside of
the FS.
As emphasized by \citet{BKV2003}, care must be taken not to interpret the peak
emission as the position of the FS, as done by \citet{BambaEtal2003} for SN 1006.
The actual upstream precursor is indicated in Fig.~\ref{Ia_LOS_FS} with a ``P.''
In Fig.~\ref{Ia_II_inj} we compare the line-of-sight 1-10 keV X-ray projections for
both type Ia and type II prototypes calculated with different DSA injection
efficiencies.
While the absolute normalization is arbitrary, the curves show the correct relative
normalization between the various models and, as expected, the test-particle\ cases with
$\etainj=10^{-5}$ have lower absolute emissivities.
In both panels, the solid curves have $\etainj=10^{-3}$, the dashed curves have
$\etainj=10^{-4}$, the dotted curves have $\etainj=10^{-5}$, and all models have
magnetic field compression (note the different vertical scales in the two panels).
For both SN types, the ratio $R_\mathrm{FS}/R_\mathrm{CD}$ increases noticeably as the acceleration
becomes less efficient, but $R_\mathrm{FS}/R_\mathrm{CD}$ increases somewhat more rapidly for type II
SNRs.
Also, for both SN types, the morphology of the X-ray emission varies strongly with
$\etainj$: for efficient DSA, there is a pronounced peak at the rim, while the emission
is much broader for inefficient DSA. This difference offers another important
diagnostic for efficient DSA.
In Fig.~\ref{Mdot} we keep all parameters of our $\etainj=10^{-3}$ type II model
constant except the wind speed, $v_\mathrm{w}$, and the mass loss rate, $dM/dt$. In the
top panel, $v_\mathrm{w}=20$\,km s$^{-1}$\ and $dM/dt$ varies, as indicated,
and the light-weight dashed curve has $\etainj=10^{-4}$; all other curves in
Fig.~\ref{Mdot} have $\etainj=10^{-3}$.
As $dM/dt$ increases, there is an increase in $R_\mathrm{FS}/R_\mathrm{CD}$ indicating, among other
things, that self-similarity is no longer a good approximation at $\tSNR=400$\,yr.
In the bottom panel, $dM/dt = 2\xx{-5}\,\Msun$ yr$^{-1}$ and $v_\mathrm{w}$ is varied as
indicated. Now, the profiles are relatively insensitive to the changes in $v_\mathrm{w}$,
suggesting that self-similarity does apply.
In considering Figs.~\ref{Ia_II_inj} and \ref{Mdot}, it is important to note that while
$R_\mathrm{FS}/R_\mathrm{CD}$ is reduced substantially with efficient CR production in type Ia SNRs,
values of $R_\mathrm{FS}/R_\mathrm{CD} > 1.3$ can occur in type II SNRs with very efficient DSA.
The acceleration efficiency for the $\etainj=10^{-4}$ model in Fig.~\ref{Mdot}
(light-weight dashed curve) is
greater than 50\% over most of its 400\,yr lifetime. This may be relevant for
remnants like Cas A and 1E0102.2-7219 which show $R_\mathrm{FS}/R_\mathrm{CD} \sim 1.4$.
\subsection{Radio emission vs. ejecta profile and age}
It is well known that young SNRs with power-law ejecta and power-law ambient medium
density profiles have self-similar solutions if CR production is absent or
unimportant \citep[i.e.,][]{ch82,Chev82Let}. This will be true for the efficient
production of CRs as well if the CR production is time invariant \citep[][]{ch83}.
If nonlinear DSA occurs and the acceleration efficiency varies with time, the
self-similarity is broken \citep[see][]{EDB2004}, as is the case with an exponential
ejecta density distribution \citep[e.g.,][]{Dwarkadas2000}, or for a power-law ejecta
distribution once the reverse shock enters the plateau region of the ejecta.
In Fig.~\ref{radio_age} we show radio emission profiles at various $\tSNR$ for type Ia
models with $\etainj=10^{-3}$ having exponential (top panel) and power law (bottom
panel) ejecta density profiles. In self-similar\ evolution, the ratio $R_\mathrm{FS}/R_\mathrm{CD}$ remains
constant and this is approximately the case for a power-law ejecta density profile
for $\tSNR \lesssim 300$ yr. At later times, the self-similarity is broken, as is
the case at all times for exponential ejecta density profiles. The light-weight solid
curves are test-particle\ profiles at 150 yr for comparison.
Besides $R_\mathrm{FS}/R_\mathrm{CD}$, the structure of the radio emission in the interaction region
between the CD and the FS depends on the assumed ejecta distribution and on the age
of the SNR. At early times for the power-law case (solid curve, bottom panel of
Fig.~\ref{radio_age}), the radio emission peaks near the contact discontinuity. This result is
consistent with the self-similar model described in \citet{CassamEtal2005} but, as
discussed above, depends somewhat on the starting conditions of the CR-hydro model.
At later times the emission drops inside the FS and, as expected, the details of the
ejecta profile cease to be important.
The curves for 1-10 keV X-rays are not shown, but due to radiative losses and
contrary to the radio, they peak strongly just behind the FS for all $\tSNR$ as shown
in Fig.~\ref{Ia_LOS_FS}.
\subsection{Acceleration efficiency}
In Fig.~\ref{CR_eff} we show the acceleration efficiency, i.e., the fraction of
energy flux crossing the shock that goes into relativistic\ ions \citep[see eq.~13
of][]{EBB2000}, for various $\etainj$ (light-weight curves) and the fraction of total
SN explosion energy put into CRs, $E_\mathrm{CR}/E_\mathrm{SN}$, for $\etainj=10^{-4}$ (heavy-weight
dashed curves). These models use our type Ia and II prototype parameters.
For the, perhaps, extreme case of $\etainj=10^{-3}$, the fraction of bulk flow energy
flux (in the shock rest frame) that is placed in relativistic\ ions is $>80\%$ during the
1000 yr span shown for both SNR prototypes. Even for $\etainj=10^{-4}$, the
efficiency is $> 10\%$ most of the time and more than 10\% of the total SN explosion
energy can be put into CRs over the 1000 yr lifetime.
Of course the actual injection efficiency of SNR shocks is uncertain and, as noted by
\citet{VBK2003}, injection may vary over the surface of the SNR and be significantly
less where the magnetic field is highly oblique \citep[see][for a discussion of
parallel versus oblique shock geometry in SN
1006]{RothenflugEtal2004}. \citet{VBK2003} estimate that to supply the galactic CR
population the overall efficiency need only be $\sim 20$\% of the maximum values
obtained by DSA. \citet{Dorfi90} and \citet{BEK96} obtained similar values.
Nevertheless,
if the shocks in supernova remnants accelerate cosmic ray {\it ions} this efficiently via
diffusive shock acceleration, clear signatures of the
acceleration will be present in the radiation produced by {\it electrons}.
\section{DISCUSSION}
\subsection{Narrow interaction region}
Perhaps the most unambiguous indication of efficient CR production in SNRs is an
interaction region between the contact discontinuity\ and the forward shock which is considerably
narrower than predicted without efficient acceleration \citep[e.g.,][]{BE2001}. While
the ratio $R_\mathrm{FS}/R_\mathrm{CD}$ depends on various parameters, efficient DSA can easily result in
the FS being less than half the distance ahead of the CD predicted with test-particle\
acceleration (see Figs.~\ref{Ia_stack}, \ref{II_stack_10yr}, and \ref{Ia_II_inj}).
This may explain observations of $R_\mathrm{FS}/R_\mathrm{CD}$ which are considerably less than the
smallest value predicted by test-particle, self-similar models, as is the case for Tycho's
\citep[e.g.,][]{ReynosoEtal1997,DecourchelleEtal2001} and Kepler's
\citep[e.g.,][]{DeLaneyEtal2002,cad04a} SNRs.
Even in SNRs such as Cas A and 1E0102.2-7219 in the Small Magellanic Cloud
\citep[e.g.,][]{GotthelfEtal2001,GaetzEtal2000,HRD2000}, where the FS and CD are well
separated, DSA may be quite efficient. As shown in Figs.~\ref{Ia_II_inj} and
\ref{Mdot}, moderately efficient acceleration and/or the presence of a pre-SN wind
can result in $R_\mathrm{FS}/R_\mathrm{CD} \gtrsim 1.3$. Thus, while the observation of $R_\mathrm{FS}/R_\mathrm{CD} =
1.0-1.1$ can be explained naturally if CR ions are being produced efficiently in type
Ia SNe, larger values of $R_\mathrm{FS}/R_\mathrm{CD}$ do not necessarily exclude efficient acceleration
but may be representative of type II SNe\ with pre-SN winds.
\subsection{Precursor and small-scale structure}
In some SNRs extremely small spatial scales in X-ray emission are
observed at the FS. Using {\it Chandra} observations,
\citet{LongEtal2003} and \citet{BambaEtal2003} have independently
examined emission profiles in several thin filaments in projection
in the northeast shell of SN 1006 which show scale lengths as
short as $0.04$ pc (assuming a distance to the SNR of $\sim 2$\,kpc).
In Fig.~\ref{Bamba} we compare our type Ia prototype model with $\etainj = 10^{-3}$
to the SN 1006 observations.
We represent the observations with dashed lines which roughly span
the maximum and minimum scale heights determined by
\citet{BambaEtal2003} (see their Table~4).
Even though we have not
attempted a detailed fit to SN 1006, it is clear that our
compressed $B$ model (solid curve) matches the overall
observations quite well and that the shortest scale heights are
extremely well modeled.
As emphasized by \citet{BKV2003}, the shortest scale heights occur inside the forward
shock and are produced by projection effects when $B$ is compressed and there is a
sharp drop in emissivity behind the shock. The actual upstream precursor (indicated
with a ``P'' in Fig.~\ref{Bamba}) has a much longer scale height as expected from TeV
electrons but is not easily discernible with {\it Chandra} against background
emission.
While our efficient acceleration model with compressed $B$ fits quite well, our
uncompressed model (dotted curve) clearly does not fit, nor does a test-particle\ model (not
shown), as is clear from examining the bottom panel of Fig.~\ref{Ia_LOS}. As far as
we can tell, our results are in complete agreement with those of \citet{BKV2003}
\citep[see also][]{Ballet2005} and provide convincing evidence for highly compressed
magnetic fields and efficient DSA.
\subsection{Adiabatic and synchrotron\ losses and the offset of radio and X-ray peaks}
Nonthermal X-ray emission in a fixed energy band is very sensitive to both adiabatic
and radiative losses. For typical SNR parameters, synchrotron\ X-rays are produced in large
part by the exponential tail of the electron distribution. Therefore, any energy loss
results in a large drop in emissivity. This contrasts with the adiabatic losses of
the electrons producing radio emission.
Since radio is produced by lower energy electrons in the power law portion of the
distribution rather than the exponential part, emission in a fixed energy band is
less sensitive to adiabatic losses. If nonlinear effects from efficient DSA are
important, the fixed-band radio emission is even less affected by adiabatic losses since the
portion of the electron distribution producing radio is likely to be concave, i.e.,
flattening with increasing energy.
The synchrotron\ loss rate will be greater if the magnetic field is compressed at the shock
and, therefore, will depend on the acceleration efficiency. As we show in
\citet{CassamEtal2005} and in Fig.~\ref{Ia_II_inj} here, the morphology of
the X-ray emission near the FS varies noticeably with $\etainj$, peaking more
strongly as the acceleration efficiency increases since electrons lose energy before
convecting far downstream. This feature provides an important diagnostic for
acceleration efficiency.
A direct consequence of X-ray emitting electrons suffering more losses than radio
emitting ones, is an offset in the peak emission of the projected flux at the FS. As
shown in Fig.~\ref{Ia_LOS_FS}, the radio emission peaks well within the X-ray
emission.
The separation will depend on the acceleration efficiency since, for a given set of
supernova parameters, models with efficient DSA have larger compression ratios and
larger downstream magnetic fields. The larger the field, the sharper the drop in
X-ray emission behind the shock, and the closer to the FS position will be the peak
X-ray emission.
\section{CONCLUSIONS}
We have presented a detailed discussion of the influence of efficient diffusive shock
acceleration on the radial profiles of synchrotron\ emission in young SNRs.
The evidence that collisionless shocks, in general, can accelerate particles with
high efficiency is convincing. There are direct spacecraft observations confirming
it \citep[e.g.,][]{Eich81,BE87,EMP90,BOEF97,Terasawa99}, plasma simulations show
efficient acceleration consistent with spacecraft observations
\citep[e.g.,][]{STK92,EGBS93,GBSEB97}, Galactic cosmic-ray energetics and composition
suggest it \citep[e.g.,][]{AxfordICRC82,EDM97}, and theoretical models certainly
allow it \citep[e.g.,][]{ALS77,Drury83,EE84,JE91,BEK96,MD2001,KJG2002,Blasi2002}.
An unresolved question, of course, is whether or not shock acceleration is efficient
in SNRs.
If DSA is as efficient in accelerating {\it ions} as suggested, the acceleration
process will be nonlinear and will noticeably modify the SNR structure and
evolution. We have shown for typical type Ia and type II SN parameters that these
structural changes, most important of which is the increased shock compression,
produce clear signatures in the synchrotron\ radiation emitted by {\it electrons}.
We note, incidentally, that signatures in the thermal emission may also be present
since the energy which goes into relativistic\ ions comes out of the bulk thermal plasma and
produces a drastic reduction in the temperature of the shocked gas
\citep[e.g.,][]{DEB2000,HRD2000,EDB2004}.
Of course, our assertion that the nonlinear\ effects seen in the structure of SNRs are
evidence for the efficient acceleration of ions rather than electrons depends on how
the energy of shock accelerated particles is distributed between electrons and ions.
While no definitive theory exists describing this partition, the source of the energy
going into superthermal particles is the bulk kinetic energy of the converging
upstream and downstream plasmas.
Diffusive shock acceleration occurs, at its most basic level,
when particles diffuse across the shock and scatter nearly
elastically off the converging plasmas on either side of the shock. When particles
are accelerated from the thermal background, this process favors heavy
particles and it is generally assumed that shocks put far more energy into ions than
electrons.
There is direct evidence for this disparity in acceleration efficiency at the low
Mach number shocks which have been studied in the heliosphere
\citep[e.g.,][]{Feldman85,Terasawa99} \citep[see also][]{EllisonEtalWorkshop94}, but
there is no direct evidence, one way or the other, in the much stronger shocks which
exist outside of the heliosphere. Nevertheless, with some confidence, we believe the
structural changes we have discussed are produced by ion acceleration with the
radiating electrons being passive markers of the effect.\footnote{We note that
so-called shock surfing has been suggested by a number of workers as an effective way
of transferring shock energy into electrons \citep[see][for example, and references
therein]{HoshinoSurf2002}. A thorough discussion of this mechanism is beyond the
scope of this paper, but we note that while some descriptions of this effect show
large energy gains by electrons, nonlinear effects are almost certain to limit the
effectiveness of this process \citep[see][]{SSM2003}, particularly in the strong
shocks we envision for young SNRs.}
While direct evidence for the production of CR ions in SNRs would be the observation
of a pion-decay\ spectral feature in GeV--TeV $\gamma$-rays, such $\gamma$-rays\ are difficult to
detect with the significance necessary to distinguish a pion-decay\ feature from inverse-Compton\ or
bremsstrahlung\ radiation. Furthermore, in low density regions, inverse-Compton\ may outshine pion-decay\
emission, leaving the question of CR ion production for these SNRs open regardless of
the sensitivity of $\gamma$-ray\ telescopes.
The best chance of seeing a strong pion-decay\ signal is when a SNR interacts with a
dense medium, as may be the case for the synchrotron-dominated SNR RX J1713.7-3946 (G347.3-0.5)\ interacting with molecular clouds
\citep[see][and references therein]{cad04b}. HESS (High Energy Stereoscopic System)
has recently measured, with high significance, the 1--10 TeV energy spectrum in this
remnant \citep[][]{AharonianNature2004} and in SNR RX J0852.0-4622\
\citep[][]{AharonianVela2005}, and while pion-decay\ is certainly the most likely emission
mechanism, it is not possible, based on TeV emission alone, to reliably determine
the different $\gamma$-ray\ components in these spectra.
It should now be possible to test for pion-decay\ emission using the morphology, since HESS
has, for the first time, produced $\gamma$-ray\ images of these remnants, and the
morphology of inverse-Compton\ and pion-decay\ emission should be quite different.
Observations in the MeV range by GLAST should help significantly to distinguish
pion-decay\ from lepton emission and may provide incontrovertible evidence for or against
SNRs as the source of CR ions.
We have emphasized here that another
signature of efficient cosmic-ray ion production
is the large reduction in the ratio of the radius of the forward shock to the radius
of the contact discontinuity, $R_\mathrm{FS}/R_\mathrm{CD}$. If a large fraction of the shock energy goes into relativistic\
particles and high-energy particles that escape from the shock system, $\Rtot \gg 4$
and the interaction region between the CD and FS will be denser and $R_\mathrm{FS}/R_\mathrm{CD}$ will be
smaller than with inefficient acceleration (Figs.~\ref{Ia_stack},
\ref{II_stack_10yr}, and \ref{Ia_II_inj}). This effect may explain observations of
$R_\mathrm{FS}/R_\mathrm{CD} \sim 1$ in Tycho's and Kepler's SNRs. Type II SNe\ with pre-SN winds may
experience efficient DSA yet still show large $R_\mathrm{FS}/R_\mathrm{CD} \sim 1.3$--$1.4$, consistent
with observations of Cas A and 1E0102.2-7219 (Figs.~\ref{Ia_II_inj} and \ref{Mdot}).
While complicating factors such as an irregular ambient medium, dense knots, thin
sheets of emission, etc., exist in all SNRs, efficient DSA offers a natural
explanation for this important aspect of SNR morphology. Just as important, a
large value of $R_\mathrm{FS}/R_\mathrm{CD}$ observed in a type Ia SNR is strong evidence against
efficient DSA.
Yet another
sign of efficient DSA is the presence of short scale
heights seen in nonthermal X-ray emission. Short scale heights are predicted with
efficient DSA because the shock will strongly compress the downstream magnetic field
and synchrotron\ losses will lower the emissivity immediately behind the FS. This results in
several related morphological effects.
First, thin sheets of X-ray emission (e.g., Fig.~\ref{Ia_II_inj}) should be common at
the FS, as is consistent with observations.
Second, projection effects should result in the distinct separation of the radio and
X-ray peaks (e.g., Fig.~\ref{Ia_LOS_FS}), also commonly observed.
Finally, as we show in
Fig.~\ref{Bamba}, the short scale heights seen in SN 1006
\citep[e.g.,][]{BambaEtal2003}, are most naturally explained as sharply peaked
emission behind the FS seen in projection \citep[][have already concluded this for SN
1006]{BKV2003}.
The actual upstream precursor has a long scale length, as expected for TeV electrons,
but is weak enough to avoid detection.
Supernova remnant SN 1006 seems a clear case where the efficient production of CR
ions is taking place, but remnants such as Tycho's and Kepler's, with $R_\mathrm{FS}/R_\mathrm{CD} \sim
1$, are also likely candidates.
The presence of a significant population of CR ions in young SNRs produces effects
that are readily observable in radiation produced by electrons, and we have made
predictions, testable with {\it Chandra} and {\it XMM-Newton}, to verify this
assertion.
\acknowledgments
The authors are grateful to A.~\textsc{Decourchelle} and J.~\textsc{Ballet} for the
discussions preceding this paper. D.C.E. wishes to acknowledge the International
Space Science Institute (ISSI) in Bern, Switzerland for hosting a series of workshops
where some of the work presented here was done, as well as support from a NSF grant
(INT-0128883) and a NASA grant (ATP02-0042-0006).
\bibliographystyle{aa}
Q: Type trait for trivial types

I would like to have a type trait that returns true for any type which does not need memory to be initialised before being used and whose copy constructor can be implemented as a memcpy.
I want it to return true for

*integer types (char, short int, int, long int, etc.)
*floating point number types (float, double)
*il::array (il::array is my own implementation of std::array) for T being one of int, double, il::array, etc.

and false for things such as std::vector, and any object that needs something at construction (most objects).
std::is_pod seems to be quite close to what I want as it also returns true for std::array, but unfortunately it does not return true for my own il::array. Is there any way to "teach" is_pod that my il::array behaves as plain old data, or an easy way to roll my own type trait?
For information, here is my implementation of il::array:
#include <initializer_list>

namespace il {
template <typename T, int n>
class array {
 private:
  T data_[n > 0 ? n : 1];

 public:
  array() {
    IL_ASSERT(n >= 0);
  }
  array(const T& value)
      : array() {
    for (int k = 0; k < n; ++k) {
      data_[k] = value;
    }
  }
  array(std::initializer_list<T> list) {
    IL_ASSERT(n == static_cast<int>(list.size()));
    for (int k = 0; k < n; ++k) {
      data_[k] = *(list.begin() + k);
    }
  }
  const T& operator[](int k) const {
    IL_ASSERT(static_cast<unsigned int>(k) < static_cast<unsigned int>(n));
    return data_[k];
  }
  T& operator[](int k) {
    IL_ASSERT(static_cast<unsigned int>(k) < static_cast<unsigned int>(n));
    return data_[k];
  }
  T* data() {
    return data_;
  }
  int size() const {
    return n;
  }
};
}
A:
I would like to have a type trait that returns true for any type which does not need memory to be initialised before being used and whose copy constructor can be implemented as a memcpy.
You're describing a trivial type. You can check for that with std::is_trivial.
std::is_pod seems to be quite close to what I want
That also requires the type to have standard layout, which places restrictions on how and where its data members are declared.
unfortunately it does not return true for my own il::array
Perhaps that's not standard layout, in which case is_trivial should work for you. Or perhaps it's not actually trivial in any case; in which case, you might want to fix it so that it is.
UPDATE: It has a user-declared default constructor, which makes it non-trivial. Since it does nothing but check the value of a compile-time constant, you could replace it with a static_assert; or change n to a more sensible unsigned type like std::size_t to remove the need for the sanity check.
But you'll still need to declare it as defaulted
array() = default;
otherwise the presence of the other constructors will delete it.
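Putting the UPDATE together, a minimal sketch of the repaired class might look like this (the static_assert message is mine, and the bounds-checking asserts plus the data()/size() members are omitted for brevity):

```cpp
#include <initializer_list>
#include <type_traits>

namespace il {
template <typename T, int n>
class array {
 private:
  T data_[n > 0 ? n : 1];

 public:
  // The compile-time sanity check from the old default constructor
  // becomes a static_assert...
  static_assert(n >= 0, "array size must be non-negative");

  // ...so the default constructor can be explicitly defaulted,
  // which keeps the class trivial.
  array() = default;

  array(const T& value) {
    for (int k = 0; k < n; ++k) data_[k] = value;
  }
  array(std::initializer_list<T> list) {
    for (int k = 0; k < n; ++k) data_[k] = *(list.begin() + k);
  }
  const T& operator[](int k) const { return data_[k]; }
  T& operator[](int k) { return data_[k]; }
};
}  // namespace il

// The standard traits now report what the questioner wants:
static_assert(std::is_trivial<il::array<int, 4>>::value,
              "default construction needs no work");
static_assert(std::is_trivially_copyable<il::array<double, 3>>::value,
              "copying can be done with memcpy");
```

The extra converting constructors do no harm here: std::is_trivial only requires a trivial (here, defaulted) default constructor plus trivial copy/move operations and destructor.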
Is there any way to "teach" is_pod that my il::array behaves as plain old data?
You could write your own trait, with a specialisation for your type. But that would be weird; if your type is supposed to be trivial or POD, then make it so.
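For completeness, such a hand-rolled trait (the name is_memcpy_safe and the minimal stand-in class are mine) would default to std::is_trivial and add an opt-in partial specialisation for il::array:

```cpp
#include <type_traits>
#include <vector>

namespace il {
// Minimal stand-in for the class in the question.
template <typename T, int n>
struct array { T data_[n > 0 ? n : 1]; };
}  // namespace il

// By default, defer to the compiler's notion of triviality.
template <typename T>
struct is_memcpy_safe : std::is_trivial<T> {};

// Opt-in partial specialisation: il::array<T, n> is declared
// memcpy-safe whenever its element type is, recursing through
// nested arrays such as il::array<il::array<int, 2>, 4>.
template <typename T, int n>
struct is_memcpy_safe<il::array<T, n>> : is_memcpy_safe<T> {};

static_assert(is_memcpy_safe<int>::value, "builtin");
static_assert(is_memcpy_safe<il::array<double, 3>>::value, "array of builtins");
static_assert(is_memcpy_safe<il::array<il::array<int, 2>, 4>>::value, "nested");
static_assert(!is_memcpy_safe<std::vector<int>>::value, "needs real construction");
```

But again: if the type is supposed to be trivial, it is cleaner to make it actually trivial than to assert it by hand.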
\section{Introduction}
Understanding the original formation and the subsequent evolution of the galaxies we observe today remains one of the major open questions in modern astrophysics. Within the past decade, there has been incredible progress in the realization of high-resolution numerical simulations that are now starting to reproduce in detail the physical properties of the galaxies observed at redshift $z = 0$ (e.g., \citealt{mez03}; \citealt{naa07}; \citealt{tis10}). $N$-body cosmological simulations have predicted that in an expanding universe cold dark-matter particles collapse into gravitationally bound, self-similar halos with a diverging inner density profile (e.g., \citealt{nav96}; \citealt{moo99}). The values of the three-dimensional logarithmic slope $\gamma = \mathrm{d} \ln \rho / \mathrm{d} \ln r$ of the collapsed dark-matter halos have been found to be approximately equal to $-1$ and $-3$, respectively, in the innermost and outermost regions. It is in these halos that the stars of the observed galaxies were assembled. Recent hydrodynamical simulations have shown that several mechanisms associated with baryonic physics affect the stellar mass assembly of a galaxy (e.g., dissipationless accretion of stars originally formed far from a galaxy center and dissipational gas flowing towards the inner regions of a galaxy, later transformed into stars). The complex interplay between the luminous and dark components can significantly alter the dark-matter distribution in the center of a halo, making it steeper or shallower, depending on the role played by the different physical processes (for more details, see e.g. \citealt{lac10}).
In the last two decades strong gravitational lensing combined with stellar dynamics and/or stellar population synthesis models has been extremely successful in measuring the amount and distribution of dark matter (\citealt{gri08c,gri10a,gri11}; \citealt{bar07,bar09}; \citealt{aug09,aug10a}; \citealt{fad10}), the presence of dark-matter substructure (e.g., \citealt{veg09,veg10}), and the sizes of dark-matter halos (e.g., \citealt{suy10a}; \citealt{ric10}; \citealt{don11}) in early-type galaxies beyond the local Universe. The combination of these mass diagnostics has also provided alternative ways to address some interesting astrophysical and cosmological topics, such as the determination of the stellar initial mass function (IMF) (e.g., \citealt{gri08a,gri09,gri10b}; \citealt{tre10}; \citealt{aug10b}; \citealt{spi11}; \citealt{son11}) and of the values of the cosmological parameters (e.g., \citealt{gri08b}; \citealt{par09}; \citealt{sch10}; \citealt{suy10b}).
The Sloan Lens ACS (SLACS) survey has been crucial for the identification of a statistically significant sample of strong gravitational lensing systems. Disparate studies (e.g., \citealt{tre06}; \citealt{bol06}; \citealt{gri09}; \citealt{aug09}) have shown that the SLACS lens galaxies are a representative sample of the parent sample of massive early-type galaxies observed in the Sloan Digital Sky Survey (SDSS). By modeling the strong gravitational features detected in these lensing systems, it has been possible to obtain accurate and precise total mass estimates projected within the corresponding Einstein radii (\citealt{tre06}; \citealt{koo06}; \citealt{bol08a}; \citealt{aug09}). Here, we exploit the fact that the Einstein radius of a lensing system is not a length scale intrinsic to the lens (since it depends also on the redshift of the source) to study the average inner dark-matter density distribution of a specific lens sample. We do this by combining the lens aperture total and luminous mass measurements. The pioneering work of \citet{rus03} prefigures to some extent the general method and results presented here. In this previous analysis a self-similar mass model for early-type galaxies was constrained by using aperture mass-radius relations from 22 gravitational lenses. The total mass distribution of the lens galaxies was described in terms of a two-component (luminous and dark matter) model parametrized by (1) a present-day normalization value of the $B$-band stellar mass-to-light ratio, (2) the dependence of a galaxy $B$-band stellar mass-to-light ratio on its luminosity, (3) the projected dark over total mass fraction within two effective radii and (4) the three-dimensional logarithmic density slope of the dark-matter profile.
This Letter is organized as follows. In Sect. 2, we introduce the sample of massive early-type lens galaxies. In Sect. 3, we describe the method and hypotheses used to determine the inner slope of the average galaxy dark-matter density profile. In Sect. 4, we illustrate the main results of this analysis. In Sect. 5, we compare our results with those of previous studies and anticipate future prospects. In Sect. 6, we draw conclusions. In the following, we assume $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}=0.3$, and $\Omega_{\Lambda}=0.7$.
\section{The sample}
\begin{table*}
\centering
\caption{Physical properties of the early-type lens galaxies of the sample.}
\begin{tabular}{cccccc}
\hline\hline \noalign{\smallskip}
$z_{\mathrm{sp}}$ & $R_{e}$ & $R_{\mathrm{Ein}}$ & $\sigma_{0}$ & $M_{L}$ & $M_{T}(<R_{\mathrm{Ein}})$ \\
& (kpc) & (kpc) & (km s$^{-1}$) & ($10^{10}$ $M_{\odot}$) & ($10^{10}$ $M_{\odot})$ \\
\noalign{\smallskip} \hline
0.06-0.32 & 3.2-16 & 1.3-7.0 & 200-320 & 7.4-56 & 3.9-47 \\
\noalign{\smallskip} \hline
\end{tabular}
\begin{list}{}{}
\item[Notes --]Ranges of values of the spectroscopic redshift $z_{\mathrm{sp}}$, effective radius $R_{e}$, Einstein radius $R_{\mathrm{Ein}}$, central stellar velocity dispersion $\sigma_{0}$, luminous mass $M_{L}$ (assuming a constant Chabrier stellar IMF), and total mass projected within the Einstein radius $M_{T}(<R_{\mathrm{Ein}})$.
\item[References --]SDSS and MPA/JHU public catalogs; \citet{aug09}; \citet{gri10c}.
\end{list}
\label{ta02}
\end{table*}
In this work, we concentrate on 39 massive early-type lens galaxies discovered in the SLACS survey and studied in several papers (e.g., \citealt{bol08a}; \citealt{gri09}; \citealt{aug10a}). In detail, we conservatively consider only those galaxies that satisfy the photometric and spectroscopic selection criteria of the sample analyzed in \citet{gri10c} (i.e., values of the SDSS \textsf{fracDeV} morphological index larger than 0.95 in the $r$, $i$, and $z$ bands; SDSS spectroscopic redshifts $z_{\mathrm{sp}}$ between 0.05 and 0.33; SDSS aperture stellar velocity dispersions between 150 and 400 km s$^{-1}$; total luminous masses between $10^{10.5}$ and $10^{12}$ $M_{\odot}$). These galaxies have both accurate total $M_{T}$ and luminous $M_{L}$ mass estimates obtained from, respectively, strong lensing (\citealt{aug09}) and spectral energy distribution (SED) fitting models (from the public galaxy catalogs provided by the MPA/JHU collaboration\footnote{http://www.mpa-garching.mpg.de/SDSS/}). The physical properties of the galaxies in the sample are summarized in Table \ref{ta02}. This is a specific sample of early-type galaxies with large values of central stellar velocity dispersion $\sigma_{0}$ (more details on the measurements of the physical quantities can be found in \citealt{gri10c}). Therefore, the results of the analysis performed in this letter should not be simplistically generalized to early-type galaxies with different physical properties until verified by larger samples.
\section{The method}
For the lenses in the sample, we measure here the values of the dark-matter mass density projected within the Einstein radius and study, in a statistical way, their dependence on the projected distance from the lens centers.
In practice, we proceed as follows. For each lens galaxy, we define an adimensional radius $\Lambda$ as the ratio between the Einstein radius $R_{\mathrm{Ein}}$ and the effective radius $R_{e}$:
\begin{equation}
\Lambda := \frac{R_{\mathrm{Ein}}}{R_{e}} \, .
\label{eq:01}
\end{equation}
The Einstein radius of a lens galaxy depends on its total mass distribution, but also on the redshift of the lensed source. Thus, $R_{\mathrm{Ein}}$ is not a fundamental property of a galaxy and we use $\Lambda$ instead to quantify the distance from the center of a lens. The latter is a scale-free distance that is obtained by normalizing the value of the Einstein radius to the typical scale of a galaxy luminous mass distribution (i.e., $R_{e}$).
Similarly, we estimate the value of the dark-matter mass projected inside the cylinder with radius equal to the Einstein radius as the difference between the values of the total $M_T(<R_{\mathrm{Ein}})$ and luminous $M_L(<R_{\mathrm{Ein}})$ masses and rescale the result to the total amount of luminous mass $M_L$ of each galaxy. We notice that the measurements of the projected masses within $R_{\mathrm{Ein}}$ are robust: the total masses are almost model-independent, and the luminous ones are simply rescaled according to the fraction of the total light of a de Vaucouleurs profile enclosed within $R_{\mathrm{Ein}}$ (see \citealt{gri09,gri10c}). Then, we define an adimensional dark-matter projected mass density $\Psi$ as the ratio between the adimensional value of the dark-matter projected mass and the area of the disk with radius equal to the value of $\Lambda$:
\begin{equation}
\Psi := \frac{M_T(<R_{\mathrm{Ein}})-M_L(<R_{\mathrm{Ein}})}{M_L} \frac{1}{\pi \Lambda^{2}} \, .
\label{eq:02}
\end{equation}
In this way, both $\Lambda$ and $\Psi$ are normalized to the luminous properties of the galaxies in the sample and can thus be properly compared.\footnote{In passing, we notice that differently from \citet{rus03} the luminous mass values of the lens galaxies are measured here from the multi-band photometric and spectroscopic observables and are not scaled according to the galaxy $B$-band luminosity values.}
Next, we measure the value of the Kendall rank correlation coefficient $\varrho$ (for its definition, see \citealt{sal06}) and check if the values of $\Lambda$ and $\Psi$ are correlated at a statistically significant level. In the case of a significant correlation, we perform a Markov chain Monte Carlo study on the galaxy sample to characterize the joint probability distribution function of the values of the two coefficients $\alpha$ and $\beta$ that are used to fit a power-law relation to our set of data:
\begin{equation}
\Psi = \alpha \times (\Lambda)^{\beta} \, .
\label{eq:03}
\end{equation}
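As a reference for interpreting $\beta$, note that a dark-matter component whose three-dimensional density decreases as $1/r^{2}$ (an isothermal profile) has a projected mass that grows linearly with the aperture radius, $M_{\mathrm{DM}}(<R_{\mathrm{Ein}}) \propto R_{\mathrm{Ein}}$, so that
\[
\Psi \propto \frac{\Lambda}{\pi \Lambda^{2}} \propto \Lambda^{-1} \, ,
\]
i.e., $\beta = -1$; values of $\beta$ larger (smaller) than $-1$ thus correspond to inner dark-matter profiles that are shallower (steeper) than isothermal.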
We apply this method to the sample described in Sect. 2, starting from different hypotheses. First, we assume a constant \citet{cha03} (labeled as Ch) stellar IMF to estimate the luminous mass values of all the galaxies in the sample. Then, we rescale the galaxy luminous mass values to a constant heavier, \citet{sal55}-like (labeled as Sa) stellar IMF by simply multiplying the Chabrier luminous mass values by a constant factor equal to 1.7. Next, we consider the case of a non-universal stellar IMF and mimic a variation, moving from a lighter to a heavier IMF (labeled as Ch $\rightarrow$ Sa), depending on the values of the galaxy central stellar velocity dispersion. This is motivated by the facts that stellar velocity dispersion is currently considered the most significant parameter related to the stellar population properties of a galaxy (e.g., \citealt{gra09}) and that a stellar IMF variation with stellar velocity dispersion has been tentatively detected by \citet{tre10}. In detail, following the previous indications, we use a toy model in which we multiply the Chabrier luminous mass values with a factor that increases linearly from 1.0 to 1.5 as the value of $\sigma_{0}$ changes from 200 to 320 km s$^{-1}$.
We conclude by remarking that the correlation of the errors on $\Lambda$ and $\Psi$ is not significant and therefore will not affect our results on the steepness of the average dark-matter density profile. Although obtained from the same sets of observational quantities, the uncertainties on $\Lambda$ are very small (the median relative error is smaller than 4\%) and mainly related to the quality of the photometric measurements, while the uncertainties on $\Psi$ are considerably larger (the median relative error is approximately 40\%) and primarily driven by the degeneracies that are inherent in the population synthesis modeling.
\section{Results}
\begin{table*}
\centering
\caption{Correlations and power-law fits of $\Lambda$ and $\Psi$.}
\begin{tabular}{cccc}
\hline\hline \noalign{\smallskip}
& $\varrho(\Lambda,\Psi)$ & $\beta_{\mathrm{best}}$ & $\beta_{68\%}$$\,$$_{\mathrm{CL}}$ \\
\noalign{\smallskip} \hline
Ch & $-0.57$ $(<0.01)$ & $-$1.04 & $[-1.26,-0.78]$ \\
Sa & $-0.24$ $(<0.03)$ & $-$0.77 & $[-1.14,-0.15]$ \\
Ch $\rightarrow$ Sa & $-0.52$ $(<0.01)$ & $-$1.28 & $[-1.54,-0.93]$ \\
\noalign{\smallskip} \hline
\end{tabular}
\begin{list}{}{}
\item[Notes --]Values of the Kendall rank correlation coefficient $\varrho$ between $\Lambda$ and $\Psi$ (in parentheses, the probability that an equal number of measurements of two uncorrelated variables would give values of the coefficient higher than the measured ones), and of the best-fitting $\beta_{\mathrm{best}}$ and 68\% CL interval $\beta_{68\%}$$\,$$_{\mathrm{CL}}$ of the inner slope of the average dark-matter projected mass density.
\end{list}
\label{ta01}
\end{table*}
We summarize in Table \ref{ta01} the values of the Kendall rank correlation coefficient $\varrho$ between $\Lambda$ and $\Psi$ and remark that using the three different hypotheses mentioned above about the stellar IMF of the sample galaxies always results in an anti-correlation of the values of $\Lambda$ and $\Psi$ at a statistical significance level higher than 97\%. In the same table, we also show the best-fitting (minimum chi-square) $\beta_{\mathrm{best}}$ and the 68\% CL interval $\beta_{68\%}$$\,$$_{\mathrm{CL}}$ values of the average inner slope of the dark-matter projected mass density. These numbers are obtained from Monte Carlo chains with $5 \times 10^{5}$ points for each of the three cases. The data set and the best-fitting power-law for the case of a constant Chabrier stellar IMF are illustrated in Fig. \ref{fi01} and the marginalized probability distribution functions of $\beta$ for the three cases are plotted in Fig. \ref{fi02}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{fig2.ps}
\caption{The adimensional values of the dark-matter projected mass density within the Einstein radius, $\Psi$, and Einstein radius, $\Lambda$. The points, with their 1 $\sigma$ error bars, are obtained by using the values of the total luminous mass and effective radius of the galaxies as dimensional scales and assuming a constant Chabrier stellar IMF. The best-fitting power-law is shown in gray.}
\label{fi01}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{fig1a.ps}
\caption{Probability distribution functions of the average logarithmic inner slope, $\beta$, of the dark-matter projected mass density. The different histograms refer to the different assumptions on the stellar IMF discussed in the text. As a reference, the arrow close to the $x$-axis shows the result that a three-dimensional spherical density distribution decreasing as $1/r^{2}$ (i.e., an isothermal profile) would give. Larger and smaller values of $\beta$ correspond, respectively, to shallower and steeper profiles with respect to an isothermal one.}
\label{fi02}
\end{figure}
From Table \ref{ta01} and Figs. \ref{fi01} and \ref{fi02}, we notice that assuming a constant Chabrier stellar IMF leads to an average dark-matter density profile that, considered in terms of a three-dimensional spherical profile, decreases in the inner regions approximately as $1/r^{2}$, i.e. like an isothermal profile (as usually referred to in lensing studies). A constant heavier stellar IMF results in a broader probability distribution function for $\beta$, centered on a slightly larger value. This result can be qualitatively explained in the following way. If we keep the values of the total mass fixed and increase those of the luminous mass, we obtain values of $\Psi$ that are on average smaller and decrease less steeply with increasing values of $\Lambda$ than in the previous case (see Fig. \ref{fi01}). This translates into a dark-matter density profile that is shallower than an isothermal one in the center. On the contrary, the proposed variation in the stellar IMF provides a steeper profile of the dark-matter component in the inner regions. This result can also be understood by looking at Fig. \ref{fi01}. As expected, the values of $\sigma_{0}$ are positively correlated with those of $\Lambda$. This follows from the fact that more massive galaxies yield, on average, larger Einstein radii. Therefore, varying the stellar IMF from a Chabrier to a Salpeter-like one, the points in Fig. \ref{fi01} with small values of $\Lambda$ have approximately the same values of $\Psi$ (because of the unchanged Chabrier stellar IMF), while those with large values of $\Lambda$ now have larger luminous mass values (because of the changed, heavier stellar IMF) and, hence, in general, smaller values of $\Psi$. The net effect is an increase in the value of the slope $\beta$.
We notice that the dark-matter universal profile obtained from dark matter-only cosmological simulations (\citealt{nav96}) is characterized by values of $\beta$ of approximately $-0.1$ and $-0.2$ within 0.1\% and 1\% of the value of the typical dark-matter length scale ($r_{s}$), respectively.
\section{Discussion}
We compare here our results with those of several other studies on early-type galaxies and indicate a possible way to extend this work.
Based on a sample of 16 massive Coma galaxies, with physical properties very similar to those of the galaxies in our sample, \citet{tho11} find that if the stellar IMF is universal and \citet{kro01}-like, i.e. very similar to a Chabrier IMF, then the galaxy dark-matter density profiles are smooth and on average close to isothermal out to several tens of kiloparsecs (see Fig. 6 in the cited paper). This conclusion follows from joint dynamical (Schwarzschild's orbit superposition) and stellar population models that exploit accurate photometric and spectroscopic data. Recalling that the luminous mass estimates obtained by assuming a Chabrier or a Kroupa stellar IMF are only slightly different, the findings of our study on the value of $\beta$ in the 'Ch' case are consistent with those of the analysis performed in the Coma cluster.
\citet{nap10} consider a sample of 335 local early-type galaxies and estimate their luminous and total masses from, respectively, photometric SED fitting and dynamical Jeans modeling. They also adopt a Kroupa stellar IMF and conclude that the average three-dimensional logarithmic slope $\gamma$ of the dark-matter density profile at $R_{e}$ ranges between $-2.1$ and $-1.7$. In the simplified case of a spherical power-law density profile, the values of $\beta$ and $\gamma$ are related in projection in the following way: $\beta = \gamma + 1$ (if $\gamma$ is different from $-1$). Our estimates of $\beta$ in the 'Ch' case are therefore consistent with the results of \citet{nap10}.
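As an illustrative check (not part of the cited analyses), the projection relation $\beta=\gamma+1$ follows directly from integrating a spherical power law $\rho(r)\propto r^{\gamma}$ along the line of sight:

```latex
% Projected surface density of a spherical power-law profile
% (substitution u = z/R; the last integral converges for gamma < -1):
\begin{align}
\Sigma(R) &\propto \int_{-\infty}^{+\infty}\left(R^{2}+z^{2}\right)^{\gamma/2}\,dz \nonumber\\
          &= R^{\gamma+1}\int_{-\infty}^{+\infty}\left(1+u^{2}\right)^{\gamma/2}\,du
          \;\propto\; R^{\gamma+1},
\end{align}
```

so a three-dimensional logarithmic slope $\gamma$ projects onto a two-dimensional slope $\beta=\gamma+1$.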
In the gravitational lensing study by \citet{rus03}, detailed in Sect. 1, the authors come to the conclusion that it is not possible to measure precisely the slope of the dark-matter component because of the significant degeneracies between the parameters. Despite that, models with a dark-matter density profile that is approximately isothermal are generally preferred to models with shallower dark-matter density distributions (e.g., with $\gamma = -1$). Our results confirm these last findings.
In the last few years, some new observational constraints have been obtained on the stellar IMF of massive early-type galaxies. For this specific class of galaxies, if the IMF is constant, a Salpeter-like IMF is favored by the data (e.g., \citealt{gri08a,gri09,gri10b}; \citealt{tre10}; \citealt{aug10b}; \citealt{spi11}). In the case of a constant Salpeter IMF, our results indicate a dark-matter density profile that is shallower than an isothermal one in the central regions. Interestingly, the study of \citet{tre04} on the average inner power-law slope of the dark-matter halos of 5 early-type lens galaxies at $z_{\mathrm{sp}} \approx 0.5-1.0$ also provides indications of profiles shallower than isothermal ones. The slope values are robustly determined by combining gravitational lensing and stellar dynamics, with and without priors on the lens stellar mass-to-light ratios from the Fundamental Plane. More recently, \citet{son11} have also performed a two-component lensing and dynamics analysis to decompose the total mass distribution of the double Einstein ring gravitational lens into a bulge of stars and a dark-matter halo. They find that a Salpeter IMF is preferred to a Chabrier IMF for the stellar component and that the value of the three-dimensional logarithmic inner slope $\gamma$ of the dark-matter halo is $-1.7 \pm 0.2$. Therefore, our findings in the 'Sa' case are in general good agreement with these lensing and dynamics analyses.
The results of the two combined strong lensing, stellar dynamics, and stellar population studies by \citet{tre10} and \citet{car11} on samples of more than 50 SLACS lens galaxies agree on finding that a constant heavy (Salpeter-like) stellar IMF requires a shallower dark-matter density profile than a constant light (Chabrier-like) stellar IMF. Furthermore, based on different samples of SLACS lenses and mass diagnostics, \citet{jia07} and \citet{aug10b} conclude that adiabatically compressed models of the galaxy dark-matter halos are favored. These findings are also in qualitative agreement with our results.
The next natural step towards a clearer picture of the internal structure of massive early-type galaxies will be the extension of the SLACS sample to the lens galaxies selected from the BOSS (Baryon Oscillation Spectroscopic Survey; \citealt{eis11}) Emission-Line Lens Survey (BELLS; \citealt{bro12}). The lens galaxies for which strong lensing and stellar population models will be available at the end of this new survey will make it possible to enlarge the lens sample significantly (in nearly the same luminous mass range) and to explore the average density profile of the galaxy dark-matter halos over a radial range (i.e., in $\Lambda$) approximately twice as large as the one probed here.
\section{Conclusions}
We have combined strong gravitational lensing and stellar population synthesis models for a homogeneous sample of massive early-type galaxies to measure the logarithmic inner slope of the average dark-matter density profile. We have obtained a clear indication of the contraction of the halos when compared to the results of dark matter-only cosmological simulations. This is in line with the recent findings of high-resolution hydrodynamical simulations which include radiative cooling and feedback processes (e.g., \citealt{aba10}; \citealt{tis10}; \citealt{duf10}). These studies show that the contraction of a halo depends not only on the amount and distribution of the baryonic mass condensed at the halo centre, but also on the details of the halo assembly history. Future theoretical and observational efforts towards a better understanding of the inner dark-matter structure and the stellar initial mass function of galaxies will therefore be crucial to explore different cosmological models and to investigate the nature of dark matter and its interaction with baryons.
\acknowledgments
C. G. is grateful to Marco Lombardi, Giuseppe Bertin, Matteo Barnab\`e, and Simona Vegetti for interesting discussions. This research was supported by the DFG cluster of excellence ``Origin and Structure of the Universe''.
# python – What is the difference between re.search and re.match?

(source: https://includestdio.com/7048.html)

## The Question

What is the difference between the search() and match() functions in the Python re module?

I've read the documentation, but I never seem to remember it. I keep having to look it up and re-learn it. I'm hoping that someone will answer it clearly with examples so that (perhaps) it will stick in my head.

## The Answer 1

re.match is anchored at the beginning of the string. That has nothing to do with newlines, so it is not the same as using ^ in the pattern.

As the re.match documentation says:

> If zero or more characters at the beginning of string match the regular expression pattern, return a corresponding MatchObject instance. Return None if the string does not match the pattern; note that this is different from a zero-length match. Note: If you want to locate a match anywhere in string, use search() instead.

re.search searches the entire string, as the documentation says:

> Scan through string looking for a location where the regular expression pattern produces a match, and return a corresponding MatchObject instance. Return None if no position in the string matches the pattern; note that this is different from finding a zero-length match at some point in the string.

So if you need to match at the beginning of the string, or to match the entire string, use match. It is faster. Otherwise use search.

The documentation has a specific section for match vs. search that also covers multiline strings:

> Python offers two different primitive operations based on regular expressions: match checks for a match only at the beginning of the string, while search checks for a match anywhere in the string (this is what Perl does by default).
>
> Note that match may differ from search even when using a regular expression beginning with '^': '^' matches only at the start of the string, or in MULTILINE mode also immediately following a newline. The "match" operation succeeds only if the pattern matches at the start of the string regardless of mode, or at the starting position given by the optional pos argument regardless of whether a newline precedes it.

Now, enough talk. Time to see some example code:

```python
# example code:
string_with_newlines = """something
someotherthing"""

import re

print(re.match('some', string_with_newlines))        # matches
print(re.match('someother', string_with_newlines))   # won't match
print(re.match('^someother', string_with_newlines,
               re.MULTILINE))                        # also won't match
print(re.search('someother', string_with_newlines))  # finds something
print(re.search('^someother', string_with_newlines,
                re.MULTILINE))                       # also finds something

m = re.compile('thing$', re.MULTILINE)
print(m.match(string_with_newlines))         # no match
print(m.match(string_with_newlines, pos=4))  # matches
print(m.search(string_with_newlines))        # also matches
```

## The Answer 2

search ⇒ find something anywhere in the string and return a match object.

match ⇒ find something at the beginning of the string and return a match object.

## The Answer 3

> match is much faster than search, so instead of doing regex.search("word") you can do regex.match((.*?)word(.*?)) and gain tons of performance if you are working with millions of samples.

This comment from @ivan_bilan under the accepted answer above got me thinking whether such a hack actually speeds anything up, so let's find out how many tons of performance you will really gain. I prepared the following test suite:

```python
import random
import re
import string
import time

LENGTH = 10
LIST_SIZE = 1000000

def generate_word():
    word = [random.choice(string.ascii_lowercase) for _ in range(LENGTH)]
    return ''.join(word)

wordlist = [generate_word() for _ in range(LIST_SIZE)]

start = time.time()
[re.search('python', word) for word in wordlist]
print('search:', time.time() - start)

start = time.time()
[re.match('(.*?)python(.*?)', word) for word in wordlist]
print('match:', time.time() - start)
```

I made 10 measurements (1M, 2M, …, 10M words) and plotted run time against word count. The resulting lines are surprisingly (actually not that surprisingly) straight, and the search function is (slightly) faster given this specific pattern combination. The moral of this test: avoid overoptimizing your code.

## The Answer 4

re.search searches for the pattern throughout the string, whereas re.match matches only at the start of the string: if the pattern is not there, it has no other choice than to fail.

## The Answer 5

You can refer to the example below to understand the working of re.match and re.search:

```python
a = "123abc"
t = re.match("[a-z]+", a)   # returns None
t = re.search("[a-z]+", a)  # matches "abc"
```

re.match will return None, but re.search will return "abc".

## The Answer 6

The difference is, re.match() misleads anyone accustomed to Perl, grep, or sed regular expression matching, and re.search() does not. :-)

More soberly, as John D. Cook remarks, re.match() "behaves as if every pattern has ^ prepended." In other words, re.match('pattern') equals re.search('^pattern'). So it anchors a pattern's left side. But it also doesn't anchor a pattern's right side: that still requires a terminating $.

Frankly, given the above, I think re.match() should be deprecated. I would be interested to know reasons it should be retained.

## The Answer 7

re.match attempts to match a pattern at the beginning of the string. re.search attempts to match the pattern throughout the string until it finds a match.

## The Answer 8

Much shorter:

- search scans through the whole string.
- match scans only the beginning of the string.

The following example says it:

```python
>>> a = "123abc"
>>> re.match("[a-z]+", a)   # None
>>> re.search("[a-z]+", a)  # matches "abc"
```
"The original music is composed by Szymon Brzoska and Ruben Lebaniegos and enhances the cultural mix on stage with its assorted elements of flamenco, classical, medieval and Arabic music. It is performed live and perfectly complements the choreography."
In Dunas, Sidi Larbi Cherkaoui returns with another of his remarkable cross-cultural collaborations, this time developed with flamenco superstar Maria Pages. We travelled all together to the Moroccan desert to draw inspiration from its endlessly shifting landscapes and sandy dunes. It was very exciting to team up with original flamenco musicians and start a dialogue between my own musical language, Spanish rhythms of the flamenco footwork and beautiful voice of El-Arabi Serghini, who features in Lament, one of the pieces I wrote for Dunas.
Missing the Editorial Boat Redux
Filed under: Editorial Matters,Professional Editors — Rich Adin @ 8:49 am
Tags: book packagers, copyediting, editor, Freelance Editorial Services, pricing, professional editor, professional editors, publishers, publishing business model, wordsnSync
I received several private comments regarding my Missing the Editorial Boat article, with all demonstrating that the primary point is being missed.
What is it that packagers offer American publishers? They offer (1) complete (or near complete) production services at a price that is less than what it would cost the American publisher to do the same work in-house and (2) convenience. It appears that readers grasped the first concept but not the second, yet it is the second that is the most important for editorial freelancers.
Traditionally, an in-house production editor would have x number of books that he or she would have to shepherd through the production process in a year. As the publishing industry consolidated in the 1990s, the in-house production editor's workload increased. Instead of having to occasionally hire a freelance editor, for example, hiring freelance editors became the norm, a necessity even — yet the in-house production editor had to monitor each hired freelancer's work. What happened is that the role played by the in-house editor changed from editing to managing.
This workload increased greatly as the years passed and the demand for more profit by the parent company had to be met. It reached a point where the in-house production editor could no longer manage all of the titles for which he or she needed to be responsible in order to meet the corporate bottom-line goals, in the sense that the production editor could no longer properly manage all of the individual freelancers needed to be hired to get the work done. In addition, freelancer costs were rising.
The solution was the packager who offered to undertake the management burden as well as the production burden at a price that was often less than the publisher's current costs. The packager's lower cost came about in two ways: first, by moving the mechanical production outside the United States to developing countries where costs were significantly lower. And second, by putting the burden of meeting that lower cost on the freelancer; after all, the packager's in-house costs, although less than that of the publishers it dealt with, was/is still a fixed cost. The cost of the freelancer, however, was/is a flexible cost.
Conversations with publishers tell me that the packager situation is less than ideal and that quality of output has declined, but there is no viable alternative for the publisher. Publishers are still being squeezed between costs and profit demands, so they are trying to publish more books with fewer in-house staff. And it certainly is less than ideal for editorial freelancers who get price squeezed. But the convenience factor, when added to the lower bid price of the packager, makes packaging a sensible choice for publishers. Take away the convenience factor, and the packager is not necessarily the best alternative.
Just so it is clear, the convenience factor is the convenience of having a third-party manage all of the freelancers the publisher needs to get the books edited. Packagers have undertaken the role of the in-house production editor in this regard, and now, when a publisher sends a book or several books to a packager, the publisher only needs to speak with one person even if there are 15 freelance editors working on the publisher's books. This is convenience, as well as a lower cost to publishers.
The idea behind partnering is to level the playing field as regards convenience. There still needs to be price competition, but that is another matter. To get to that point, freelancers first need to overcome the hurdle of convenience.
Think about the editorial boat article in that light.
England Men
Jonathan Trott holds up India in first Test at Lord's
By Sam SheringhamBBC Sport at Lord's
Last updated on 21 July 2011. From the section Cricket
First Test, Lord's (day one):
England 127-2 v India
Mahendra Dhoni and Rahul Dravid fail to hang onto an edge from Jonathan Trott
England rode their luck to reach 127-2 after being put into bat in testing conditions on a rain-hit opening day of the first Test against India at Lord's.
Jonathan Trott was dropped by Rahul Dravid on eight and edged between wicketkeeper and first slip on his way to 58 not out before the weather ended proceedings.
To make matters worse for India, left-arm seamer Zaheer Khan - who had earlier removed England openers Alastair Cook and Andrew Strauss - was forced off the field with what appeared to be a hamstring problem.
Any serious injury to the tourists' number one seam bowler would dramatically increase England's chances of winning the four-match series. If Strauss's men can triumph by two clear Tests they will usurp India as the number one ranked Test side.
India captain Mahendra Singh Dhoni had no hesitation in choosing to bowl first as the 2,000th Test match in history got underway with grey clouds looming over the home of cricket.
Geoffrey Boycott, former England batsman and BBC summariser
I don't think it was a bowl first pitch. You have to be sure you can bowl a team out in a day and India didn't look like doing that. England's batsmen got a bit lucky with some of their shots, but they are sitting pretty only two wickets down
The decision was immediately justified as Praveen Kumar - a 24-year-old in his fourth Test - beat Cook's bat three times in the opening over.
Zaheer generated plenty of movement from the Nursery End and claimed the first wicket when an inswinger trapped Cook leg before wicket for 12, the first time the Essex opener had been dismissed for fewer than 55 since December.
Captain Strauss, who escaped being run out for two when Ishant Sharma's throw missed the stumps, looked nervous and tentative as he sought to end a run of low Test scores.
But after prodding his way to 22 off 83 balls, he was tempted by a short ball from Zaheer soon after lunch and top-edged an attempted hook to Sharma at fine leg.
Trott was totally bamboozled by off-spinner Harbhajan's first delivery of the series, the ball clipping the edge of his bat and glancing off Dravid's outstretched right hand.
The Warwickshire man had reached 32 when a snorter from Zaheer ripped off the seam, took the edge and flew between Dhoni and Dravid.
The ever-reliable Trott reaches his half century at Lord's
Zaheer left the field midway through his next over after clutching his hamstring during his follow-through.
In his absence, Trott and Kevin Pietersen took their partnership to 65, the latter playing and missing several times and looking jumpy at the crease.
Attempting to make his presence felt, he tried to smash Harbhajan out of the ground and was fortunate that Sharma misjudged the catch at long-on.
With an improved forecast for the remainder of the match, England will be looking to post an imposing total before unleashing their trio of seamers - with Stuart Broad preferred to Tim Bresnan - on India's star-studded batting line-up.
Listen to Jonathan Agnew and Geoff Boycott's review of each day's play on the TMS Podcast page.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,011
|
\section{Introduction}
\label{preface}
Recently, contributing to the 2009 annual survey of the EDGE
foundation \cite{edge} entitled ``What will change everything''
\cite{sejnowski94:_comput_brain}, T. Sejnowski foresees that
computers will be the ``microscopes of the future'' and,
specifically, that it is computers that have ``made it possible to
localize single molecules with nanometer precision and image the
extraordinarily complex molecular organization inside cells''. In
his contribution, Sejnowski recognizes the importance of automatic,
computerized control of laser beams and of image analysis in modern
optical microscopy. More importantly, however, computers can also be
considered ``future microscopes'' for their capability of
simulating, at the atomic scale, the behavior of matter and
biological systems. To reach
this goal, which was unthinkable even a few decades ago, it is
necessary to develop software able to simulate the Newtonian
dynamics of a large number of atoms (order $10^{12}$, the number of
atoms in a cell) for a long enough time (order of a microsecond, or
$10^{10}$ time steps) \cite{rapaport04:_art_of_molec_dynam_simul,marx09:_ab_initio_molec_dynam,frenkel01:_under_molec_simul_secon_edition}. State-of-the-art supercomputers, on the hardware side, and parallel
simulation codes, on the software side, are nowadays on the way to
achieving this result. However, before facing the ultimate problem of simulating the complexity of
living (micro-)organisms, one should validate and optimize codes that
simulate hard (physical and chemical) systems. Even if smaller than a micro-organism,
these systems encompass problems which are extremely demanding in terms of computational resources, with, e.g., simulation boxes containing pico-molar quantities of matter and characteristic times of $0.1$ microseconds. Molecular Dynamics (MD) \cite{rapaport04:_art_of_molec_dynam_simul} is expected to be the key to efficiently solving these types of problems.
As each generation of computers is introduced, in fact, larger and longer
simulations are allowed to be run, thus producing better results
which answer a variety of new scientific questions. The latest
generation of terascale and petascale supercomputers (see e.g.,
\cite{httpnscc,httpornl,httpjuge}), in particular, holds the promise to enable
the development of more realistic and complex interactions, as
well as the study of systems made by a very large number ($\approx 10^{10}$) of
particles. In the most common paradigm used today, called
massively parallel processing, the typical size of computer
clusters ranges from several thousand to hundreds of thousands of
processing core units. In these architectures, a high degree of
parallelization is essential to efficiently utilize the
computational power available. While in principle a high level
parallelization strategy works fine for small to medium size
supercomputer clusters, tuning for specific architectures can be
the key to achieve huge scaling performance. The design concepts
of today's processors, in fact, are markedly different from one
system to another, and it is necessary to prepare codes having
specific architectures in mind in order to optimize both the speed
and the bandwidth of memory access, which is typically slow if
compared to the processor's frequency.\\
With reference to molecular dynamics, several methods have been
published in the past which incorporate different degrees of
parallelism \cite{rapaport04:_art_of_molec_dynam_simul}. To date,
MD scaling has been demonstrated up to ten thousand of cores, with
a speed of about 7 iterations per second for an ensemble of 1
billion particles running on 65536 cores of a BlueGene/L
\cite{kadau06:_molec_dynam_comes_of_age}. For several applications
of molecular dynamics, such as the study of structural glasses
\cite{elliot83:_physic_of_amorp_mater} where a typical simulation
requires $10^{6-7}$ time steps, this speedup is still too small
to perform practical calculations. In the study of glasses, and in
the more general area of amorphous materials, molecular dynamics
simulations are extremely important as they are able to glimpse
the system dynamics at spatial scales between $1-100$ nm, a range
that is completely inaccessible with experimental apparatus
\cite{sette98:_dynam_of_glass_and_glass,shintani08:_univer_link_between_boson_peak}.
However, due to such speed bottleneck problems, present MD studies
on glass have been limited to a number of particles (10 million
\cite{monacoa09:_anomal_proper_of_acous_excit}) unable to have
simulation boxes large enough to capture the interesting
phenomenology at the micron scale. Therefore, if on the one hand it is
necessary to improve the scaling of general MD codes by at least two
orders of magnitude, in order to effectively use the computing
power available, on the other it is also important to focus on
specific research fields, such as that of amorphous
materials, which are now demanding new levels of performance.\\
In this article we report the results of our recently developed
Billions Body Molecular Dynamics ($\mathsf{BBMD}$) package, and
demonstrate its effectiveness in the study (as case study) of
structural glasses by analyzing the glass formation of an
exceptionally large particle system. The $\mathsf{BBMD}$ code was
able to scale on all $294912$ cores of the BlueGene/P
system at the J\"ulich Supercomputing Centre, the world's largest
supercomputer available, comprising 72 racks of an IBM BlueGene/P.
In this extreme scaling test, the
$\mathsf{BBMD}$ code showed an efficiency of about 90\% and an
overall speed of 2 seconds per iteration with 100 billion particles.
These results pave the way to the study of very large systems. In
order to demonstrate the applicability of our code to the field of
amorphous materials, we performed a controlled temperature MD
simulation of a system made by 1 billion particles, and studied
the supercooled dynamics of the liquid state by varying the
temperature in the range $T\in[2,10^{-4}]$. This simulation has
been performed on the Shaheen supercomputer, hosted at KAUST
University and consisting of 16 racks of an IBM BlueGene/P. This
paper is organized as follows. We begin our analysis by discussing
the structure of the $\mathsf{BBMD}$ code (Sec. \ref{code}),
reviewing both parallelizations and optimization strategies. In
Sec. \ref{scaling}, we describe $\mathsf{BBMD}$ scaling results
obtained at the J\"ulich supercomputing center. We report the code
speedup for molecular systems of different size, ranging from 1
billion to 100 billion particles. Communication workload versus
calculation execution time is also studied. In Sec. \ref{glassy},
finally, we investigate the glassy dynamics of a 1 billion
particles system made by a binary mixture of soft-spheres.
\begin{figure}
\includegraphics[width=12 cm]{f1.eps}
\caption{\label{f1} Main structure of the $\mathsf{BBMD}$
parallel code: (right) the simulation domain is spatially
decomposed into rectangular boxes, each defining a single
MPI process (center) that, in turn, is composed of
several cubic MD cells of side $r_c$ (left). Each MD cell
contains a bidirectional list with all particle information:
position $\mathbf{x}$, velocity $\mathbf{v}$, acceleration
$\mathbf{a}$, mass $m$ and species $s$. }
\end{figure}
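The cell bookkeeping sketched in Fig. \ref{f1} can be illustrated in a few lines. The following toy Python version is our own illustration only (BBMD itself is written in C++, and all names here are ours); it shows how particles are binned into cubic cells of side $\geq r_c$ in a periodic box:

```python
# Toy illustration of linked-cell bookkeeping (hypothetical names; BBMD is C++).
def build_cells(positions, box, rc):
    """Assign particle ids to cubic cells of side >= rc in a periodic box."""
    n = max(1, int(box / rc))       # number of cells per side
    cells = {}                      # (ix, iy, iz) -> list of particle ids
    for pid, (x, y, z) in enumerate(positions):
        key = (int(x / box * n) % n,
               int(y / box * n) % n,
               int(z / box * n) % n)
        cells.setdefault(key, []).append(pid)
    return n, cells

# Two particles in a box of side 10 with rc = 2.5 land in cells
# (0,0,0) and (3,0,0); any pair closer than rc is guaranteed to sit
# in the same or in adjacent cells, so the force loop scans at most
# the 27 neighboring cells per particle.
n, cells = build_cells([(0.1, 0.1, 0.1), (9.9, 0.1, 0.1)], box=10.0, rc=2.5)
```

This is the property that makes the cell decomposition efficient: the pairwise search becomes linear in the number of particles.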
\section{The $\mathsf{BBMD}$ code}
\label{code} The $\mathsf{BBMD}$ code is a highly optimized,
parallel C++ MD code for Lennard-Jones particles systems, designed
to scale on machines characterized by hundreds of thousands of
processors, such as the latest generation of IBM BlueGene/P
supercomputers. Much effort was taken to balance design simplicity
and code speed, while optimizing at the same time both memory
requirements and cache efficiency. In the following, we briefly
describe the main structure of the code.
\subsection{Interaction potential}
$\mathsf{BBMD}$ was originally designed to support two main classes
of short-range interaction potentials:
\begin{itemize}
\item The Lennard-Jones potential
\begin{equation}
\label{lj0}
V(r_{ij})=4\epsilon_{\alpha\beta}\bigg[\bigg(
\frac{\sigma_{\alpha\beta}}{r_{ij}} \bigg)^{12}-\bigg(
\frac{\sigma_{\alpha\beta}}{r_{ij}} \bigg)^{6} \bigg]
\end{equation}
\item The soft-sphere potential
\begin{equation}
\label{sc0}
V(r_{ij})=4\epsilon_{\alpha\beta}\bigg( \frac{\sigma_{\alpha\beta}}{r_{ij}} \bigg)^{12}
\end{equation}
\end{itemize}
where $\sigma$ and $\epsilon$ are two tensors that parametrize the
potentials, and $r_{ij}$ is the distance between particles $i,j$ of
species $\alpha$ and $\beta$, respectively. Since Eqs.
(\ref{lj0})-(\ref{sc0}) converge rapidly to zero beyond
$r_{ij}\approx ||\sigma||$, it is wasteful to consider an
interaction between two particles at a long distance. A standard
choice in MD is therefore to truncate the potentials beyond the
distance $r_c$, in order to increase the calculation speed. To avoid
any problem due to the discontinuity of $V(r)$ at $r=r_c$, we
replace Eqs. (\ref{lj0})-(\ref{sc0}) with the following potential:
\begin{equation}
\label{mod0}
V^*(r)=\bigg[V(r)-V(r_c)-(r-r_c)\frac{dV(r)}{dr}\bigg|_{r=r_c}\bigg]\big[1-\Theta(r-r_c)\big]
\end{equation}
with $\Theta(x)$ being the Heaviside function. With the modification
(\ref{mod0}), both the potential and its first derivative vanish
continuously at $r=r_c$. The overall spatial domain is then decomposed
into cubic MD cells of volume $r_c \times r_c \times r_c$ (Fig. \ref{f1}).
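As an illustration, the shifted-force construction of Eq. (\ref{mod0}) can be sketched in C++ for the soft-sphere case; the parameter values used here ($\sigma=\epsilon=1$) are illustrative placeholders, not the production settings of $\mathsf{BBMD}$:

```cpp
#include <cmath>

// Soft-sphere potential V(r) = 4*eps*(sigma/r)^12 and its derivative.
double V(double r, double sigma = 1.0, double eps = 1.0) {
    double sr6 = std::pow(sigma / r, 6.0);
    return 4.0 * eps * sr6 * sr6;
}

double dVdr(double r, double sigma = 1.0, double eps = 1.0) {
    double sr6 = std::pow(sigma / r, 6.0);
    return -48.0 * eps * sr6 * sr6 / r;
}

// Shifted-force potential V*(r): both V* and its derivative go
// continuously to zero at the cutoff r_c, so no force discontinuity
// appears when the interaction is truncated.
double Vstar(double r, double rc, double sigma = 1.0, double eps = 1.0) {
    if (r >= rc) return 0.0;  // the [1 - Theta(r - r_c)] factor
    return V(r, sigma, eps) - V(rc, sigma, eps)
         - (r - rc) * dVdr(rc, sigma, eps);
}
```

Truncating without the linear shift term would leave a jump in the force at $r=r_c$, which degrades energy conservation in long runs.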
\subsection{Grid-search method}
To perform the time evolution of the system, forces between
particles have to be calculated and particle pairs whose distance
is below the cutoff range $r_c$ have to be found. For this task,
we adopt an $O(N)$ linked-list method, with an inexclusive
\cite{rapaport04:_art_of_molec_dynam_simul} grid that allows more
than one particle to occupy a single cell. Newton's third law is
then applied in order to halve the number of neighbors to be
checked in the calculation of the forces. In order to guarantee
optimal memory efficiency, we developed a bidirectional list
structure that contains all the information of the particles:
position $\mathbf{x}$, velocity $\mathbf{v}$, acceleration
$\mathbf{a}$, mass $m$ and species index $s$
(Fig. \ref{f1}). The double pointer mechanism allowed an efficient
implementation of memory-related operations (e.g., moving a
particle to a different cell or to a different processor) without
deleting or creating memory locations, but simply by moving pointers,
which is very fast. Another advantage of such memory structure is
that it automatically links together particles which are close in
space, thus improving speed efficiency in all search operations.
To keep the memory requirements as low as possible, we do not
employ any bookkeeping method, such as the neighboring-particle
list \cite{rapaport04:_art_of_molec_dynam_simul}, which is too
memory demanding for billion-particle systems.
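A minimal serial sketch of such an $O(N)$ cell-based binning could look as follows (illustrative only: the actual $\mathsf{BBMD}$ structure is a bidirectional particle list, not a vector of vectors):

```cpp
#include <vector>

// Bin particle ids into cubic cells of side >= r_c, so that the force
// loop only needs to scan each cell and its 26 neighbors: O(N) overall.
struct CellGrid {
    int nc;                               // cells per box side
    double cell;                          // cell edge length
    std::vector<std::vector<int>> cells;  // particle ids per cell
    CellGrid(double box, double rc)
        : nc(int(box / rc)), cell(box / nc), cells(nc * nc * nc) {}
    int index(double x, double y, double z) const {
        int ix = int(x / cell) % nc, iy = int(y / cell) % nc,
            iz = int(z / cell) % nc;
        return (iz * nc + iy) * nc + ix;
    }
    void add(int id, double x, double y, double z) {
        cells[index(x, y, z)].push_back(id);
    }
};
```

Because the cell edge is at least $r_c$, every interacting pair is guaranteed to lie in the same cell or in adjacent cells, which is what makes the linked-cell search linear in $N$.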
\subsection{Parallelization}
\subsubsection{Parallelization scheme}
$\mathsf{BBMD}$ has been parallelized with the domain
decomposition strategy, also known as spatial decomposition, where
the simulation box is divided into subdomains and each subdomain is
assigned to a processor (Fig. \ref{f1}). The Message Passing
Interface (MPI) standard is then employed to handle all parallel
communications among processes. This type of parallelization has
the advantage of requiring only nearest-neighbor communications,
with a few limited global collective operations, and it is
therefore well suited for very large supercomputer clusters such
as the IBM BlueGene series.
\subsubsection{Parallel force calculation}
\label{pfc} $\mathsf{BBMD}$ employs the velocity-Verlet time
marching algorithm \cite{rapaport04:_art_of_molec_dynam_simul},
which is a standard in MD algorithms due to its robustness and
accuracy in maintaining conserved quantities, such as the energy
and the momentum. In this evolution scheme, the hot spot of the
algorithm is the computation of the forces exerted among
particles. To perform this calculation, two parallel operations
are required: (i) moving particles that stray from their originally
assigned process and (ii) exchanging particles that lie on the
borders between different processors. In $\mathsf{BBMD}$,
particular care has been taken in the design of (i) and (ii) in
order to overlap communications and calculations to the maximum
extent possible, while optimizing speed and cache efficiency. More
specifically, the parallel communication starts with the operation
(i) and then proceeds with (ii). In both steps, a one-dimensional
array containing particle properties (i.e., $\mathbf{x}$,
$\mathbf{v}$, $\mathbf{a}$, $m$, $s$) needs to be constructed, and
the number of particles to be sent to neighboring processors has
to be calculated. In $\mathsf{BBMD}$, these two operations are
overlapped with $\mathsf{send/recv}$ MPI communication routines in
order to minimize communication time with respect to calculation.
Task (i) begins by tagging all the particles that belong to
different processors, and sorting them on the fly, thus minimizing
access times in subsequent MPI communications. This is done by
exploiting the bidirectional nature of the particle list. In
particular, each particle that needs to be sent to a different
process is first moved to the tail of its cell list with an $O(1)$
operation, and then tagged by inverting its mass sign. Such
tagging procedure avoids the use of an external index and
increases the memory efficiency of the code. Besides that, this
method automatically groups tagged particles together, and allows
them to be accessed sequentially with $O(1)$ operations when MPI
$\mathsf{send/recv}$ communications are performed. Once task
(i) has been completed, the exchange of nearest-neighbor particles is
done with the standard ghost-cell approach
\cite{marx09:_ab_initio_molec_dynam}. In both tasks (i) and (ii),
each MPI process is required to communicate with its 26
neighbors. Although characterized by nearest-neighbor
communication only, a naive implementation of this algorithm
requires 26 different communications and results in poor scaling
performance when a large number of processors is employed.
However, by taking advantage of the domain decomposition strategy
employed in $\mathsf{BBMD}$, it is possible to reduce the number
of total communications to just 6. This is achieved by properly
enlarging the communication window, and in particular by
exchanging part of the ghost cells during each MPI communication
(see e.g., \cite{frenkel01:_under_molec_simul_secon_edition} for
more details).
\begin{figure}
\centering
\includegraphics[width=9 cm]{f2.eps}
\caption{\label{f2} $\mathsf{BBMD}$ strong scaling results: code speed versus number of processors.}
\end{figure}
\subsubsection{BlueGene specific optimizations}
In the calculation of the $\mathsf{sqrt}$ function, required by the computation of Eq. (\ref{mod0}), we employ the BlueGene reciprocal square root function $\mathsf{frsqrte}$, coupled with two Newton-Raphson iterations. More specifically, we replaced the code segment:
\begin{verbatim}
rij=sqrt(r2);
\end{verbatim}
with:
\begin{verbatim}
rij = frsqrte(r2);
rij = ((0.5 * rij) * (3.0 - r2 * (rij * rij)));
rij = ((0.5 * rij) * (3.0 - r2 * (rij * rij)));
rij = rij * r2;
\end{verbatim}
This optimization results in a speed increment of about 7\%.
\section{$\mathsf{BBMD}$ scaling results}
\label{scaling} The evaluation of the $\mathsf{BBMD}$ code
performance has been carried out on the Jugene system at the
J\"ulich Supercomputing Center \cite{httpjuge}, which is composed
of $294912$ cores (or 72 racks) of an IBM BlueGene/P with total
peak performance of 1 PFlops. The test suite consisted of a series
of canonical molecular dynamics simulations of a $20:80$ binary
mixture of soft spheres with the following parameters (here all
the units are to be understood as normalized MD units
\cite{frenkel01:_under_molec_simul_secon_edition}):
$m_i=1$, $\sigma_{11}=1$, $\sigma_{12}=0.8$, $\sigma_{22}=0.88$, $\epsilon_{11}=1$, $\epsilon_{12}=1.5$, $\epsilon_{22}=0.5$. In the canonical evolution, the temperature $T$ has been fixed to $T=0.5$, and the density $\rho$ to $\rho=1.2$ in order to maintain the system in the liquid state with all the particles randomly fluctuating among different processors. Figure \ref{f2} displays $\mathsf{BBMD}$ strong scaling results for systems characterized by $1, 10$ and 100 billion particles. Although the memory footprint of $\mathsf{BBMD}$ permits the number of particles to be increased by up to two orders of magnitude, we concentrated on a range where the code speed was sufficiently fast to implement realistic calculations. In the smallest case of a system with 1 billion particles, $\mathsf{BBMD}$ was able to scale well beyond 10 racks and obtained an efficiency of 68\% on 16 racks when compared to a single rack. When the number of particles was increased from 1 billion to 10 billion, an efficiency of 70\% was achieved between 1 rack and 64 racks. This number was improved even further for systems containing 100 billion particles. In this configuration, $\mathsf{BBMD}$ reached an efficiency of 89\% on 72 racks (or 294912 cores) when compared to 8 racks. \\
\begin{figure}
\centering
\includegraphics[width=9 cm]{f3.eps}
\caption{\label{f3} $\mathsf{BBMD}$ weak scaling results: code speed versus number of particles.}
\end{figure}
To investigate the weak scaling performance of the code, we
performed a series of runs with a fixed number of cores and with
system sizes varying between $N=1$ billion and $N=100$ billion.
Figure \ref{f3} shows the result of such analysis. When the
number of particles is increased by up to two orders of magnitude,
the $\mathsf{BBMD}$ code shows a perfect linear $O(N)$ complexity.
The improved scaling is due to the proportion of computation with
respect to communication time, which appreciably increases when
the number of particles grows (Fig. \ref{f4}).
\begin{figure}
\centering
\includegraphics[width=9 cm]{f4.eps}
\caption{\label{f4} $\mathsf{BBMD}$ computation versus communication workload for systems with increasing particle number $N$. The measurements have been performed with simulations of 1000 time steps on 16 racks.}
\end{figure}
As seen in Fig. \ref{f4}, the communication impact over computation
is practically negligible and the overall execution time is dominated
by calculations. This is not only the result of the three-dimensional
spatial domain decomposition employed, which significantly reduces the
volume of each MPI process and the number of elements to be exchanged
during each iteration, but also relies on the specific overlapping
strategy used in the parallel calculation of the forces (Section \ref{pfc}),
which minimizes $\mathsf{MPI\_Waitall}$ times.\\
In the case of a system with 1 billion particles, we measured a
code speed of 0.14 seconds per iteration, which is sufficiently
small to enable the study of billion-body structural glasses. Such
a system size is two orders of magnitude larger than the largest
glass simulation ever reported, and will improve by up to two orders
of magnitude the resolution of MD measurements on glassy dynamics
\cite{monacoa09:_anomal_proper_of_acous_excit}.
\section{A glass transition in a 1 billion binary mixture of soft spheres}
\label{glassy}
\subsection{A few words about the glass phase}
When the temperature of a liquid is reduced below the melting
temperature, two possible physical phenomena may occur: either the
system undergoes \emph{crystallization} -a first order phase
transition- where the final ordered configuration is the
thermodynamically stable phase, or else the liquid may become
\emph{supercooled} and get dynamically arrested into a disordered
solid represented by a glass. Despite a vast scientific
literature, the phenomenology of the glass state is still far from
being completely understood, and many aspects are yet unknown. As
an example, it is not yet clear whether the dynamical arrest is a
genuine thermodynamic phase transition, a kinetic phase transition
or something else, such as the real-world counterpart of a phase
transition taking place in the trajectories' phase space, as
recently suggested by Chandler and co-workers \cite{Chandler}. In
the last decades, in particular, several different theories have
been suggested and various diverse analyses have been pursued
\cite{sette98:_dynam_of_glass_and_glass,shintani08:_univer_link_between_boson_peak,
angell95:_format_of_glass_from_liquid_and_biopol,tarjus01:_ammin_and_rheol,debenedetti01:_super_liquid_and_glass_trans,cavagna09:_super_liquid_for_pedes}.\\
Notwithstanding the wide diversity of views, a consensus exists
over the consideration that a glass transition is a dynamical
crossover through which a viscous liquid falls out of
equilibrium and becomes solid on the experimental time scale. Such
a process is manifested in a gradual change in the slope of the
volume (or other extensive thermodynamic variables such as the
entropy or the enthalpy) at a specific temperature $T_g$, which
defines the \emph{glass-transition} temperature. A fundamental
property of glasses is the existence of a high degree of
\emph{frustration} in their ground state energy configuration
\cite{MPVBook}. Frustration, in turn, is manifested by the
existence of a huge number of minima of equivalent energy (metastable
states). When the temperature decreases below a specific
threshold, the energy barriers between the various minima become
sufficiently large to trap the dynamics in the configuration space
and let it explore only a subset of the available iso-energy
surface. The result of this dynamical arrest is a disordered solid
that defines the glass phase
\cite{cavagna09:_super_liquid_for_pedes}.
\begin{figure}
\centering
\includegraphics[width=8 cm]{f5.eps}
\caption{\label{f5} Temperature $T$ curve as a function of the time step employed in the study of the liquid-glass transition in a 1 billion particles system.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9 cm]{f6.eps}
\caption{\label{f6} Caloric curves $U(T)$ obtained from MD simulations (circle markers) and relative least square fit (solid and dashed lines). Figure (b) is an enlarged section of (a) near the glass temperature $T_g$.}
\end{figure}
\subsection{Sample setup and simulation results}
It is worth mentioning that, on cooling a liquid, if
crystallization is avoided, the atomic dynamics is characterized
by a relaxation time (naively, one can think of the typical time
needed to change the inherent ``configuration'') that increases
following an Arrhenius law in strong glass-forming systems, and
even faster in fragile glass-forming systems \cite{Angell,
Ruocco}, reaching the value of 100 s at the glass transition
temperature. A supercooled liquid above, but close to, the glass
transition temperature can be considered at equilibrium only if
its atomic dynamics is investigated for a long time, longer than the
relaxation time, which is a macroscopic time. As a consequence, during
an MD run, which necessarily lasts for times much shorter than the
relaxation times close to the glass transition, one falls out of
equilibrium as soon as the temperature reaches the value at which the
relaxation time corresponds to the simulation time. As a
consequence, in an MD simulation the temperature at which the system
becomes trapped in a specific inherent structure is considerably higher
than the real glass transition temperature. In a molecular
dynamics simulation, a liquid-glass transition can then be observed as
a continuous transition of extensive thermodynamic parameters
such as the energy or the specific volume.\\
To study this transition, we employed the same binary mixture used
for our benchmark suite in Section \ref{scaling}: the high density
$\rho=1.2$ and the $20:80$ highly asymmetric mixture
configuration, in fact, suppress the existence of a well-defined
crystalline phase for the system, and the low-temperature ground state
becomes highly frustrated
\cite{angelani04:_saddl_and_softn_in_simpl_model_liquid}. We
therefore expect to observe a glass transition in the
dynamics as soon as the temperature is decreased to a sufficiently
small value. Figure \ref{f5} shows the temperature curve employed for
our MD simulation. At the beginning, we rapidly increase the
system temperature to a high value $T=3$. In this state, the two
particles species have sufficient kinetic energy to freely diffuse
over the whole simulation box. After $200000$ time steps, the
temperature starts to decrease by following a slowly-varying
linear curve, whose duration is $3\cdot 10^6$ time steps. In
order to guarantee stability and energy conservation over the
whole simulation, we adopt a time resolution of $\delta
t=10^{-3}$ (in MD units). The cutoff range has been set to $r_c=1.2$, which
guarantees a sufficiently large interaction distance to observe
the formation of a supercooled liquid.
\begin{figure}
\centering
\includegraphics[width=7 cm]{f7.eps}
\caption{\label{f7} A $16000$-particle portion of the resulting glass at $T=10^{-3}$. The black particles belong to species $1$, the gray ones to species $2$.}
\end{figure}
Figure \ref{f6} displays the caloric curve of the system, and
shows the behavior of the potential energy $U$ versus the
temperature $T$. For temperature values well above $T_g=0.2$, the
system is in the liquid state and follows a continuous power law
curve with $T=T^\frac{3}{5}$, as found from a nonlinear least
square fit procedure applied on the energy $U$ and as expected
from the theory of Tarazona \cite{Tarazona}. As we progressively
reduce the temperature beyond the value $T_g$, we observe the
appearance of a continuous transition, characterized by a
radical change in the derivative of $U$ with respect to $T$. For
temperature values below $T_g$, which defines the glass
temperature \cite{elliot83:_physic_of_amorp_mater}, the energy $U$
varies linearly with the temperature and the system gets trapped
into an arrested phase with all the particles oscillating almost
harmonically around their equilibrium configuration. Figure
\ref{f7} displays a $16000$-particle portion of the billion-body
solid formed at $T=10^{-3}$. As expected from the thermodynamic
analysis based on the caloric curve, the solid configuration
reached is that of a structural glass, whose particles are
randomly arranged in space. The structural properties of the glass
are analyzed by calculating the radial distribution function
$g(r)$, defined as:
\begin{equation}
g(r)=\frac{2V}{N^2}\bigg\langle \sum_{i<j}\delta(r-r_{ij})\bigg\rangle,
\end{equation}
where $V$ is the sample volume, $N$ the total number of particles,
$r_{ij}$ the distance between particles $i$ and $j$, and
$\langle\ldots\rangle$ denotes an ensemble average. The radial distribution function
describes the spherically averaged local organization around a
specific atom, and it measures the probability of finding an atom
at a distance $r$ from a given particle. Figure \ref{f8} displays
the $g(r)$ for the billion-body glass obtained at $T=0.001$. The
glass is characterized by a sharp peak at $r\approx 1$, which
yields the average minimum interparticle distance at equilibrium,
followed by two broader peaks and a series of oscillations of
decreasing amplitude around the asymptotic value
$g(r\rightarrow\infty)=1$. The presence of well-defined broad
peaks in the radial distribution function is the hallmark of a
structurally disordered phase, with particles oscillating around
randomly arranged spatial sites. These correlations are smeared out
as the interparticle distance increases, and are dramatically
reduced beyond $r\approx 4$, thus indicating the
lack of any long-range positional order in the system.
The $g(r)$ reported in Fig. \ref{f8} compares favorably with
previous determinations in soft-sphere systems.
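The $g(r)$ computation itself reduces to a pair-distance histogram with the minimum-image convention; a minimal serial sketch is shown below (the production analysis on a billion particles is, of course, parallel):

```cpp
#include <array>
#include <cmath>
#include <vector>

// Radial distribution function g(r): histogram all pair distances
// (minimum-image convention in a cubic box of side `box`), then
// normalize each shell by the ideal-gas count N*rho*4*pi*r^2*dr.
std::vector<double> radial_distribution(
    const std::vector<std::array<double, 3>>& x,
    double box, int nbins, double rmax) {
    const double dr = rmax / nbins;
    const double rho = x.size() / (box * box * box);
    const double pi = std::acos(-1.0);
    std::vector<double> g(nbins, 0.0);
    for (std::size_t i = 0; i < x.size(); ++i)
        for (std::size_t j = i + 1; j < x.size(); ++j) {
            double r2 = 0.0;
            for (int d = 0; d < 3; ++d) {
                double dx = x[i][d] - x[j][d];
                dx -= box * std::round(dx / box);  // minimum image
                r2 += dx * dx;
            }
            double r = std::sqrt(r2);
            if (r < rmax) g[int(r / dr)] += 2.0;   // pair counts for i and j
        }
    for (int b = 0; b < nbins; ++b) {
        double r = (b + 0.5) * dr;                 // shell midpoint
        g[b] /= x.size() * rho * 4.0 * pi * r * r * dr;
    }
    return g;
}
```

With this normalization, an ideal gas gives $g(r)=1$ at all distances, so the peaks and oscillations described above measure the excess local structure of the glass.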
\begin{figure}
\centering
\includegraphics[width=5 cm]{f8.eps}
\caption{\label{f8} The radial distribution function $g(r)$ of the glass at $T=10^{-3}$.}
\end{figure}
\section{Conclusions}
We have developed a parallel code which demonstrates the
applicability of molecular dynamics techniques to the study of
billion-body structural glasses. We tested our code on the
world's largest available supercomputer, namely the Jugene
BlueGene system at the J\"ulich Supercomputing center, and
demonstrated scalability on the full machine (characterized by
$294912$ computing cores) with an efficiency of $89\%$ in the
largest configuration of $100$ billion particles. We then
applied our code to the case study of the supercooled dynamics of
an exceptionally large system, consisting of a highly frustrated
binary mixture of one billion particles. This simulation paves the way to the computational study of
billion-body structural glasses, thus achieving new levels of
resolution in the analysis of anomalous vibrations of amorphous
materials and, in a broader perspective, of living matter.
#!/usr/bin/perl
# Regression-test driver for a Logios build: cleans the example
# Resources tree, rebuilds all targets, and checks the generated files.
use File::Spec;
use File::Path;
use File::Find;
use File::stat;
use Getopt::Long;
use strict;
my $LOGIOS_ROOT = File::Spec->rel2abs(File::Spec->updir);
my $RESOURCES = File::Spec->catdir($LOGIOS_ROOT, 'examples', 'Resources');
my $PROJECT = 'MeetingLineDomain';
my $INSTANCE = 'MeetingLine';
my $target = shift;
&clean; # clean by default
exit if defined $target && $target eq 'clean';
&build;
&build_make_pronunciation;
my $testnum = 0;
my $testpass = 0;
my $testfail = 0;
&test;
&test_make_pronunciation;
print "Press ENTER to exit.$/"; <STDIN>;
exit;
sub wanted_cleaned {
my $filename = $_;
$File::Find::prune = 1 if $filename eq '.svn';
return if ! -f $filename;
foreach ('MeetingLineDomain.forms', 'MeetingLineDomain.gra', 'usernames.class') {
return if $_ eq $filename;
}
unlink $filename;
}
sub clean {
#clean everything that's _NOT_ on the whitelist
print STDERR "Cleaning up Resources$/";
find(\&wanted_cleaned, $RESOURCES);
}
sub build {
print STDERR "Exercising Logios.pm$/";
require File::Spec->catfile($LOGIOS_ROOT, 'scripts', 'Logios.pm');
my $logios = Logios->new('OLYMODE' => 1,
'LOGIOS' => $LOGIOS_ROOT,
'PROJECT' => $PROJECT,
'RESOURCES' => $RESOURCES,
'INSTANCE' => $INSTANCE);
$logios->compile_grammar;
$logios->makelm;
$logios->makedict;
}
sub build_make_pronunciation {
print STDERR "Executing make_pronunciation.pl$/";
chdir('Resources');
my $cmd = "$^X \"".File::Spec->catfile($LOGIOS_ROOT, 'Tools', 'MakeDict',
'make_pronunciation.pl').'"'
." -tools \"".File::Spec->catdir($LOGIOS_ROOT, 'Tools').'"'
." -dictdir \"".File::Spec->catdir($RESOURCES, 'DecoderConfig', 'Dictionary').'"'
." -words $INSTANCE.token"
." -dict $INSTANCE-make_pronunciation.dic";
print "$cmd$/";
system($cmd);
chdir(File::Spec->updir);
}
sub test_make_pronunciation {
print STDERR "Testing make_pronunciation$/";
my $dicfn = File::Spec->catfile($RESOURCES, 'DecoderConfig', 'Dictionary',
"$INSTANCE-make_pronunciation.dic");
return if !&test_results((-e $dicfn && stat($dicfn)->size), "$dicfn is missing or empty");
&test_duquesne($dicfn);
}
sub test {
print STDERR "Testing targets$/";
#check to see if all the files exist
for my $target (map {File::Spec->catfile($RESOURCES, $_)}
(File::Spec->catfile('Grammar', "$INSTANCE.net"),
File::Spec->catfile('Grammar', 'forms'),
File::Spec->catfile('DecoderConfig', 'Dictionary', "$INSTANCE.dict"),
File::Spec->catfile('DecoderConfig', 'LanguageModel', "$INSTANCE.arpa"),
File::Spec->catfile('DecoderConfig', 'LanguageModel', "$INSTANCE.ctl"),
File::Spec->catfile('DecoderConfig', 'LanguageModel', "$INSTANCE.probdef"))) {
&test_results((-e $target && stat($target)->size),
(File::Spec->splitpath($target))[2],
'missing or empty');
}
&test_duquesne(File::Spec->catfile($RESOURCES, 'DecoderConfig', 'Dictionary',
"$INSTANCE.dict"));
}
sub test_duquesne {
open(DICT, shift);
my @duq_entries = grep(/^DUQUESNE:/, <DICT>);
if(&test_results((scalar @duq_entries == 1),
"DUQUESNE entry doesn't exist in dictionary")) {
chop(my $duq_entry = $duq_entries[0]);
my $duq_pron = substr($duq_entry, index($duq_entry, "\t")+1);
&test_results(($duq_pron eq 'D UW K EY N'),
"DUQUESNE is '$duq_pron', should be 'D UW K EY N'");
}
}
sub test_results {
my ($status, @reasons) = @_;
if($status) {
++$testpass;
print sprintf("Test %2d: OK$/", ++$testnum);
} else {
++$testfail;
print sprintf("Test %2d: FAILED: ", ++$testnum), join(': ', @reasons), $/;
}
return $status;
}
Carlos Valderrama (born 2 September 1961, Santa Marta) is a Colombian footballer and the record holder for the most matches played for the Colombia national team (111). He was twice named South American Footballer of the Year. Besides his playing and leadership qualities, he also stood out on the pitch for his hairstyle. He is the only Colombian footballer included in FIFA's list of the 125 greatest footballers.
Biography
Carlos Valderrama was born into the family of a professional footballer, Carlos Valderrama Sr., who played for the club Unión Magdalena. His uncle, Toto Valderrama, played for the club Ciclón Bananero. His cousin, Alex Valderrama, also played football and once even finished as the top scorer of the Colombian championship.
He spent most of his career at Colombian clubs. In Europe he played for Montpellier and Real Valladolid in the late 1980s and early 1990s. He spent the final years of his career at American MLS clubs, retiring in the early 2000s at Colorado Rapids at over 40 years of age.
Valderrama was known for his superb vision of the field. In the 93rd minute of the match against Germany at the 1990 World Cup, Valderrama provided the assist to Freddy Rincón, who beat Bodo Illgner; the goal earned Colombia a draw and allowed the team to reach the second round of a World Cup for the first time in its history.
He scored his last goal for the national team on 31 May 1998, at the age of 36 years and 8 months, in a friendly against Germany on the eve of the World Cup in France.
A monument to the footballer stands next to the stadium entrance in his home town.
He left football on 1 February 2004, when his farewell match was held with the participation of Diego Maradona, Enzo Francescoli and José Luis Chilavert.
After the end of his playing career he became a coach. So far he has not managed a team on his own, but works as an assistant coach at Junior. On 1 November 2007 his furious reaction to the refereeing provoked disturbances at Junior's match against América de Cali.
Personal life
Carlos Valderrama's cousin, Didí Alex Valderrama, played for the Colombia national team at the 1979 and 1983 Copa América tournaments and won the Copa Libertadores with Atlético Nacional. Two of Carlos's brothers, Alan Valderrama and Ronald Valderrama, as well as another cousin, Miguel González Palacio, were also professional footballers.
Honours
Team
Atlético Junior
Colombian champion (2): 1993, 1995
Montpellier
Coupe de France winner: 1989/90
Tampa Bay Mutiny
MLS regular season winner: 1996
Colombia national team
Copa América third place (3): 1987, 1993, 1995
Individual
South American Footballer of the Year (2): 1987, 1993
Best player in MLS: 1996
World Cup participant (3): 1990, 1994, 1998
Included in the FIFA 100 list
Golden Foot: 2013 (in the "Football Legends" category)
MLS Best XI: 1996
Best player of the Copa América: 1987
Record holder for most matches played for the Colombia national team (111)
Included in World Soccer's list of the greatest footballers of the 20th century
Notes
Links
Profile at RSSSF
Carlos 'El Pibe' Valderrama (Futbolista)
El «gran capitán»
Colombian footballers
Colombia international footballers
Unión Magdalena players
Millonarios players
Independiente Medellín players
Atlético Junior players
Montpellier players
Real Valladolid players
Deportivo Cali players
Miami Fusion players
Colorado Rapids players
South American Footballers of the Year
FIFA 100
Source: https://indico.cern.ch/event/181055/contributions/308688/

# Quark Matter 2012

12-18 August 2012, US/Eastern timezone

## Event-by-event mean $p_{\rm T}$ fluctuations measured by the ALICE experiment at the LHC

16 Aug 2012, 16:00, 2h
Poster (Correlations and fluctuations)

### Speaker

Stefan Thomas Heckel (Johann-Wolfgang-Goethe Univ. (DE))

### Description

Results on event-by-event fluctuations of the mean transverse momentum of charged particles measured by the ALICE experiment at the LHC are compared to different Monte Carlo approaches. For these studies pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV and Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV are used. The analysis is performed within $|\eta| < 0.8$ and $0.15 < p_{\rm T} < 2$ GeV/c. The data shows only a small collision energy dependence and indicates a common scaling behaviour with event multiplicity from pp to semi-central Pb--Pb collisions. In central Pb--Pb collisions, the results deviate from this trend, exhibiting a significant reduction of the fluctuation strength. A systematic comparison of ALICE results in pp to PHOJET and different tunes of the PYTHIA6 and PYTHIA8 event generators is presented. The study indicates a sensitivity of the data to different mechanisms to model high-multiplicity pp events. A comparison of Pb--Pb results to HIJING and AMPT suggests a strong relation between transverse momentum fluctuations and collectivity in central events, and disfavors an independent superposition scenario.

### Primary author

Collaboration ALICE (CERN, Geneva, Switzerland)

### Co-author

Stefan Thomas Heckel (Johann-Wolfgang-Goethe Univ. (DE))

### Presentation Materials

There are no materials yet.
Source: https://math.stackexchange.com/questions/3006046/how-to-find-the-newton-polygon-of-the-polynomial-product-prod-i-1p2

How to find the Newton polygon of the polynomial product $\prod_{i=1}^{p^2} (1-iX)$

How to find the Newton polygon of the polynomial product $\prod_{i=1}^{p^2} (1-iX)$?

Let $f(X)=\prod_{i=1}^{p^2} (1-iX)=(1-X)(1-2X) \cdots (1-pX) \cdots (1-p^2X)$.

If I multiply, then we will get a polynomial of degree $p^2$. But it is complicated to express it in polynomial form, so it is complicated to calculate the vertices $(0, \mathrm{ord}_p(a_0)),\ (1, \mathrm{ord}_p(a_1)),\ (2, \mathrm{ord}_p(a_2)), \ldots$ of the above product. Help me with this.

Answer: It's really quite simple. There are $p^2-p$ roots $\rho$ with $v(\rho)=0$, $p-1$ roots with $v(\rho)=-1$, and one root with $v(\rho)=-2$. Consequently, there is one segment of the polygon with slope $0$ and width $p^2-p$, one segment with slope $1$ and width $p-1$, and one segment with slope $2$ and width $1$. Thus, the vertices are $(0,0)$, $(p^2-p,0)$, $(p^2-1,p-1)$, and $(p^2,p+1)$.

• excellent explanation. I got it – M. A. SARKAR Nov 25 '18 at 4:15
• Does this result hold for $p=2$? Because for $p=2$, we have $f(X)=(1-X)(1-2X)(1-3X)(1-4X)=1-10X+35X^2-50X^3+24X^4$. Thus the vertices are $(0,0), (1,1), (2,0), (3,1), (4,3)$. The vertex $(1,1)$ makes disturbance. Would you please do a little bit more? – M. A. SARKAR Nov 26 '18 at 14:50
• No, $(1,1)$ is not a vertex of the Newton polygon. – Lubin Nov 26 '18 at 20:59

Partial answer, regarding the coefficients of the polynomial:

Fix one term in the brackets, say $Y=(1-5X)$. In order for the coefficient $5$ to contribute to $a_j$, we have to multiply $Y$ with $j-1$ other brackets, since this is the only way of getting a power of $j$ for $X$. This corresponds to choosing a subset $S \subset \{1,2,\ldots,p^{2}\}$ of size $j-1$, since each term in the product has a unique coefficient for $X$ that is in $\{1,2,\ldots,p^{2}\}$. This leads to

$$a_j=(-1)^{j} \sum_{S \subset \{1,2,\ldots,p^{2}\},\ |S|=j} \prod_{s \in S} s \ .$$

• what is the Newton polygon? – M. A. SARKAR Nov 20 '18 at 8:09
• @M. A. SARKAR What do you mean by $\mathrm{ord}_p(a_j)$? I don't know these kinds of polynomials but thought an expression for the coefficients might help – sigmatau Nov 20 '18 at 8:13
• This is from a discrete valuation field like a p-adic field and $\mathrm{ord}_p$ is a valuation function. If $a_j=\frac{a}{b}p^n$, where $a,b$ are coprime, then $\mathrm{ord}_p(a_j)=n$. – M. A. SARKAR Nov 20 '18 at 8:21
• I see, should I add my answer as a comment, since I don't see how to find $\mathrm{ord}_{p}(a_j)$ right now. – sigmatau Nov 20 '18 at 8:42
• or I just leave it as a partial answer. – sigmatau Nov 20 '18 at 8:44
| null | null |
\section{Introduction}
Grain boundaries are a basic building block for spatial patterns in extended systems; see for instance~\cite{manneville1990,cross1993,hoyle2006}. They separate regions in physical space, where the fine crystalline structure possesses different orientations. While they are extensively studied in many aspects of material science, they also arise in pattern-forming systems such as Rayleigh-B\'enard convection. Our focus here is on the latter, pattern-forming systems far from thermodynamic equilibrium, although we suspect that many of the methods here can be applied to crystalline patterns in interacting-particle systems, say.
Our motivation is two-fold. First, coherent structures in systems far from equilibrium have been studied quite successfully recently using a spatial-dynamics point of view; see for instance \cite{sandstede2004}. These methods have proven useful not only to establish local existence, but also to classify and study stability, bifurcations, and interactions of coherent structures. This spatial-dynamics perspective has also been used to study existence of grain boundaries close to onset of a pattern-forming instability \cite{haragus2007,haragus2012,scheel2014}. Second, far from onset of pattern formation, qualitative changes in the nature of grain boundaries have been observed and quantified, both theoretically and numerically in \cite{passot1994,ercolani2003,ercolani2009} using phase approximations. Figure \ref{f:stadion} \cite{ercolani2003} shows a direct simulation of the Swift-Hohenberg equation in an ellipsoidal domain, with boundary conditions forcing parallel stripes. Along the major axis, weak bending of stripes is eventually mediated by grain boundaries and defects. As curvature and hence angles of grain boundaries increase inwards, the grain boundaries go through a sequence of qualitative changes that motivated the studies mentioned above and our computations here.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure1}
\caption{Zoom-in of Figure 8 from Ercolani {\it et al}~\cite{ercolani2003}, which shows a simulation of the Swift-Hohenberg equation \eqref{e:sh}, $\mu=1$, in an ellipsoidal domain after initial transient. One notices the qualitative change of grain boundaries along the horizontal axis, highlighted by the red circle, as the angle between stripes becomes more acute; see Section \ref{s:path} for more details.}\label{f:stadion}
\end{figure}
The main purpose of this paper is also twofold. First, we lay out a systematic numerical strategy for the study of grain boundaries, inspired very much by the spatial dynamics point of view where grain boundaries are heteroclinic orbits. Second, we study grain boundaries in the prototypical example of the Swift-Hohenberg equation numerically. Our approach is based on numerical continuation with zeroth order ``asymptotic boundary conditions''. It enables us to cleanly separate the core of the grain boundary from far-field behavior, and thereby allows us to detect bifurcations in a ``thermodynamic limit'' of infinite domain size. More practically, it allows us to construct a well-posed continuation problem with well-conditioned linear operators, uniformly in the domain size. One of our most striking observations concerns the behavior of grain boundaries as the angle between the stripes is decreased towards an acute angle. Decreasing the angle as a continuation parameter, we first locate a parity-breaking super-critical pitchfork bifurcation. The asymmetric branch breaks a parity-shift symmetry and quickly develops into a convex-concave disclination pair. The primary branch later develops two dislocations, and restabilizes shortly after.
The remainder of the introduction recalls basic facts about the Swift-Hohenberg equation, previous results about existence of grain boundaries and defects, and gives a brief outline of the paper.
\paragraph{The Swift-Hohenberg model.}
We study grain boundaries in the Swift-Hohenberg equation,
\begin{equation}\label{e:sh}
u_t=-(\Delta +1)^2 u + \mu u - u^3, \qquad (x,y)\in\mathbb{R}^2,
\end{equation}
as a prototypical model for the formation of striped phases. The trivial state, $u(x,y)\equiv 0$, is linearly unstable against perturbations of the form $\mathrm{e}^{\mathrm{i} (k_x x + k_y y)}$, for $k_x^2+k_y^2\sim 1$, $\mu\gtrsim 0$. Stable solutions in this regime are striped (or roll) solutions $u_\mathrm{s}(kx;k)$, $u_\mathrm{s}(\xi)=u_\mathrm{s}(\xi+2\pi)=u_\mathrm{s}(-\xi)$, which exist for an interval of allowed wavenumbers $k\in (k_\mathrm{min},k_\mathrm{max})$ and are stable for $k\in (k_\mathrm{zz},k_\mathrm{eck})$. Here, $k_\mathrm{eck}=k_\mathrm{eck}(\mu)$ and $k_\mathrm{zz}=k_\mathrm{zz}(\mu)$ denote Eckhaus (instability due to perturbations of the wavelength) and zigzag (instability due to transverse perturbations) boundaries, respectively, with leading order expansions
\[
k_\mathrm{min,max}=1\pm\sqrt{\mu/4},\quad k_\mathrm{eck}=1+\sqrt{\mu/12},\quad k_\mathrm{zz}=1-\mu^2/512;
\]
see for instance \cite{cross1993,mielke1997}.
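These leading-order expansions are straightforward to evaluate; the following short sketch (our own illustration, with an arbitrary small value of $\mu$) checks the ordering $k_\mathrm{min}<k_\mathrm{zz}<1<k_\mathrm{eck}<k_\mathrm{max}$ of the bands.

```python
import math

def stripe_bands(mu):
    """Leading-order existence band (k_min, k_max) and stability band
    (k_zz, k_eck) of stripes in the Swift-Hohenberg equation."""
    k_min = 1 - math.sqrt(mu / 4)
    k_max = 1 + math.sqrt(mu / 4)
    k_eck = 1 + math.sqrt(mu / 12)
    k_zz = 1 - mu**2 / 512
    return k_min, k_max, k_eck, k_zz

# The stable band is strictly contained in the existence band:
k_min, k_max, k_eck, k_zz = stripe_bands(0.1)
assert k_min < k_zz < 1 < k_eck < k_max
```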
While individual striped solutions are asymptotically stable \cite{schneider1996,uecker1999}, one typically observes patches of stripe solutions with different orientation in a large domain. Indeed, any rotated stripe solution $u_\mathrm{s}(k_x x + k_y y;k)$, $k_x^2+k_y^2=k^2$, is a solution due to rotational symmetry of \eqref{e:sh}. We are interested in situations where two different orientations $\underline{k}^\pm=(k_x^\pm,k_y^\pm)$ are dominant in, say, $x>0$ and $x<0$, respectively, separated by an exponentially localized interfacial region near $x\sim 0$, which we will refer to as a \emph{grain boundary}, thinking of the orientation of stripes as the grain or microstructure in the medium.
Grain boundaries often possess a vertical periodicity. In particular, when $k_y^\pm$ are commensurate, $k_y^-/q_-=k_y^+/q_+$ for some integer $q_\pm$, then stripes at $\pm\infty$ possess a common periodicity $L_y=2\pi/k_y$, $k_y=k_y^\pm/q_\pm$. In this case, one can view grain boundaries as heteroclinic orbits to asymptotic periodic orbits,
\begin{equation}\label{e:gbdef}
\left|u_\mathrm{gb}(x+\xi,y)-u_\mathrm{s}(k_y^\pm y + k_x^\pm x+\xi+\varphi^\pm;k^\pm)\right|_{X_\mathrm{loc}}\to 0,\quad \xi\to \pm\infty,
\end{equation}
where norms could be taken in $X_\mathrm{loc}=H^4([0,1]\times S^1)$, $S^1=\mathbb{R}/2\pi\mathbb{Z}$, in the independent variables $x,y$, and $(k^\pm)^2=(k^\pm_x)^2+(k^\pm_y)^2$.
We refer to such a solution and associated $q_\pm$ as a \emph{$(q_-,q_+)$ grain boundary}; see Figure \ref{f:gbsc}. We also use a convention where the sign of $q_\pm$ indicates positive and negative slope of level sets as graphs over $x$, respectively. Since we can reflect vertically, in $y$, we adopt the convention where $q_->0$.
\begin{figure}[h]
\centering
\includegraphics[width=.8\linewidth]{figure2}
\caption{Schematic figures of grain boundaries, when $(q_-,q_+)$ is (a) $(1,-1)$ (b) $(3,-2)$, (c) $(2,1)$. Note that the $q_\pm$ count the number of stripes encountered in a fixed section $x=\pm L$, $L$ large, $y\in (0,L_y)$.}
\label{f:gbsc}
\end{figure}
\paragraph{Small-amplitude grain boundaries: Normal forms.}
Intuitively, it is not immediately clear that time-independent equilibria of the form \eqref{e:gbdef} actually exist for, say, the Swift-Hohenberg equation in an idealized unbounded domain. One could easily envision how the curvature along a family of stripes decreases slowly in time until the stripes are straight.
Mathematically, the question of existence was answered quite comprehensively in \cite{haragus2007,haragus2012,scheel2014}. There, existence of symmetric grain boundaries, $k_x^-=-k_x^+$, $k_y^-=k_y^+$ was shown for $\mu$ sufficiently small and arbitrary angle $\angle(\underline{k}^-,\underline{k}^+)$. The approach there reformulates the stationary Swift-Hohenberg equation in a strip
\begin{equation}\label{e:shgb}
-(\Delta +1)^2 u + \mu u - u^3=0, \qquad x\in\mathbb{R},\ \ y\in\mathbb{R}/\left(\frac{2\pi}{k_y}\right)\mathbb{Z},
\end{equation}
as an (ill-posed) dynamical system in the $x$-direction, formally writing it as a first-order equation in $x$,
\begin{equation}\label{e:ds}
\frac{dU}{dx} = \mathcal A(\mu,k)U + \mathcal F(U),
\end{equation}
in which
\[
U = \begin{pmatrix}u\\u_1\\v\\v_1\end{pmatrix},\quad
\mathcal A(\mu,k) = \begin{pmatrix}
0&1&0&0\\ -(1+k_y^2\partial_y^2)&0&1&0\\ 0&0&0&1\\
\mu& 0&
-(1+k_y^2\partial_y^2)&0\end{pmatrix},\quad
\mathcal F(U) = \begin{pmatrix}0\\0\\0\\-u^3\end{pmatrix}.
\]
Here $U$ takes values in Sobolev spaces of periodic functions $U\in \prod_{j=0}^3 H^{3-j}_\mathrm{per}(0,2\pi)$
and $y$ was rescaled with $k_y$ to be of period $2\pi$. Grain boundaries are now heteroclinic orbits in the traditional dynamical systems sense, where the (infinite-dimensional) phase-space variable $U(x)$ converges to periodic orbits $U_\mathrm{r}^\pm$ for $x\to\pm\infty$.
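For a concrete view of this phase space, one can restrict $\mathcal A(\mu,k)$ to a single Fourier mode $\mathrm{e}^{\mathrm{i}\ell y}$, on which $\partial_y^2$ acts as $-\ell^2$; the spatial eigenvalues $\nu$ of the resulting $4\times 4$ matrix satisfy the dispersion relation $(\nu^2+1-k_y^2\ell^2)^2=\mu$. A minimal numerical sketch (ours, with illustrative parameter values):

```python
import numpy as np

def spatial_eigenvalues(mu, ky, ell):
    """Spatial eigenvalues nu of A(mu,k) restricted to the Fourier mode
    exp(i*ell*y), on which d_y^2 acts as -ell^2 (y rescaled to period 2*pi)."""
    c = ky**2 * ell**2 - 1.0          # action of -(1 + ky^2 d_y^2) on the mode
    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [c,   0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [mu,  0.0, c,   0.0]])
    return np.linalg.eigvals(A)

# Each nu satisfies (nu^2 - c)^2 = mu, i.e. nu^2 = ky^2*ell^2 - 1 +/- sqrt(mu);
# for mu = 0.1, ky = 0.8, ell = 1 all four eigenvalues are purely imaginary,
# reflecting the ill-posedness of the x-"evolution".
mu, ky, ell = 0.1, 0.8, 1
c = ky**2 * ell**2 - 1.0
nus = spatial_eigenvalues(mu, ky, ell)
assert all(abs((nu**2 - c)**2 - mu) < 1e-10 for nu in nus)
assert all(abs(nu.real) < 1e-8 for nu in nus)
```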
The results in \cite{haragus2007,haragus2012,scheel2014} examine this dynamical system using center-manifold reduction and normal form theory. The ill-posed dynamical system \eqref{e:ds} is reduced to an ordinary differential equation on a locally invariant manifold. The dynamics on this center-manifold describe the spatial ($x$-)evolution of profiles $U(x,y)$. In order to analyze these dynamics, normal form coordinate changes, analogous to averaging theory, are employed, which eventually exhibit invariant subspaces within a higher-dimensional system of differential equations. In normal form, the reduced equation consists of coupled, stationary Ginzburg-Landau equations, which capture amplitudes of modes $\mathrm{e}^{\mathrm{i} (\kappa_x x+\kappa_y y)}$, where $\kappa_x^2+\kappa_y^2=1$, and $\kappa_y=\ell k_y$, $\ell \in \mathbb{Z}$. Invariant subspaces amount to setting amplitudes associated with $\ell\neq \pm 1$ to zero and restricting to real amplitudes. The normal form equations had been derived much earlier, starting with the assumption that relevant modes consist \emph{only} of two differently oriented stripes, $\ell=\pm 1$, whose dynamics is then well described by a Newell-Whitehead-Segel amplitude equation \cite{malomed1990}.
After suitable scalings, the normal form equations read (assuming a non-resonance condition on the angle, $1/k_y\not\in\mathbb{Z}$),
\begin{equation}\label{e:nf}
\kappa_\ell^2 (C_\ell)''=-C_\ell(1-2\sum_{\ell'\neq \ell,\pm}|C_{\ell'}|^2-|C_{\ell}|^2),\quad |\ell|<1/k_y
\end{equation}
where $\kappa_\ell=\mathrm{sign}(\ell)\sqrt{1-\ell^2 k_y^2}$, $\ell\in \mathbb{Z}$; see \cite{scheel2014} for details.
These normal form equations possess pure mode equilibria $\underline{C}^*$ with $C_\ell=1$ for $\ell=\ell_*$, $C_\ell=0$ otherwise, which simply correspond to slanted stripes $\mathrm{e}^{\mathrm{i} (k_x x+\ell_* k_y y)}$ with $\ell_*$ maxima of $u$ across any section $x=x_0$, $y\in (0,2\pi)$. More interestingly, they also possess heteroclinic orbits connecting any two pure-mode equilibria $\underline{C}^+$ and $\underline{C}^-$ with $\ell_*=\ell^\pm$; see \cite{scheel2014,vandenberg2000,weth2013}. Asymptotic states of these heteroclinics at $x=\pm\infty$ possess different orientations relative to the grain boundary and correspond to $(\ell_-,\ell_+)$ grain boundaries in our terminology; see Figure \ref{f:gbsc}.
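Within a single mode, setting all other amplitudes to zero, \eqref{e:nf} reduces to $\kappa_\ell^2 C''=-C(1-C^2)$, which admits the explicit front $C(x)=\tanh(x/(\sqrt{2}\kappa_\ell))$ connecting the equilibria $C=\pm 1$. This front is not itself a grain boundary, but it illustrates the Ginzburg-Landau front structure underlying the heteroclinics. A quick finite-difference residual check (our own sketch, with an illustrative value of $k_y$):

```python
import numpy as np

# Single-mode reduction of (e:nf): kappa^2 C'' = -C(1 - C^2), with the
# explicit front C(x) = tanh(x/(sqrt(2)*kappa)) connecting C = -1 to C = 1.
ky = 0.8
kappa = np.sqrt(1.0 - ky**2)          # kappa_1 for ell = 1
a = np.sqrt(2.0) * kappa
x = np.linspace(-8.0, 8.0, 2001)
C = np.tanh(x / a)

# Second-order finite-difference check of the residual:
h = 1e-4
Cpp = (np.tanh((x + h) / a) - 2.0 * C + np.tanh((x - h) / a)) / h**2
residual = kappa**2 * Cpp + C * (1.0 - C**2)
assert np.max(np.abs(residual)) < 1e-6
assert abs(C[0] + 1.0) < 1e-5 and abs(C[-1] - 1.0) < 1e-5
```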
\paragraph{Wavenumber selection.}
Inspecting the heteroclinics, in the leading-order amplitude equation, one finds that heteroclinics connect equilibria, only, not nearby periodic orbits $C_{\ell_*}\sim \mathrm{e}^{\mathrm{i} \varepsilon x}$. In other words, grain boundaries select wavenumbers in the far field. The stripes are well described by a nonlinear phase-diffusion equation far from the grain boundary. The effect of the grain boundary can then be thought of as an inhomogeneous Neumann boundary condition for the phase or, equivalently, an inhomogeneous Dirichlet boundary condition for the wavenumber. The effect of this inhomogeneous boundary condition spreads diffusively through the domain. This effect was illustrated in an amplitude approximation in \cite{malomed1990}. We demonstrate the diffusive spread in the Swift-Hohenberg equation in Figure \ref{f:sn}. We initialize the system in a strip with wavenumber $k=0.9$ away from the grain boundary. One clearly sees a change in wavenumber spreading from the grain boundaries into the domain, causing intermittent phase slips.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figure3}
\caption{Snapshots of the time evolution of \eqref{e:sh} with $\mu=0.1$ in a doubly periodic box $[-80\pi,80\pi)\times[0,\pi/k_y)$, $k_y=0.8$, $N_x=2^{10},N_y=2^6$. Initial far-field stripes have an asymptotic wavenumber of 0.9. Grain boundaries select $k\sim 1$. Note that $x$-periodicity enforces two grain boundaries and the corrected wavenumber spreads from both into the bulk. }\label{f:sn}
\end{figure}
We extract the change in wavenumber from the solution directly, by computing first the analytic signal for fixed $y=y_0$, $z(x,y_0)=u(x,y_0)+\mathrm{i} \mathcal{H} u(\cdot,y_0)(x)$, where $\mathcal{H}$ is the Hilbert transform, and then extracting the wavenumber as $k(x,y_0)=(\mathop{\mathrm{Im}}\log z)'(x,y_0)$. We finally average over $y_0$ to obtain $\bar{k}(x)$ at each time step. A contour space-time plot of $\bar{k}$ is shown in Figure \ref{f:cont}.
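This diagnostic is easy to reproduce; the sketch below (an illustration with our own grid choices, using an FFT-based Hilbert transform) recovers the wavenumber of a pure stripe section.

```python
import numpy as np

def local_wavenumber(u, dx):
    """Instantaneous wavenumber k(x) = d/dx Im log z of the analytic
    signal z = u + i*H[u], with an FFT-based Hilbert transform H."""
    n = len(u)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    z = np.fft.ifft(np.fft.fft(u) * h)      # analytic signal
    return np.gradient(np.unwrap(np.angle(z)), dx)

# Recover the wavenumber of a pure stripe section u(x, y0) = cos(0.9 x):
n = 4096
x = np.linspace(0.0, 200.0 * np.pi, n, endpoint=False)
k = local_wavenumber(np.cos(0.9 * x), x[1] - x[0])
assert abs(np.median(k[n // 8:-n // 8]) - 0.9) < 1e-2
```

In practice one would apply this for each fixed $y_0$ and average over $y_0$, as described above.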
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{figure4}
\caption{Space-time contour plot of the instantaneous wavenumber of the solution from Figure \ref{f:sn}. One clearly sees a diffusive spread of the selected wavenumber $k\sim 1$ from the two grain boundaries, with the intermittent phase slips as singularities of the wavenumber at approximately $t\sim 602,993,1192$.}\label{f:cont}
\end{figure}
\paragraph{Symmetries and phase matching.}
Before investigating multiplicities of grain boundaries more closely, we recall the underlying relevant symmetries. The Swift-Hohenberg equation is invariant under translations $T^x_\xi$ and $T_\xi^y$ and reflections $R^x$ and $R^y$ in $x$ and $y$, respectively, and also possesses the up-down, or parity symmetry $S:u\mapsto -u$. Grain boundaries therefore necessarily come in two-parameter families, induced by translation in $x$ and $y$. In the center-manifold reduced equations, $y$-translations act as complex rotations on $C_\ell$, $y$-reflection conjugates $C_{\ell}$ and $C_{-\ell}$. An additional normal form symmetry allows independent complex rotations in all amplitudes $C_\ell$ at leading order. As a consequence, grain boundaries come in a degenerate 3-parameter family, where one can arbitrarily shift stripes on either side of the grain boundary parallel to the boundary. One expects that terms beyond the normal form would yield conditions for this relative shift. In \cite{haragus2007,haragus2012,scheel2014}, the reflection symmetry was used to show that grain boundaries that are symmetric in $x$ persist for the full system.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure5}
\caption{Schematic sketch of phase matching, (a) in-phase (b) anti-phase, and (c) phase mismatch. Solutions (a) and (b) were proven to exist in \cite{haragus2012,scheel2014} and are computed here, existence of mismatched solutions is not known. }
\label{f:gbph}
\end{figure}
Using the same methods, one can also show that grain boundaries that are invariant under $R^xS$, $x\mapsto -x$, $u\mapsto -u$ exist in the full equation. We refer to these two types of grain boundaries as phase matched or anti-phase matched. It seems difficult to determine whether other $(1,-1)$ grain boundaries exist at small amplitudes; see Figure \ref{f:gbph} for an illustration of phase matched and anti-phase matched grain boundaries.
On the other hand, one can find asymmetric $(q_-,q_+)$ grain boundaries, connecting $C_{q_-}$ and $C_{q_+}$ in the normal form, using variational methods (see \cite{weth2013}; the existence is also stated in \cite{malomed1990}). Existence for the full equation then relies on solving a phase matching equation for the relative shift of stripes at the interface. Our numerical results below strongly indicate that such solutions do actually persist, that is, one can solve the phase-matching equation.
We emphasize that the results in \cite{scheel2014} yield existence of grain boundaries for arbitrary $k_y$ (effectively changing the angle), and $\mu<\mu_*(k_y)$ sufficiently small. However, since $\mu_*(k_y)\to 0$ for $k_y\to 0$, the results do not imply that grain boundaries exist for arbitrary angle and fixed $\mu>0$, sufficiently small.
\paragraph{Grain boundaries: Bifurcations.}
Our present study is motivated to a large extent by work on grain boundaries at finite amplitude, which predicts intriguing qualitative changes as the angle between the stripes becomes more acute; see the grain boundaries circled in red in Figure \ref{f:stadion} and the schematics in Figure \ref{f:acute}. In the weak bending regime, it has been well known \cite{cross1993,haragus2007} that grain boundaries can be described within the Cross-Newell phase approximation.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure6}
\caption{Schematic picture of the formation of protrusions and defects at the grain boundary.}\label{f:acute}
\end{figure}
In other words, the change in orientation is well described to leading order by a slow change in orientation of the stripes across the interface. For more acute angles, a phase transition occurs when protrusions form at the grain boundaries, effectively creating dislocations \cite{passot1994,ercolani2003,ercolani2009} or disclination pairs. The analysis in \cite{passot1994,ercolani2003,ercolani2009} predicts the onset of defect formation at the grain boundary theoretically with good accuracy, but is largely based on phase approximations which may lose validity near defects. On the other hand, the existence results in \cite{scheel2014} do not predict any bifurcations of grain boundaries.
Furthermore, we study a wealth of asymmetric grain boundaries, attempting a systematic description in terms of defects, resonances, and pinning effects. We encounter interesting bifurcations near limiting cases, when grain boundaries are parallel or perpendicular to the orientation of one of the stripes; see Figure \ref{f:perp}. In fact, normal form equations are more difficult in these resonant cases \cite{scheel2014} and had been analyzed in \cite{manneville1983grain}. The dynamics near such grain boundaries are quite intricate, governed by non-adiabatic pinning effects \cite{vinals2007}. We present some numerical results near these configurations in Section \ref{s:hor}.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure7}
\caption{Sketch of interesting limiting grain orientations, where stripes are oriented parallel or perpendicular to the grain boundary.}\label{f:perp}
\end{figure}
\paragraph{Numerical approaches.} There appear to be few systematic numerical studies of grain boundaries beyond direct simulations. Most detailed results were obtained in \cite{ercolani2003}, where grain boundaries along $x=0$ were computed by imposing oblique boundary conditions at $x=\pm L_x$ and by suppressing patterns for large $|y|$ via a parameter ramp. On the other hand, solutions to the leading-order normal form equations \eqref{e:nf} can be readily computed as heteroclinic orbits to an ordinary differential equation \cite{malomed1990,hargus2012a}. Implementing such a view point for \eqref{e:shgb}, one would project \eqref{e:shgb} onto functions $u_N(x,y)=\sum_{|\ell|\leq N} u_\ell(x)\mathrm{e}^{\mathrm{i} \ell y}$. In the resulting system of ordinary differential equations, one would look for heteroclinic orbits that connect periodic orbits at $x=\pm\infty$. Pursuing this point of view, one would like to build on recent results in the dynamical systems literature on computation of heteroclinic orbits connecting periodic orbits and equilibria \cite{beyn1990,pampel2001,dieci2004,krauskopf2007,doedel2008a,doedel2008b}. Main ingredients in these approaches are a truncation to a finite interval $x\in (-L_x,L_x)$ with appropriate boundary conditions at $x=\pm L_x$, phase conditions that rule out translation symmetry in $x$ and other potential multiplicities, and finally appropriate discretizations of the ODE.
For boundary conditions at $x=\pm L_x$, one wishes to require that the solution lies in the stable and unstable manifold of the asymptotic state, respectively. These local stable and unstable manifolds can then be approximated to first order by their tangent spaces. In the case of periodic orbits, this involves a somewhat cumbersome construction of Floquet bundles, in addition to actually computing the limiting periodic orbit. One can easily envision that such computations become tedious and slow when the dimension of the system $N$ tends to infinity.
Our approach is similar in spirit, although it does not rely on a phase space interpretation. First, we forgo the construction of Floquet bundles and implement what we call zeroth order asymptotic boundary condition. Second, we construct appropriate phase conditions that eliminate spatial translations and neutral modes at $x=\pm\infty$. We then solve the resulting boundary-value problem directly using finite differences in $x$ and a pseudospectral method in $y$.
\paragraph{Outline.}
The remainder of this paper is organized as follows. In Section \ref{s:1}, we characterize grain boundaries and motivate our numerical approach, which combines a far-field-core decomposition with a domain truncation using zeroth order asymptotic boundary conditions. We refer to an appendix for a more detailed justification. Section \ref{s:4} is concerned with the numerical implementation of this truncated problem. In particular, we give numerical evidence for convergence as predicted in typical cases. In Section \ref{s:5}, we use the algorithm to study various phenomena associated with grain boundaries. We conclude with a discussion of other potential applications and extensions.
\begin{Acknowledgment}
D.L. acknowledges funding through the Institute for Mathematics and its Applications and the Faculty Research Support Fund (University of Surrey). A.S. acknowledges partial support from NSF under grants DMS-0806614 and DMS-1311740, support through a DAAD Faculty Research Visit Grant and a WWU Fellowship. A.S. and D.L. acknowledge support from the London Mathematical Society through a Research in Pairs Grant 41502.
The authors would like to thank J. Lega for many helpful conversations and comments on early versions of this manuscript, and for sharing her preliminary results with us. The authors also acknowledge discussions with D. Avitabile on numerical aspects of our approach.
\end{Acknowledgment}
\section{Continuing grain boundaries}
\label{s:1}
In this section, we characterize grain boundaries as heteroclinic orbits, not necessarily close to onset. We then describe the inherent difficulties involved with computing grain boundaries in large boxes before laying out our approach via a far-field-core decomposition.
\paragraph{Characterizing grain boundaries.}
We start with a conceptual definition of grain boundaries. We fix $\mu$, throughout, and consider only orientation of grains as free parameters. We first assume the existence of a family of striped solutions $u_\mathrm{s}(kx;k)$, $k\in (k_\mathrm{min},k_\mathrm{max})\supset (k_\mathrm{zz},k_\mathrm{eck})$. A \emph{grain boundary} is a solution $u_*(x,y)$ which converges towards stripes of different orientations as $x\to\pm\infty$. We focus throughout on resonant angles, where the stripes at $\pm\infty$ possess wave vectors $\underline{k}^{\pm}=(k_x^\pm,k_y^\pm)$ that satisfy $k_y=k_y^+/q_+=k_y^-/q_-$ for integer $q_\pm$. We moreover assume minimal period in the $y$-direction (although this assumption can easily be removed). In summary, we require that $u_*(x,y)$ solves the stationary Swift-Hohenberg equation \eqref{e:sh}, with
\begin{itemize}
\item \emph{periodicity:} $u_*(x,y)=u_*(x,y+L_y)$, $L_y=2\pi/k_y$;
\item \emph{convergence:} $|u_*(x,y)-u_\mathrm{s}(k_x^\pm x+k_y^\pm y;k^\pm)|\to 0$, for $x\to\pm\infty$, uniformly in $y$;
\end{itemize}
here, $(k^\pm)^2=(k_x^\pm)^2+(k_y^\pm)^2$. Convergence and periodicity imply the resonance condition $k_y=k_y^+/q_+=k_y^-/q_-$. Uniform convergence is equivalent to convergence of derivatives, using the regularizing properties of the equation. In the following, we will use the rescaled variable $k_y y = :\tilde{y}$, in which we have $2\pi$-periodicity and convergence to $u_\mathrm{s}(k_x^\pm x+q_\pm \tilde{y};k^\pm)$, respectively. The rescaled equation for grain boundaries is, dropping the tildes for the $y$-variable,
\begin{equation}\label{e:gbr}
-(\partial_x^2+k_y^2\partial_y^2+1)^2u + \mu u - u^3=0.
\end{equation}
With the results in the small-amplitude limit, one expects such solutions to be locally unique up to translations in $x,y$. In particular, for any fixed $k_y$, there exist \emph{selected} wavenumbers $k_x^\pm$ at $\pm\infty$, for which a grain boundary exists. Equivalently, one finds a relation between the angles $\phi_\pm$, depicted in Figure \ref{f:gbsc}, and the selected wavenumbers ${k}^\pm$.
\paragraph{Computing grain boundaries --- the large box and its problems.}
A first naive approach to solving \eqref{e:gbr} would be to impose periodic boundary conditions on $(x,y)\in (-L_x,L_x)\times (0,2\pi)$. With periodic boundary conditions, one would effectively compute a pair of grain boundaries, possibly located at $x=0$ and $x=L_x$, respectively. One can then try to compute grain boundaries from an initial guess using Newton's method. A first difficulty is caused by the translations in $x$ and $y$, which yield non-uniqueness and a two-dimensional kernel of the linearization at such a solution. One would usually add appropriate phase conditions to eliminate these translations and add drift speeds in the $x$- and $y$-direction to set up a well-posed problem, expecting that drift speeds vanish at solutions,
\begin{align}
-(\partial_x^2+k_y^2\partial_y^2+1)^2u + c_x \partial_x u + c_y \partial_y u + \mu u - u^3&=0,\quad (x,y)\in (-L_x,L_x)\times (0,2\pi) +\mbox{``periodic'' b.c.},\label{e:box1}\\
\int_{x,y} (u-u_\mathrm{old}) \cdot \partial_x u_\mathrm{old} &=0,\label{e:box2}\\
\int_{x,y} (u-u_\mathrm{old}) \cdot \partial_y u_\mathrm{old} &=0.\label{e:box3}
\end{align}
It turns out that this somewhat standard approach to computations of patterns in periodic domains is viable here only for moderate sizes of $L_x$. Since the solution consists roughly of striped patterns in most of the domain, the linearization of \eqref{e:box1} at a solution will resemble the linearization at a striped pattern throughout most of the domain. For spectrum near the origin, this linearization is well approximated by the Laplacian from the phase-diffusion approximation (or a similar elliptic operator from the Cross-Newell equation). It will therefore inherit spectrum $\lambda_j\sim -j^2/L_x^2$, $j\in\mathbb{N}$, which accumulates at the origin for $L_x$ large; see \cite{ssabs,radss} for a general treatment of the behavior of continuous spectra under truncation of the domain. The phase conditions \eqref{e:box2}--\eqref{e:box3} can eliminate two neutral eigenvalues but do not resolve the ill-posedness as $L_x\to\infty$.
\begin{figure}[h]
\centering
\includegraphics[width=0.55\linewidth]{figure8}
\caption{Plot of the eigenvalues in $[-0.1,0.1]$ of the doubly-Fourier discretization of the Swift-Hohenberg equation about $u_s(y;1)$ with (a) ($N_x=20,L_x=10$) and (b) ($N_x=200,L_x=100$) and $N_y=20$. We see that for the same stepsize in $x$, as $L_x$ is increased there is an accumulation of eigenvalues at zero.}\label{f:abs}
\end{figure}
As a consequence, performance of Newton iterations deteriorates with increasing $L_x$. From this perspective, it is clear that this difficulty cannot be eliminated by the choice of separated boundary conditions, such as, say, oblique boundary conditions at $x=\pm L_x$, as used in \cite{ercolani2003}. Figure \ref{f:abs} illustrates this accumulation of eigenvalues near the origin and the resulting ill-posedness.
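The mechanism is already visible in the phase-diffusion caricature of the linearization: the periodic finite-difference Laplacian on $(-L_x,L_x)$ has eigenvalues $\sim -(\pi j/L_x)^2$ accumulating at the origin as $L_x$ grows. A minimal sketch (ours; this is not the doubly-Fourier discretization used for Figure \ref{f:abs}):

```python
import numpy as np

def laplacian_spectrum(L, n):
    """Spectrum of the periodic second-order finite-difference Laplacian
    on (-L, L) with n grid points, sorted in descending order."""
    h = 2.0 * L / n
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    A[0, -1] = A[-1, 0] = 1.0 / h**2   # periodic closure
    return np.sort(np.linalg.eigvalsh(A))[::-1]

def near_zero_count(L, tol=0.01):
    """Number of eigenvalues within tol of the origin; grows linearly
    in L since lambda_j ~ -(pi*j/L)^2."""
    return int(np.sum(laplacian_spectrum(L, 10 * int(L)) > -tol))

# Same mesh width h = 0.2 in both cases, only the domain size changes:
assert near_zero_count(100.0) > 2 * near_zero_count(25.0)
```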
\paragraph{A remedy: far-field-core decomposition and asymptotic boundary conditions.}
A remedy to the presence of a family of neutral modes is an a priori ansatz for the solution in the far field. We explain the main strategy here and refer to the appendix for more details. One can verify that grain boundaries converge exponentially towards striped patterns, suggesting a decomposition of the solution via
\[
u(x,y)=w(x,y)+\chi_+(x)u_+(x,y)+\chi_-(x)u_-(x,y),\quad u_\pm(x,y)=
u_\mathrm{s}(k^\pm_x x+q_\pm y+\varphi^\pm;k^\pm),
\]
with smooth cut-off functions
\[
\chi_\pm(x)=1,\ \pm x>d+1,\qquad
\chi_\pm(x)=0,\ \pm x<d;
\]
see also \cite{morrissey2015} for a similar approach.
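The decomposition above only requires the two plateaus of $\chi_\pm$; one concrete $C^\infty$ choice, built from the standard bump $\mathrm{e}^{-1/t}$, is sketched below (our own choice, and any smooth cut-off with these plateaus works equally well):

```python
import numpy as np

def chi_plus(x, d):
    """Smooth cut-off with chi = 0 for x <= d and chi = 1 for x >= d + 1,
    built from the C-infinity bump f(t) = exp(-1/t), t > 0."""
    def f(s):
        s = np.asarray(s, dtype=float)
        # inner where avoids division by zero where s <= 0
        return np.where(s > 0, np.exp(-1.0 / np.where(s > 0, s, 1.0)), 0.0)
    t = np.clip(np.asarray(x, dtype=float) - d, 0.0, 1.0)
    return f(t) / (f(t) + f(1.0 - t))

chi = chi_plus([1.0, 2.0, 2.25, 3.0, 5.0], d=2.0)
assert chi[0] == 0.0 and chi[1] == 0.0        # plateau chi = 0 for x <= d
assert chi[3] == 1.0 and chi[4] == 1.0        # plateau chi = 1 for x >= d + 1
assert 0.0 < chi[2] < 1.0                     # smooth transition in between
```

The corresponding $\chi_-$ is obtained by reflection, $\chi_-(x)=\chi_+(-x)$.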
Substituting this ansatz into the Swift-Hohenberg equation, we find
\begin{equation}\label{e:fc0}
\mathcal{L}(w+\sum_\pm \chi_\pm u_\pm)-(w+\sum_\pm \chi_\pm u_\pm)^3=0,\quad \mathcal{L}=-(\partial_x^2+k_y^2\partial_y^2+1)^2+\mu,
\end{equation}
which can be written, after subtracting the equation for $u_\pm$, in the form
\begin{equation}\label{e:fc1}
\mathcal{L}w- \left\{\left(w+\sum_\pm \chi_\pm u_\pm\right)^3-\left(\sum_\pm \chi_\pm u_\pm\right)^3\right\}+\sum_\pm\left[\mathcal{L},\chi_\pm\right]u_\pm
+\left\{\sum_\pm\chi_\pm u_\pm^3-\left(\sum_\pm\chi_\pm u_\pm\right)^3 \right\}=0,
\end{equation}
where we used the commutator notation $[A,B]u=A(Bu)-B(Au)$. The expression in the last bracket can be viewed as a commutator between nonlinearity and cut-off functions, evaluated on stripe solutions. Note that the residual of \eqref{e:fc1} is exponentially localized when $w$ is, since commutators vanish for $|x|$ large. One may therefore expect that boundary conditions at finite $x=\pm L_x$ only contribute exponentially small corrections $\mathrm{O}(\mathrm{e}^{-\eta |L_x|})$ to the profile $w$ and wavenumbers $k_x^\pm$ and $k_y$.
Given that we are looking for $w$ to be exponentially localized, Dirichlet boundary conditions $w=w_{xx}=0$ at $x=\pm L_x$ appear to be a natural choice. Since neither $k^\pm$ nor $\varphi^\pm$ are known, they appear as additional free variables in the equation. Inspecting the geometry of a grain boundary, one readily sees that one can fix $\varphi^\pm=0$, after appropriate shifts in $x$ and $y$. From the point of view taken above, the equations $\varphi^\pm=0$ act as a phase condition normalizing $x$- and $y$-translations.
The remaining additional variables $k^\pm$ need to be compensated for by additional equations that eliminate multiplicities. Indeed, fixing $\varphi^\pm$ only eliminates translations of the solution if exponential localization of $w$ is enforced: otherwise, the difference between a grain boundary and its translates can simply be added to $w$. In a bounded domain, however, exponential localization cannot be strictly enforced since weighted and unweighted norms are equivalent. One therefore needs to add a condition on $w$ that eliminates asymptotics $w\sim \partial_x u_\pm$ for $x\sim L_x$. Our choice is
\begin{equation}\label{e:ph12}
\int_{y=0}^{2\pi}\int_{\pm x=L_x-2\pi/k_x^\pm}^{L_x} w(x,y)\cdot \partial_{\xi^\pm} u_\mathrm{s}(\xi^\pm;k^\pm)\mathrm{d} x\,\mathrm{d} y =0,
\end{equation}
where $\xi^\pm=k^\pm_x x +q_\pm y$.
As is common with phase conditions, the precise form of the condition is not crucial; averaging over roughly one period appeared to work well.
From a different point of view, enforcing exponential localization of $w$ is a \emph{zeroth order asymptotic boundary condition}. In computations of homoclinic and heteroclinic orbits, one usually tries to use \emph{first order asymptotic boundary conditions}, approximating the stable manifold by its tangent space. However, boundary conditions in the form of an affine subspace \emph{transverse} to the unstable subspace at the asymptotic profile also give convergence as $L_x\to\infty$, with half the exponential rate \cite{beyn1990}, thus necessitating roughly twice the domain size $L_x$. We refer to such transverse subspaces as zeroth order asymptotic boundary conditions.
In our case, the periodic orbits come in a two-parameter family, parameterized by $k_x$ and $\varphi$, so that one would wish to approximate a strong stable subspace of the linearization. The computation of the strong stable subspace could prove quite cumbersome. One would need to construct the strong stable and center-unstable adjoint Floquet bundles to $\mathcal{L}-3u_\pm^2$, written as a first-order evolution operator in $x$ as in \eqref{e:ds}, and the associated spectral projection, which would typically be nonlocal in $y$. While such asymptotic Floquet boundary conditions have been successfully implemented in ODE contexts \cite{beyn1990,pampel2001,dieci2004,krauskopf2007,doedel2008a,doedel2008b}, we believe that, in our case, the gain of a factor of two in domain size would not outweigh the computational overhead.
Our choice of Dirichlet boundary conditions together with the phase condition \eqref{e:ph12} can be seen as a naive construction of a subspace transverse to the center-unstable subspace, which turns out to perform well in most cases. A dimension counting argument, detailed in the appendix, shows that the Dirichlet subspace together with the phase condition yields the correct dimension in a Fredholm sense, so that one may expect transversality to be generic and to fail only at a discrete set of angles (which of course still is a serious concern).
Summarizing, we solve
\begin{align}
\mathcal{L}\left(w+\sum_\pm \chi_\pm u_\pm\right)-\left(w+\sum_\pm \chi_\pm u_\pm\right)^3&=0,\qquad
(x,y)\in (-L_x,L_x)\times (0,2\pi) \label{e:bvp1}\\
w=w_{xx}&=0, \qquad (x,y)\in \{-L_x,L_x\}\times (0,2\pi)\label{e:bvp2}\\
\partial_y^j w(x,0) -\partial_y^j w(x,2\pi)&=0,\qquad x\in (-L_x,L_x), \ j=0,\ldots,3,\label{e:bvp3}\\
\int_{x=\pm L_x}^{\pm(L_x-2\pi/k_x^\pm)}\int_{y=0}^{2\pi} u'_\pm w\,\mathrm{d} y\,\mathrm{d} x&=0, \label{e:bvp4}\\
-\left((k^\pm)^2\frac{\mathrm{d}^2}{\mathrm{d} \xi^2}+1\right)^2 u_\pm+\mu u_\pm-u_\pm^3&=0,
\qquad \xi\in (0,2\pi)
\label{e:bvp5}\\
\frac{\mathrm{d}^j}{\mathrm{d} \xi^j}u_\pm(0)-\frac{\mathrm{d}^j}{\mathrm{d} \xi^j}u_\pm(2\pi)&=0, \qquad j=0,\ldots,3, \label{e:bvp6}
\end{align}
where the first equation \eqref{e:bvp1} can also be written in the form \eqref{e:fc1}. We think of this system as an equation in $k_x^\pm$ and $w$, where \eqref{e:bvp5}--\eqref{e:bvp6} are used for given $k_x^\pm$ (which gives $k^\pm=\sqrt{(k_x^\pm)^2+k_y^2}$) to obtain $u_\pm$, which is then inserted into \eqref{e:bvp1}.
In the next section, we detail how we discretize this system of equations. In the appendix, we motivate why this decomposition actually gives a well-posed, truncated boundary-value problem, uniformly in $L_x$.
We also consider the spectrum of the linearization of \eqref{e:bvp1} with respect to $w$, at a solution $w_*, u_\pm^*$
\begin{equation}\label{e:bvplin}
\mathcal{L}_* w=\mathcal{L}w-3\left(w_*+\sum_\pm \chi_\pm u^*_\pm\right)^2 w,\qquad
(x,y)\in (-L_x,L_x)\times (0,2\pi),
\end{equation}
supplemented with Dirichlet and periodic boundary conditions \eqref{e:bvp2}--\eqref{e:bvp3} as a rough indicator for temporal stability. We did not attempt to construct asymptotic boundary conditions for the linearization but notice that results as in \cite{ssabs} guarantee that Dirichlet boundary conditions are zeroth order asymptotic outside of the continuous spectrum of the stripes, except for a possibly finite set of eigenvalues $\lambda$.
\section{Implementation and convergence}
\label{s:4}
We describe details of discretization and implementation of the continuation procedure, and demonstrate convergence and robustness of the algorithm.
\paragraph{Discretization and implementation.}
We detail the numerical implementation of the grain boundary problem described in \eqref{e:bvp1}--\eqref{e:bvp6}. The one-dimensional periodic orbits $u_{\pm}$ in \eqref{e:bvp5}--\eqref{e:bvp6} are computed on the domain $\xi\in[0,2\pi)$ with a Fourier pseudo-spectral method; see~\cite{trefethen2000}. In order to interpolate the periodic orbits $u_{\pm}(\xi)$ to the skew coordinates $k_x^{\pm}x+q_\pm y$, we use a band-limited interpolant~\cite[Chapter 3]{trefethen2000}.
\begin{figure}[h]
\centering
\includegraphics{figure9}
\caption{Computational domain for the remainder function $w(x,y)$. We use a Fourier pseudo-spectral discretization in the $y$-direction and a fourth-order finite-difference method in the $x$-direction.\label{f:comp} }
\end{figure}
The computational domain for the remainder function $w(x,y)$ is shown in Figure~\ref{f:comp}. We use the same Fourier pseudo-spectral discretization in the $y$-direction and a standard fourth-order finite-difference method in the $x$-direction (see~\cite{leveque2007}). We take the cut-off functions $\chi_{\pm}$ to be $\chi_+(x) = (1+\tanh(m(x-d)))/2$ and $\chi_-=1-\chi_+$. The integral phase conditions \eqref{e:bvp4} are computed using a trapezoidal rule in both $x$ and $y$. The Jacobian for the (now algebraic) system of equations is computed explicitly with respect to the remainder function $w(x,y)$, and a first-order finite-difference approximation is used for the Jacobian with respect to the asymptotic wavenumbers $k_x^{\pm}$. The nonlinear algebraic system is then solved for $(w,k^+_x,k^-_x)$ using a trust-region Newton method~\cite{coleman1996}. Parameter exploration is carried out using secant pseudo-arclength continuation~\cite{krauskopf2007}.
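The following Python sketch illustrates the two discretizations; it is not the \textsc{matlab} implementation, and grid sizes and the test function are illustrative. It builds a fourth-order centered finite-difference second derivative in $x$ and an FFT-based spectral second derivative in $y$, and checks their accuracy on a smooth test function.

```python
import numpy as np

# Sketch of the mixed discretization: fourth-order centered finite
# differences for the second derivative in x, FFT-based spectral
# differentiation in y. Grid sizes and the test function are illustrative.
def d2x_fd4(u, h):
    """Fourth-order second derivative in x (interior points, axis 0)."""
    return (-u[4:] + 16.0 * u[3:-1] - 30.0 * u[2:-2]
            + 16.0 * u[1:-3] - u[:-4]) / (12.0 * h ** 2)

def d2y_fourier(U, Ly):
    """Spectral second derivative along axis 1 (periodic in y)."""
    ny = U.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(U, axis=1), axis=1))

# accuracy check on u(x, y) = sin(x) cos(2 y)
nx, ny, Ly = 400, 32, 2.0 * np.pi
x = np.linspace(0.0, 2.0 * np.pi, nx)
h = x[1] - x[0]
y = np.linspace(0.0, Ly, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
U = np.sin(X) * np.cos(2.0 * Y)
err_x = np.max(np.abs(d2x_fd4(U, h) + np.sin(X[2:-2]) * np.cos(2.0 * Y[2:-2])))
err_y = np.max(np.abs(d2y_fourier(U, Ly) + 4.0 * U))
```

The finite-difference error decays like $h^4$, while the Fourier derivative is exact to rounding error for band-limited data, consistent with the spectral convergence in $N_y$ reported below.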
The scheme is implemented in \textsc{matlab} (version 2014b), where typical discretizations use $N_x=1000$ mesh points on $x\in[-40\pi,40\pi]$ and $N_y=40$ Fourier collocation points in the $y$-direction. For the cutoff functions, typical values are $m=1$ and $d=100$.
As starting conditions for Newton iterations, we used sharp-interface grain boundaries, that is, stripes of piecewise constant orientation.
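Such a starting guess can be sketched as follows (Python, with illustrative wavenumbers): mirrored stripe orientations are glued along $x=0$, that is, $u_0(x,y)=\cos(k_x|x|+k_y y)$ in the symmetric case.

```python
import numpy as np

# Sketch of the sharp-interface starting guess: stripes of piecewise
# constant, mirrored orientation glued along x = 0, i.e.
# u0(x, y) = cos(kx |x| + ky y). Wavenumbers are illustrative.
def sharp_interface_guess(X, Y, kx, ky):
    return np.cos(kx * np.abs(X) + ky * Y)

x = np.linspace(-40.0 * np.pi, 40.0 * np.pi, 1001)
y = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
u0 = sharp_interface_guess(X, Y, kx=0.53, ky=0.85)
```

The guess is continuous but has a corner along $x=0$; Newton iterations then smooth out the interface and adjust the far-field wavenumbers.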
Temporal stability of the grain boundaries is calculated by computing eigenvalues of the linear operator defined in \eqref{e:bvplin}. We use the same spatial discretization as for the computation of the grain boundaries, yielding a large matrix eigenvalue problem that we solve using \textsc{matlab}'s \verb1eigs1 command, which uses an implicitly restarted Arnoldi iteration~\cite{lehoucq1996,sorensen1992}.
We next show results that illustrate the robustness and convergence of the algorithm. As a measure of convergence, we used the selected wavenumbers $k^\pm$. We noticed that those wavenumbers converge, as might be expected, to the wavenumber at the zigzag boundary; see \S\ref{s:zz}. We therefore computed the zigzag (transverse) instability of 1D stripes in AUTO07p~\cite{auto07p}. To do this, we solve for the 1D stripes $u(x)$ and compute the transverse instability criterion $\lambda_t$,
\begin{equation}\label{e:zigzag}
\lambda_t = 2\frac{\langle(\partial_x^2+1)u_x,u_x\rangle}{\langle u_x,u_x\rangle},
\end{equation}
where $\lambda_t$ is the eigenvalue associated with transverse perturbations of the form $\hat u e^{iky}$. If $\lambda_t<0$, the 1D stripes are transversely stable, and the zigzag instability boundary occurs when $\lambda_t=0$; see~\cite{mielke1997}. Setting $\lambda_t=0$ allows one to fix the 1D stripe wavenumber $k_{\mathrm{zz}}$. We compute the 1D stripes in AUTO to a relative tolerance of $10^{-6}$ and the zigzag criterion boundary to a relative tolerance of $10^{-10}$.
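As a consistency check, $\lambda_t$ can be evaluated spectrally. The sketch below (Python, illustrative; not the AUTO computation) uses the sign convention for which $\lambda_t$ vanishes at the zigzag boundary, $\lambda_t = 2\langle(\partial_x^2+1)u_x,u_x\rangle/\langle u_x,u_x\rangle$. For the one-mode stripe $u=\cos(kx)$ this gives $\lambda_t=2(1-k^2)$, consistent with $k_\mathrm{zz}\to 1$ at leading order.

```python
import numpy as np

# Spectral evaluation of the transverse (zigzag) criterion for a stripe,
# with the sign convention for which lambda_t vanishes at the zigzag
# boundary: lambda_t = 2 <(d_x^2 + 1) u_x, u_x> / <u_x, u_x>.
# For the one-mode stripe u = cos(k x) this reduces to
# lambda_t = 2 (1 - k^2), so k_zz -> 1 at leading order.
def stripe_lambda_t(k, n=256):
    x = np.linspace(0.0, 2.0 * np.pi / k, n, endpoint=False)
    kappa = 2.0 * np.pi * np.fft.fftfreq(n, d=(2.0 * np.pi / k) / n)
    u_hat = np.fft.fft(np.cos(k * x))    # one-mode stripe approximation
    ux = np.real(np.fft.ifft(1j * kappa * u_hat))
    op = np.real(np.fft.ifft((1.0 - kappa ** 2) * 1j * kappa * u_hat))
    return 2.0 * np.dot(op, ux) / np.dot(ux, ux)
```

In this approximation, stripes with $k<1$ are zigzag unstable ($\lambda_t>0$) and stripes with $k>1$ are zigzag stable ($\lambda_t<0$), in line with the discussion of marginal stability below.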
\paragraph{Convergence of the algorithm.}
We present results on the convergence of the algorithm and its sensitivity to the computational parameters $N_x,N_y,L_x,m$ and $d$. We will also illustrate the effectiveness and limitations of the phase conditions.
Our test case will be a weakly bent symmetric grain boundary at $\mu=1$ in~(\ref{e:sh}), where we take $k_y=0.85$. The selected asymptotic wavenumber $k^{\pm}$ of the grain boundary is the critical zigzag instability wavenumber of the 1D stripes, i.e., $k_\mathrm{zz}=0.9991$. In Figure~\ref{f:convergence1}, we plot the difference (error) between the computed asymptotic wavenumber and the zigzag wavenumber $k_\mathrm{zz}$.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure10}
\caption{Convergence of the selected wavenumber of the far-field stripes for (a) $\mu=1,k_y=0.85,n_y=40,L_x=80\pi,d=100$, where $n_x$ is the number of finite-difference points used in the $x$-direction, and (b) $\mu=1,k_y=0.85,n_x=1000,L_x=80\pi,d=100$. The error is defined as the difference from the selected wavenumber computed with $\mu=1,k_y=0.85,n_y=40,n_x=1000, L_x=80\pi,d=100$.\label{f:convergence1}}
\end{figure}
We see in Figure~\ref{f:convergence1} that, even for rather crude discretizations, the asymptotic wavenumber of the far-field stripes of the grain boundary is approximated very well. In particular, we observe spectral (geometric) convergence as we increase the number of Fourier collocation points $N_y$.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure11}
\caption{Convergence of the selected wavenumber of the far-field stripes for (a) $\mu=1,k_y=0.85,n_y=40,d=50, n_x=500$, varying $L_x$ and (b) varying the cutoff $d$ with $L_x=40\pi$.\label{f:convergence2}}
\end{figure}
In Figure~\ref{f:convergence2}, we show how the selected asymptotic wavenumber $k^{\pm}$ depends on $L_x$ and the cutoff distance $d$. We see in Figure~\ref{f:convergence2}(a) that, if $L_x$ is greater than about twice the cutoff distance $d$, then the far-field selected wavenumber is independent of $L_x$ (here $k_y=0.85$). In particular, we find that the length of the domain needs to be sufficiently large that the remainder function $w$ vanishes over one far-field skewed stripe. However, even for small $L_x$ the selected far-field wavenumber is reasonably accurate. In Figure~\ref{f:convergence2}(b) we see that the cutoff distance $d$ has almost no effect on the selected wavenumber $k$; see for instance~\cite{beyn1990} for results on exponential convergence in $L_x$ for heteroclinic orbits.
Next, we compute the condition number of the system \eqref{e:bvp1}--\eqref{e:bvp6} with respect to $w(x,y)$ as we vary the discretization in $x$. We use \textsc{matlab}'s \verb1condest1 routine~\cite{hager1984} to compute a lower bound for the 1-norm condition number. We find that the condition number estimate is of order $10^9$ for typical discretizations (well below the reciprocal of machine precision) and robust with respect to changes in the computational parameters.
\begin{figure}[h]
\centering
\includegraphics{figure12}
\caption{Plot of $w(x,y)$, in the situation where we observe the emergence of non-vanishing tails outside $x\in[-d,d]$; $\mu=1,k_y=9.636267\times10^{-2},n_y=20,n_x=500,L_x=40\pi,d=40$.\label{f:loss_transverse}}
\end{figure}
The rate of exponential convergence is related to the temporal stability of the stripes through the complex dispersion relation, as we will explain in the appendix. Indeed, both for horizontal rolls and for large vertical periods, $k_y\ll 1$, the zigzag instability manifests itself via slowly decaying tails of $w$. Figure~\ref{f:loss_transverse} shows the remainder function $w(x,y)$ for $k_y=9.636267\times10^{-2}$. Note in particular the tails in $w(x,y)$ outside the cut-off window $x\in[-d,d]$.
\section{Applications}
\label{s:5}
We apply the numerical procedures outlined above to study grain boundaries in the Swift-Hohenberg equation \eqref{e:sh}. Fixing the parameter $\mu$, we continue grain boundaries in the angle and study their properties and possible bifurcations. We start by investigating wavenumber selection, in particular the fact that grain boundaries tend to select marginally stable stripe patterns at the zigzag boundary in Section \ref{s:zz}. After briefly discussing phase selection at the interface, Section \ref{s:phase}, we focus on the behavior of grain boundaries as the angle is varied from obtuse to acute, Section \ref{s:path}. We then study grain boundaries with $(q_-,q_+)$ different from $(1,-1)$, exhibiting an interesting bifurcation near grain boundaries interfacing horizontal stripes in Section \ref{s:hor}. Finally, we show pinning effects in Section \ref{s:other}, when the core of grain boundaries widens to contain patches of vertical stripes. We also show that grain boundaries between spots are significantly more complicated, with more dominant pinning effects leading to snaking bifurcation diagrams.
\subsection{Selection of marginally stable stripes}\label{s:zz}
At small amplitude, $\mu\ll 1$, the leading-order description of grain boundaries via the amplitude equation shows that grain boundaries select wavenumbers; that is, for a fixed angle, grain boundaries exist only for a particular wavenumber in the far field. At leading order in $\mu$, this wavenumber is $k=1$ in the Swift-Hohenberg equation. We demonstrate here numerically that this property also holds at finite amplitude, and show that the selected wavenumber agrees with the wavenumber defined by the zigzag boundary in~(\ref{e:zigzag}); see Figure \ref{f:symmetric_gb_k}. The numerical discrepancy is less than $10^{-10}$. For very acute angles, that is, large vertical period $2\pi/k_y$, the resolution in $y$ is poor and we observe discrepancies. For very obtuse angles, that is, weak bending, convergence of grain boundaries to stripes is slow in $x$ \cite{haragus2007}, which also introduces numerical inaccuracies.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figure13}
\caption{Wavenumber $k$ of stripes selected by the symmetric grain boundary (a) as $\mu$ is varied, with the angle fixed at $\phi_+=0.5534$, $k_y=0.85$, and (b) vs. angle $\phi_+=\arccos(k_y/k)$ (compare Fig. \ref{f:gbsc}), with the parameter $\mu=1$ fixed. Computational parameters are $n_x = 600,L_x=10\pi,n_y=20$.}
\label{f:symmetric_gb_k}
\end{figure}
\paragraph{Variational reasons for marginal stability.}
A possible reason for the selection of marginally zigzag stable stripes can be seen by looking at the energy of the Swift-Hohenberg equation. The Swift-Hohenberg equation~(\ref{e:sh}) is a gradient flow
\[
u_t = -\nabla\mathcal{E}(u),
\]
in $H^2(\mathbb{R}^2)$, where the energy functional $\mathcal{E}$ is given by
\begin{equation}\label{e:energy_SH}
\mathcal{E}(u) = \int_{\mathbb{R}^2}\left[\frac{1}{2}[(1+\Delta)u]^2 -\frac{1}{2} \mu u^2+ \frac{1}{4}u^4\right]\mbox{d}\mathbf{x},\qquad \mathbf{x}\in\mathbb{R}^2,
\end{equation}
and the gradient $\nabla\mathcal{E}(u) = \frac{\delta\mathcal{E}}{\delta u}(u)$ of $\mathcal{E}$ with respect to $u$ is computed with respect to the $L^2(\mathbb{R}^2)$ inner product. Equilibria of the Swift-Hohenberg equation are critical points of the energy functional. Since grain boundaries appear to be stable, it is natural to look for grain boundaries as local minimizers of $\mathcal{E}$. To our knowledge, an existence proof that constructs grain boundaries as minimizers using methods from the calculus of variations is not available, even looking beyond the example of the Swift-Hohenberg equation.
On the other hand, one can envision initializing a system with an interface between stripes of different orientation, as we did in Figures \ref{f:sn} and \ref{f:cont}, which leads to mixing of wavenumbers in the far field. Since the energy of the system decreases in this process, one concludes that the wavenumber selected by the grain boundary should necessarily correspond to the wavenumber of stripes with minimal energy per unit length. More precisely, one can define the average energy of a stripe $u_\mathrm{s}(\xi;k)$, $\xi=kx$, as
\[
\mathcal{E}(k) = \frac{1}{2\pi}\int_0^{2\pi}\left[\frac12[(1+k^2\partial_\xi^2)u]^2 - \frac12\mu u^2 + \frac14u^4\right] \mathrm{d} \xi,
\]
and minimize with respect to $k$. The minimum is then attained at the zigzag boundary, $k=k_\mathrm{zz}$. Renormalizing the energy,
\begin{equation}\label{e:energy_SHr}
\mathcal{E}_\mathrm{re}(u) = \int_{\mathbb{R}^2}\left[\frac{1}{2}[(1+\Delta)u]^2 -\frac{1}{2}\mu u^2 + \frac{1}{4}u^4-\mathcal{E}(k_\mathrm{zz}) \right]\mbox{d}\mathbf{x},\qquad \mathbf{x}\in\mathbb{R}^2,
\end{equation}
one expects to find a local minimum at a grain boundary at finite energy when restricting to functions with periodicity $2\pi/k_y$ in $y$, and evaluating integrals on a fundamental domain.
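This energy argument can be checked at leading order: within the one-mode approximation $u_\mathrm{s}(\xi;k)\approx A(k)\cos\xi$ with $A(k)^2=4(\mu-(1-k^2)^2)/3$, the average energy is $\mathcal{E}(k)=-(\mu-(1-k^2)^2)^2/6$, minimized exactly at $k=1$; the small shift of the true minimizer to $k_\mathrm{zz}=0.9991$ at $\mu=1$ is due to higher harmonics of the stripe profile. A Python sketch (the one-mode ansatz is an approximation, not the computed stripe):

```python
import numpy as np

# Leading-order check of the energy argument: within the one-mode
# approximation u_s(xi; k) ~ A(k) cos(xi), A(k)^2 = 4 (mu - (1 - k^2)^2)/3,
# the average energy per period is E(k) = -(mu - (1 - k^2)^2)^2 / 6,
# minimized exactly at k = 1. (Higher harmonics of the true stripe
# profile shift the minimizer slightly, to k_zz = 0.9991 at mu = 1.)
def stripe_energy(k, mu=1.0, n=512):
    xi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    A = np.sqrt(4.0 * (mu - (1.0 - k ** 2) ** 2) / 3.0)
    u = A * np.cos(xi)
    lin = (1.0 - k ** 2) * u             # (1 + k^2 d_xi^2) u for a single mode
    e = 0.5 * lin ** 2 - 0.5 * mu * u ** 2 + 0.25 * u ** 4
    return np.mean(e)                    # trapezoidal rule on a periodic grid

ks = np.linspace(0.8, 1.2, 401)
k_min = ks[np.argmin([stripe_energy(k) for k in ks])]
```

On a uniform periodic grid, the mean coincides with the trapezoidal rule used in the implementation, and the quadrature is exact for the trigonometric integrand.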
\paragraph{Hamiltonian reasons for marginal stability.}
It turns out that, given existence of a grain boundary, one can use the variational structure to conclude that grain boundaries select the zigzag critical wavenumber, exploiting the fact that the spatial dynamics formulation \eqref{e:ds} defines an ill-posed Hamiltonian equation. As a consequence of Noether's theorem, the equation then possesses conserved quantities associated with the continuous symmetries of the equation, namely translations in $x$ and $y$. To be more precise, in the notation from \eqref{e:ds}, consider the symplectic form generated by $L^2$-inner product and the skew-symmetric matrix $J$,
\[
J=\begin{pmatrix}
0&0&0&1\\
0&0&-1&0\\
0&1&0&0\\
-1&0&0&0
\end{pmatrix},
\qquad J^T=-J=J^{-1},\ J^2=-\mathrm{id},
\]
which, writing $q=(u,u_1)^T$, $p=(v,v_1)^T$, is simply the standard symplectic form, and the Hamiltonian
\[
H[\underline{u}]=\int_y h(\underline{u}),\quad h(\underline{u})=-\frac{1}{2} v^2+u_1v_1 + v(u_{yy}+u)+G(u),\ \underline{u}=(u,u_1,v,v_1)^T,\ G(u)=-\frac{\mu}{2}u^2+\frac{1}{4}u^4,
\]
where we have re-scaled $y$ to be of period $2\pi/k_y$.
Then \eqref{e:ds} can be written in the form
\[
\underline{u}_x=J\nabla_{L^2}H[\underline{u}],
\]
and the Hamiltonian $H$ is conserved. In addition, the translation symmetry in $y$ induces an additional conserved quantity $S$ which we will refer to as momentum,
\[
S[\underline{u}]=\int_y s(\underline{u}),\quad s(\underline{u})=u (v_1)_y+v (u_1)_y,\quad J\nabla_{L^2}S[\underline{u}]=\partial_y\underline{u},
\]
and, in particular, for solutions of \eqref{e:ds},
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d} x} H[\underline{u}(x,\cdot)]=
\frac{\mathrm{d}}{\mathrm{d} x} S[\underline{u}(x,\cdot)]=0.
\end{equation}
As a consequence, both Hamiltonian and momentum are equal on the asymptotic stripes of a grain boundary.
Slightly abusing notation, define $H(k):=H[\underline{u}_\mathrm{s}^k]$, $S(k):=S[\underline{u}_\mathrm{s}^k]$, where $u_\mathrm{s}^k$ is the striped pattern with wave vector $k=(k_x,k_y)$, and write $k^\pm$ for the asymptotic wave vectors of a grain boundary. Further writing $u_\mathrm{s}=u_*(k_x x + k_y y;|k|)$, we obtain
\begin{align}
H(k)&=\int_\xi \left(\frac{1}{2}k_y^4-\frac{3}{2}k_x^4-k_x^2k_y^2\right)(u_*'')^2 + (k_x^2-k_y^2)(u_*')^2+\frac{1}{2}u_*^2+G(u_*),\nonumber\\
S(k)&=2k_xk_y\int_\xi |k|^2 (u_*'')^2-(u_*')^2.
\end{align}
Marginal zigzag stability occurs when $\int_\xi |k|^2 \left(u_*''\right)^2-\left(u_*'\right)^2=0$, as one readily verifies by minimizing the energy of stripes. Therefore, $S=0$ precisely when $|k|=k_\mathrm{zz}$. For symmetric grain boundaries, $k_x^-=-k_x^+$ and $k_y^-=k_y^+$, such that $S(k^-)=S(k^+)$ implies $|k^-|=|k^+|=k_\mathrm{zz}$.
One also readily verifies that for $|k|=k_\mathrm{zz}$,
\[
H(k)=\int_\xi -\frac{1}{2}|k|^4 (u_*'')^2 + \frac{1}{2}u_*^2+G(u_*),
\]
depends only on $|k|$, implying that arbitrary orientations of marginally zigzag stable stripes are compatible.
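For completeness, we record the short computation behind this reduction: eliminating $\int_\xi (u_*')^2$ via the relation $\int_\xi (u_*')^2=|k|^2\int_\xi (u_*'')^2$, valid at $|k|=k_\mathrm{zz}$, gives
\begin{align*}
H(k)&=\int_\xi \left(\frac{1}{2}k_y^4-\frac{3}{2}k_x^4-k_x^2k_y^2\right)(u_*'')^2 + \left(k_x^2-k_y^2\right)\left(k_x^2+k_y^2\right)(u_*'')^2+\frac{1}{2}u_*^2+G(u_*)\\
&=\int_\xi -\frac{1}{2}\left(k_x^2+k_y^2\right)^2(u_*'')^2 +\frac{1}{2}u_*^2+G(u_*),
\end{align*}
since the coefficients of $(u_*'')^2$ combine to $\frac{1}{2}k_y^4-\frac{3}{2}k_x^4-k_x^2k_y^2+k_x^4-k_y^4=-\frac{1}{2}|k|^4$.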
\paragraph{Non-variational effects --- selection of stable stripes.}
As noted above, the zigzag boundary is usually associated with an orientational instability induced by the fact that stripes can reduce their local wavelength through local shear in the direction of the wave vector. As a consequence, one notices an instability of stripes with wavenumber smaller than the energy-minimizing zigzag wavenumber. More directly, one sees that the linearization of stripes becomes unstable as the wavenumber $k$ is decreased through $k_\mathrm{zz}$. More precisely, the linearization
\[
\mathcal{L}_\mathrm{s}v=-(\Delta+1)^2 v + \mu v - 3 u_\mathrm{s}^2(kx;k)v,
\]
can be written in Fourier-Bloch space as
\[
L_\mathrm{s}(\ell,\sigma;k)w=-\left((\partial_x+\mathrm{i}\sigma)^2-\ell^2+1\right)^2 w +\mu w-3 u_\mathrm{s}^2(kx;k)w,
\]
where $L_\mathrm{s}$ is posed on $2\pi$-periodic functions, with Fourier-Bloch parameters $\sigma,\ell$. The eigenvalue $\lambda=0$ associated with the translation mode $u_\mathrm{s}'$ at $\sigma=\ell=0$ possesses an expansion
$\lambda(\sigma,\ell)=-d_{||}(k)\sigma^2-d_\perp(k) \ell^2 + \mathrm{O}(4)$, where $d_{||}$ and $d_{\perp}$ correspond to perturbations parallel and perpendicular to the wave vector, and $d_{\perp}$ changes sign at $k=k_\mathrm{zz}$; see~\cite{cross1993,mielke1997}. In this sense, stripes at the zigzag boundary are marginally stable in the family of stripes.
We demonstrate below that the variational characterization of the zigzag boundary, rather than the marginal stability, is responsible for the selection by grain boundaries. We therefore perturb the Swift-Hohenberg equation~(\ref{e:sh}), adding $\alpha(\nabla u)^2u$ to the right-hand side. As a consequence, variational characterizations of the zigzag boundary are no longer available. On the other hand, the sign change of $d_{\perp}$ still occurs at some critical wavenumber $k_\mathrm{zz}(\alpha)$. We computed both this marginally stable wavenumber $k_\mathrm{zz}(\alpha)$ and the wavenumber selected by the grain boundaries. The results show that grain boundaries always select zigzag stable stripes. Marginally stable stripes are selected only in the variational case $\alpha=0$; see Figure \ref{f:nonvar_selection}. It would be interesting to understand this rigidity theoretically, that is, to explain the fact that selected wavenumbers move towards stable stripes upon addition of non-variational terms.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figure14}
\caption{ (a) Selected wavenumber $k$ as a function of the angle of the selected far-field stripes for $\mu=1$ and $\alpha=-0.5$, compared to the wavenumber of zigzag marginally stable stripes. (b) Selected wavenumber $k$ as a function of $\alpha$ for fixed $k_y=0.85$ and $k_y=0.65$. Here, a term $+\alpha(\nabla u)^2u$ has been added to the right-hand side of the Swift-Hohenberg equation.\label{f:nonvar_selection}}
\end{figure}
\subsection{Phase selection at grain boundaries --- non-adiabatic effects}\label{s:phase}
In the normal form at small amplitudes, there exists a family of grain boundaries, in which stripes at $\pm\infty$ can be shifted vertically relative to each other. One expects this normal form or averaging symmetry to be present at all orders in an expansion, while terms beyond all orders enforce the selection of a relative vertical phase of asymptotic stripes at the grain boundary. In a simplistic picture, one can envision effective gradient dynamics on the circle of grain boundaries parameterized by the relative phase, with at least two critical points. The proofs in \cite{haragus2012} show that even and odd (in $x$) grain boundaries persist, with a phase-mismatch of $0,\pi$, respectively, at $x=0$. We computed odd grain boundaries and showed that they possess properties similar to even grain boundaries, that is, they select zigzag marginally stable stripes; see Figure \ref{f:anti_phase}. Consistent with the simplistic effective dynamics on the circle of relative phases, we find that these anti-phase matched grain boundaries are temporally unstable for all angles.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{figure15}
\caption{Selected wavenumber of far-field stripes for the odd grain boundary as a function of the angle between stripes with contour plot of computed odd grain boundary as inset. Computational parameters: $n_x = 1000,L_x=10\pi,n_y=20$; sub-panel is on $(x,y)\in[-30,30]\times[-\pi,\pi)$.}\label{f:anti_phase}
\end{figure}
\subsection{Acute angles: from grain boundaries to dislocations and disclinations}\label{s:path}
Phenomenologically, one is interested in the behavior of grain boundaries as the angle of asymptotic stripes is changed. This section and the next present a detailed numerical study, continuing grain boundaries in the combined angle. This section is concerned with the simplest case, $(1,-1)$, even grain boundaries. We turn to asymmetric grain boundaries, with different $(q_-,q_+)$, in the next section.
\paragraph{Continuation to acute angles.}
Recall that symmetric grain boundaries are symmetric with respect to reflections $x\mapsto -x$, and in addition with respect to a parity-shift transformation, $u\mapsto -u$, $y\mapsto y+\pi$. For obtuse angles $\phi_+$ (compare Figure \ref{f:gbsc} for definition of angles), the results in \cite{haragus2007} show that these are the only possible grain boundaries. We continue this branch for fixed $\mu=1$ in the angle. We do not impose reflection or parity-shift symmetries. Figure \ref{f:GB_dis} shows the results of this computation. We see that the primary branch of symmetric grain boundaries continues to arbitrarily small angles. As we continue, the shape of grain boundaries changes, as a protrusion at the interface develops. Eventually, for acute angles, the symmetric grain boundary consists of a pair of dislocations, conjugate to each other by the parity-shift transformation. We find, however, that this symmetric, primary branch is unstable for a range of angles, against perturbations that break the parity-shift symmetry. The primary branch restabilizes for yet smaller angles. The left panel of Figure \ref{f:GB_dis} shows typical grain boundaries along the primary branch.
\paragraph{Parity-shift breaking bifurcations: destabilization and restabilization of the primary branch.}
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.75\linewidth]{figure16}
\caption{(a) Bifurcation diagram of $(1,-1)$-grain boundaries, showing the primary grain boundary branch and the first pitchfork branch, and (b) instability interval of the primary branch as a function of $\mu$ (shaded region), bounded by $\phi_\mathrm{pf,1}$ (upper red curve) and $\phi_\mathrm{pf,2}$ (lower red curve). Shown below are several sample grain boundary profiles along the primary and secondary branches.\label{f:GB_dis} }
\end{figure}
The primary branch destabilizes at an angle $\phi_\mathrm{pf,1}(\mu)$ in a parity-shift symmetry-breaking pitchfork bifurcation. It remains unstable until a smaller angle $\phi_\mathrm{pf,2}(\mu)$ is reached. At $\phi_\mathrm{pf,1}(\mu)$, grain boundaries with broken parity-shift symmetry bifurcate. We continued this bifurcating branch down to small acute angles and did not detect further bifurcations or instabilities along this asymmetric branch. In particular, the bifurcating branch does not reconnect to the primary branch at $\phi_\mathrm{pf,2}(\mu)$. We suspect that at $\phi_\mathrm{pf,2}(\mu)$ an unstable branch of grain boundaries bifurcates from the primary branch, separating the basins of attraction of the two stable branches for small angles, but we were not able to continue this secondary branch. The right panel in Figure \ref{f:GB_dis} depicts typical profiles along the bifurcated branch. The top panels in Figure \ref{f:GB_dis} show the bifurcation diagram for $\mu=1$, with $k_y\approx0.6$ at the first pitchfork bifurcation ($\phi_\mathrm{pf,1}(\mu=1)\sim 0.962$), and the instability interval $(\phi_\mathrm{pf,2}(\mu),\phi_\mathrm{pf,1}(\mu))$ for $\mu\in (0,1)$. We note that, in agreement with the discussion in Section \ref{s:zz}, selected wavenumbers agree with the zigzag stability boundary for both the primary and the secondary branch.
Numerics suggest that $\phi_\mathrm{pf,1/2}(\mu)\to \pi/2$ for $\mu\to 0$. This is in agreement with the small-amplitude bifurcation analysis in \cite{scheel2014}, where grain boundaries were constructed for $0<\mu<\mu_\mathrm{max}(\phi_+)$, and where $\mu_\mathrm{max}(\phi_+)>0$ could converge to zero as $\phi_+\to \pi/2$. In particular, the analysis in \cite{scheel2014} did not suggest any bifurcations along the primary branch of grain boundaries, and we suspect that the bifurcation observed here lies outside the range of validity of the analysis there.
The parity-breaking bifurcation is related to observations in~\cite{ercolani2003}.
Ercolani~{\it et al.}~\cite{ercolani2003} showed numerically how disclinations form at grain boundaries as the angle between the asymptotic stripes $\phi$ becomes large. We note in particular the result of the direct simulation, reproduced in Figure \ref{f:stadion}, where a family of stripes creates a grain boundary with continuously decreasing angle $\phi_+$. One clearly notices the qualitative change induced by the parity-shift breaking pitchfork bifurcation.
\paragraph{Defects along primary and secondary branches.}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{figure17}
\caption{(a) Plot of $u(0,y)$ of the primary grain boundary at $k_y=0.3$, with turning points indicated by circles; (b) plot of defect locations as $k_y$ is varied for both the primary (blue) and secondary branches (gold and red, respectively). The vertical dotted lines denote the locations of the pitchfork bifurcations at $\phi_\mathrm{pf,1}$ and $\phi_\mathrm{pf,2}$; (c) plot of the primary grain boundary at $k_y=0.3$, with defect locations indicated by circles corresponding to (a).\label{f:defect}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure18}
\caption{(a) Energy of primary and secondary symmetric branches; (b) energy difference between primary (higher energy) and secondary (lower energy) branches.\label{f:energy1}}
\end{figure}
At least phenomenologically, grain boundaries can be described in terms of defects that may or may not be created when stripes of different orientation create interfaces. The work in \cite{passot1994,ercolani2003,ercolani2009} provides a rationale for the emergence of defects in a largely model independent context, relying rather on the description of stripes via phase modulation equations, and the crucial fact that phase gradients may exhibit apparent singularities since wave vectors are directors rather than vectors due to the underlying reflection symmetries of stripes.
Phenomenologically, one notices in Figure \ref{f:GB_dis} that, as the angle $\phi_+$ is decreased, distinct point defects develop at the grain boundary. Along the primary branch, a pair of dislocations, conjugate by the parity-shift symmetry, develops. Along the asymmetric branch, these two conjugate dislocations split into disclinations, two of which cancel, the other two forming a bound state; see Figure \ref{f:schematic} below for more schematic depictions of grain boundaries.
We therefore try to track the emergence of defects at the grain boundary in this particular example of the Swift-Hohenberg equation. A difficulty one faces in such characterizations is that the location, or even the existence, of a point defect is not universally defined, rendering the preceding phenomenological discussion imprecise. One usually looks for a singularity of the phase or its gradient, which however requires an unambiguous definition of the phase almost everywhere.
Here, we define (somewhat arbitrarily) a defect of an even grain boundary to be a critical point of the profile $u(0,y)$; see Figure~\ref{f:defect}(a) \& (c). Note that such critical points correspond to critical points of $u(x,y)$ since $u(x,y)=u(-x,y)$, such that $x$-derivatives vanish at $x=0$. Figure~\ref{f:defect}(b) shows defects of primary and secondary branches as we vary the angle (alias $k_y$). Note that we also track the global maximum and minimum, which exist also for obtuse angles due to periodicity, but our interest is in newly emerging critical points.
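This criterion is simple to implement numerically: locate sign changes of a finite-difference derivative of the sampled profile and discard candidates whose second derivative is too small to give a well-defined location. The sketch below uses a hypothetical stand-in for $u(0,y)$; the actual profiles come from the continuation code.

```python
import numpy as np

def critical_points(u, y):
    """Critical points of a periodic profile u(y): sign changes of the
    centered first derivative, keeping only candidates whose second
    derivative is large enough to give a well-defined location."""
    dy = y[1] - y[0]
    du = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dy)
    d2u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dy**2
    idx = np.where(du * np.roll(du, -1) < 0)[0]   # derivative changes sign
    return [y[i] for i in idx if abs(d2u[i]) > 1e-2]

# hypothetical stand-in for the profile u(0, y), one vertical period;
# the grid is offset so no critical point lands exactly on a grid point
y = (np.arange(2048) + 0.5) * 2 * np.pi / 2048
u = np.cos(y) + 0.3 * np.cos(2 * y)
pts = critical_points(u, y)
print(len(pts))  # 4: extrema near y = 0 and pi, plus the pair at cos(y) = -5/6
```

The second-derivative filter mirrors the remark below that critical points with small second derivative do not give well-defined defect locations.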
We observe that the primary branch develops four defects (two pairs, conjugate by parity-shift symmetry) just before the re-stabilizing pitchfork at $k_y\approx0.4$. For the secondary pitchfork branches, we observe that maximum and minimum continue from those of the primary branch at $k_y\approx0.6$. As $k_y$ decreases further, two new defects develop at $k_y\approx0.5$. We found further critical points for small $k_y$ but chose not to indicate them in Figure~\ref{f:defect}(b), as second derivatives were small at these points, indicating that they do not give well-defined defect locations.
\paragraph{Energy of grain boundaries along primary and secondary branches.}
In the region of bistability, one can attempt to derive a selection criterion for grain boundaries based on the energy. Since grain boundaries converge exponentially to stripes with energy-minimizing wavenumber, they possess finite renormalized energy $\mathcal{E}_\mathrm{re}$ as defined in \eqref{e:energy_SHr}. We computed the energy of the marginally stable stripes $\mathcal{E}_\mathrm{zz}$ using AUTO07p and used this result to compute the renormalized energy of the grain boundaries. The results are shown in Figure~\ref{f:energy1}. As expected, we notice that for weak bending, that is, obtuse angles, $\phi_+\to \pi/2$, $k_y \to k_\mathrm{zz}$, the energy tends to zero. The energy increases monotonically as the angle is decreased. Energies of the primary and secondary branches differ very little. In the bistability region, for $k_y\lesssim 0.6$, the energy of the secondary branches is slightly lower than that of the primary branch, indicating a weak preference for parity-shift broken grain boundaries at small angles.
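The link between the selected wavenumber and energy minimization can be illustrated at leading order in an amplitude expansion, $u_\mathrm{s}\approx A(k)\cos(kx)$ with $A^2=4(\mu-(1-k^2)^2)/3$, using the standard Swift-Hohenberg energy density $\frac12((1+\partial_x^2)u)^2-\frac12\mu u^2+\frac14 u^4$. The sketch below is an illustration only (the computation in the text uses AUTO07p and the exact stripes); it recovers that the minimizer sits at $k=1$, which is also the leading-order zigzag boundary.

```python
import numpy as np

mu = 0.5
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)  # phase k*x

def stripe_energy(k):
    """Average energy density of the leading-order stripe A cos(kx)."""
    A = np.sqrt(4 * (mu - (1 - k**2) ** 2) / 3)
    u = A * np.cos(theta)
    lin = (1 - k**2) * u          # (1 + d_x^2) applied to A cos(kx)
    return np.mean(0.5 * lin**2 - 0.5 * mu * u**2 + 0.25 * u**4)

ks = np.linspace(0.85, 1.15, 601)
k_min = ks[np.argmin([stripe_energy(k) for k in ks])]
print(k_min)  # close to 1: leading-order energy minimizer coincides with k_zz
```

Analytically, the averaged density is $-(\mu-(1-k^2)^2)^2/6$ at this order, which is minimized exactly at $k=1$.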
\subsection{Asymmetric grain boundaries --- small resonances}\label{s:hor}
Generally, grain boundaries involve two angles $\phi_+$ and $\phi_-$ of stripes relative to the grain boundary interface. In this regard, the grain boundaries considered thus far are a very special subclass. We describe here how to study asymmetric grain boundaries, $\phi_+\neq \phi_-$, with several restrictions. First, our approach is tied to the case of resonant angles. Second, resonances where $q_\pm$ are large require small $k_y$, or, equivalently, large $y$-periods, which increases computational cost significantly. We therefore limit ourselves to resonances where $q_\pm$ are small. The results do, however, suggest a building pattern for resonant grain boundaries with large $q_\pm$ or even non-resonant grain boundaries.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figure19}
\caption{(a) Bifurcation diagram for $(2,1)$ (blue) and $(2,-1)$ (red) grain boundaries as a function of $k_y$. (b) and (c) show the same bifurcation diagram with $k^\pm$ plotted versus $k_y$ or the combined angle $\phi_\mathrm{c}$, respectively; for large angles, small $k_y$, $k^\pm=k_\mathrm{zz}$, which is also the wavenumber on the $(2,-1)$-branch. Sample plots correspond to labels on solution branches. \label{f:21GB}}
\end{figure}
We focus on grain boundaries with $q_-\neq q_+$ and comment only briefly on other cases. Grain boundaries considered here break the reflection symmetry in $x$. Existence proofs are not known beyond the normal form approximation, which allows for a family of grain boundaries due to arbitrary relative phase shifts as described above. Our results strongly suggest existence of such grain boundaries for specific relative shifts and indicate some intriguing bifurcations.
Practically, we compute grain boundaries asymptotic to striped patterns $u_\mathrm{s}(k_x^\pm x +q_\pm y;k^\pm)$ for given, fixed $q_\pm\in\mathbb{Z}$, with $k_y$ as our main bifurcation parameter.
Note that the distinction induced by the sign of $q_\pm$ is equivalent to a reflection in $x$. Including the parameter $k_y$, the family of stripes with $q_+$ is connected to the family with $-q_+$ through the horizontal stripes with $k_x=0$.
\paragraph{Dislocations and $(2,1)$ grain boundaries.}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.65\linewidth]{figure20}
\caption{Selected wavenumbers and sample plots for $(3,1)$ grain boundaries. The bifurcation diagram is similar to Figure \ref{f:21GB} with a saddle-node bifurcation. Wavenumbers close to the saddle-node bifurcation \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt}{2}}} differ from the zigzag wavenumber, which is the selected wavenumber for stable and unstable branches away from the saddle-node, \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt}{1}}} and \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt}{3}}}. The bottom sample plot \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt}{4}}} corresponds to $k_y=0.2$. \label{f:31GB}}
\end{figure}
We compute $(2,1)$ grain boundaries using the techniques introduced here. We continue in the parameter $k_y$, which forces an effective change in the relative angle when the wavenumber $k$ is kept fixed, for instance at $k=k_\mathrm{zz}$.
Figure~\ref{f:21GB} shows the bifurcation diagram in this case. For small values of $k_y$, the stripes are almost vertical and the slight discrepancy in angle is accommodated by a single dislocation (per vertical period) at the interface. Extending periodically, one sees that the grain boundary is in this way composed of evenly spaced dislocations. Increasing $k_y$ reduces this spacing, and for values of $k_y$ closer to 0.5 the dislocations deform strongly. At $k_y\sim 0.5$, the marginally zigzag stable stripes are horizontal. Fixing $k=k_\mathrm{zz}$, they undergo a saddle-node bifurcation in $k_y$, where the slope of level lines changes sign. We do \emph{not}, however, see the grain boundaries following this saddle-node bifurcation. Rather, we see a saddle-node with an induced change of wavenumber in the far field, on both sides of the grain boundary. After the saddle-node, we see what appears to be a phase-mismatched $(2,1)$ grain boundary.
\paragraph{From $(2,1)$ to $(2,-1)$ grain boundaries.}
Plotting bifurcation diagrams against the combined angle as in Figure \ref{f:21GB} (c), one notices that the limiting case of a horizontal stripe on one side is an end point of a branch of grain boundaries. One can continue $(2,-1)$ grain boundaries in a similar fashion and finds again that the branch terminates at the horizontal grain boundaries. In fact, horizontal stripes with zigzag marginally stable wavenumber are also end points of $(1,-1)$ grain boundaries, in the limit of obtuse angle $\phi_c=\pi$. The zigzag instability corresponds to a Hamiltonian pitchfork bifurcation of the horizontal stripes in spatial dynamics \cite{haragus2007}, where heteroclinic orbits bifurcate. While we do not attempt here to analyze the heteroclinic bifurcation resulting from the interplay of this local pitchfork bifurcation and the global heteroclinic connection corresponding to the $(2,\pm1)$ grain boundary, our numerics clearly indicate that $(2,1)$- and $(2,-1)$-branches are not connected. Numerics are difficult since convergence rates in $L_x$ near the bifurcation point are slow (algebraic at $k_y=k_\mathrm{zz}/2$).
\paragraph{Pinning and $(0,1)$ grain boundaries.}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{figure21}
\caption{Continuation of vertical to slanted grain boundaries in $k_y$ (a) blue is with $\phi_-=0$ and gold is with $\phi_-=\pi$; selected wavenumber (b), energy (c), and sample plots of profiles (d). \label{f:vert_2}}
\end{figure}
Grain boundaries that are parallel to the stripes on one side of the grain boundary are somewhat special since a change of $k_y$ does not alter the orientation of the vertical stripes. We show results of our computations in Figure \ref{f:vert_2}. In particular, we did not detect any bifurcations; the selected wavenumber was the zigzag marginally stable wavenumber within numerical accuracy on both sides of the grain boundary. Interestingly, the energy is minimal for stripes perpendicular to the grain boundary. On the other hand, one finds a local minimum for small angles. Computations are somewhat delicate for both small angles and angles $\phi=\pi$ since the marginal zigzag stability induces slow decay towards stripes in these limiting cases.
\paragraph{Grain boundaries involving $q>2$ --- preferred orientations.}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figure22}
\caption{Asymmetric grain boundary profiles with decreasing energy
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt}{1}}} to \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt}{5}}}
for $q_->0$ and $q_+<0$ with $\phi_\mathrm{c}=1$, and decreasing energy
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {6}}} to \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {9}}} for $q_-,q_+>0$ with $\phi_\mathrm{c}=2.5$.
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {1}}} $(q_-,q_+)=(1,-1),k_y=0.48$,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {2}}} $(q_-,q_+)=(1,-1),k_y=0.48$ on the pitchfork branch,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {3}}} $(q_-,q_+)=(3,-2),k_y=0.19$,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {4}}} $(q_-,q_+)=(2,-1),k_y=0.31$,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {5}}} $(q_-,q_+)=(3,-1),k_y=0.23$,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {6}}} $(q_-,q_+)=(3, 1),k_y=0.26$,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {7}}} $(q_-,q_+)=(2, 1),k_y=0.33$,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {8}}} $(q_-,q_+)=(3, 2),k_y=0.33$,
\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {9}}} $(q_-,q_+)=(1,-1),k_y=0.95$.
\label{f:asym_GBs}}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\linewidth]{figure23}
\caption{Energy of grain boundaries depending on the combined angle $\phi_\mathrm{c}$, for several values of $(q_-,q_+)$. Note that the simplest symmetric $(1,-1)$ grain boundary is energetically preferred for angles $\phi_\mathrm{c}>\pi/2$. Intersection of energy curves $(0,1)$ and $(1,-1)$ occurs at $\phi_c=2.16$.}\label{f:gbe2}
\end{figure}
From a continuation and bifurcation point of view, $(3,1)$ grain boundaries behave in a completely analogous fashion to $(2,1)$ grain boundaries, as illustrated in Figure \ref{f:31GB}. The actual grain boundaries are now composed of two dislocations, conjugate by the parity $u\mapsto -u$, as becomes most apparent in the limit of small $k_y$. We computed more generally $(q_-,q_+)$ grain boundaries and show sample profiles in Figure~\ref{f:asym_GBs}.
In order to determine preferred orientations of grain boundaries, one defines and fixes a combined angle $\phi_\mathrm{c}$,
\[
\phi_\mathrm{c}=\left\{\begin{array}{ll}
\pi - (\phi_++\phi_-), & q_->0>q_+,\\
\phi_-+\phi_+, & q_-,q_+>0;
\end{array}\right.
\]
see also Figure \ref{f:gbsc}. Varying now $q_\pm$ while fixing $\phi_\mathrm{c}$, one attempts to find an orientation of the grain boundary that minimizes the energy per unit interfacial length. Grain boundaries in Figure~\ref{f:asym_GBs} are displayed with decreasing energy, top to bottom. Figure \ref{f:gbe2} shows the energy of grain boundaries depending on the combined angle, for several choices of $(q_-,q_+)$.
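For bookkeeping, the case distinction above can be wrapped in a small helper (a hypothetical utility, taking the asymptotic angles $\phi_\pm$ and the indices $q_\pm$ as inputs):

```python
import numpy as np

def combined_angle(phi_minus, phi_plus, q_minus, q_plus):
    """Combined angle phi_c of a (q_-, q_+) grain boundary,
    following the piecewise definition in the text."""
    if q_minus > 0 > q_plus:
        return np.pi - (phi_plus + phi_minus)
    if q_minus > 0 and q_plus > 0:
        return phi_minus + phi_plus
    raise ValueError("sign combination not covered by the definition")

print(combined_angle(0.4, 0.6, 1, -1))  # pi - 1.0, roughly 2.1416
print(combined_angle(0.4, 0.6, 2, 1))   # 1.0
```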
We separated the asymmetric grain boundaries into two groups: those with $q_-$ and $q_+$ of opposite sign and those with $q_-$ and $q_+$ of the same sign. In each of the columns in Figure~\ref{f:asym_GBs}, we order the grain boundaries in decreasing order of renormalized energy~(\ref{e:energy_SHr}) for the same combined angle $\phi_\mathrm{c}=1$ and $\phi_\mathrm{c}=2.5$, respectively. It is interesting to note that for the grain boundaries with $q_-$ and $q_+$ of opposite sign, the symmetric grain boundary $(q_-,q_+)=(1,-1)$ is not always the most energetically preferred.
For acute angles, the energy of grain boundaries appears to decrease with the ratio $q_-/q_+$, indicating a tendency of stripes on one side of the boundary to align with the interface. The preferred orientation is then actually the vertical $(1,0)$ grain boundary between slanted and vertical stripes. For obtuse angles, small ratios $q_-/q_+$ appear to be preferred, with the defect-free weakly bent $(1,-1)$ grain boundary having significantly less energy than other grain boundaries. We found a critical angle of $\phi_c^*=2.16$, such that $(1,0)$ grain boundaries are preferred for $\phi_c<\phi_c^*$ and $(1,-1)$ grain boundaries for $\phi_c>\phi_c^*$.
\paragraph{Grain boundaries --- lists from stacking defects.}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{figure24}
\caption{Schematic plot of building blocks for grain boundaries: (a)--(h) for asymmetric grain boundaries and (l)--(o) for symmetric grain boundaries. Transitions between building blocks (i)--(k) and (t)--(w), and bound states (p)--(s); see explanations in text.
}\label{f:schematic}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{figure25}
\caption{Building asymmetric grain boundaries from Figure \ref{f:asym_GBs} from the building blocks in Figure \ref{f:schematic}. Grain boundaries \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {3}}}--\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {5}}}, top row, and \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {6}}}--\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {8}}}, bottom row, of type $(3,-2),\,(2,-1),\,(3,-1)$ and $(3,1),\,(2,1),\,(3,2)$, respectively.
}\label{f:stack}
\end{figure}
One clearly notices in Figure \ref{f:asym_GBs} that, particularly for acute angles, grain boundaries can be interpreted as composed of defects such as dislocations, or convex and concave disclinations. In Figure \ref{f:schematic}, we list the basic building blocks. Because of the parity symmetry $u\mapsto -u$, all building blocks come in two versions, $+$ and $-$. The left column shows, beyond the simple ``bend'' $(\frac{1}{2},-\frac{1}{2})^+$ (half a vertical period of an obtuse $(1,-1)$ grain boundary), dislocations and bent dislocations as bound states of disclinations; the second column shows the parity-conjugates. The third column shows a continuous deformation between straight and bent dislocations, as would arise in a transition from $(2,-1)$ to $(2,1)$ grain boundaries. Note however that the bifurcation diagrams in Figures \ref{f:21GB} and \ref{f:31GB} show that these transitions do not actually occur along a branch of grain boundaries. The fourth column shows the elementary disclinations, concave (V) and convex (X) in both parities. The last two columns show bound states and transitions, most notably dislocations as convex-concave bound state $V^+X^-$ and $V^-X^+$, and the annihilation of a convex-concave disclination pair as observed along the parity-shift symmetry broken branch of $(1,-2)$ grain boundaries.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{figure26}
\caption{Anomalous grain boundaries of type $(2,-2)$ and $(4,-2)$ which are \emph{not} simply doubled grain boundaries of type $(1,-1)$ and $(2,-1)$, respectively, obtained from direct simulations of the initial-value problem in a doubly periodic box; $k_y=0.1$, $x\in (-10\pi,10\pi)$, shown is $x\in [-30,30]$.
}\label{f:4222}
\end{figure}
Using the building blocks from Figure \ref{f:schematic}, we can in fact construct formally all grain boundaries computed in Figure \ref{f:asym_GBs} through simple ``stacking''; see Figure \ref{f:stack}.
Similarly, one can construct acute $(1,-1)$ grain boundaries as $V^+X^-|V^-X^+$ stacks and the parity-shift broken branch as $V^+|(\frac{1}{2},-\frac{1}{2})^-|X^+$.
One can now easily predict a variety of ``new'' grain boundaries, such as a $(4,-2)$-grain boundary obtained from stacking $(1\frac{1}{2},-\frac{1}{2})^+| (1\frac{1}{2},-\frac{1}{2})^-|(\frac{1}{2},-\frac{1}{2})^+| (\frac{1}{2},-\frac{1}{2})^-$, or a $(2,-2)$-grain boundary obtained by stacking $V^+|(\frac{1}{2},-\frac{1}{2})^-|X^+|V^+X^-|V^-X^+$. Such grain boundaries can indeed be observed as stable interfaces as demonstrated in Figure \ref{f:4222}.
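The arithmetic behind such stacking predictions can be made explicit for the bend blocks: assign a block $(a,b)^\pm$ the index pair $(a,b)$ and sum over the stack; the totals give the $(q_-,q_+)$-type. A formal sketch (bend blocks only; the disclinations $V$, $X$ carry no bend indices in this bookkeeping):

```python
from fractions import Fraction as F

def stack_type(blocks):
    """(q_-, q_+)-type of a stacked grain boundary: componentwise sum
    of the index pairs of its bend building blocks."""
    return (sum(b[0] for b in blocks), sum(b[1] for b in blocks))

# (1 1/2, -1/2)^+ | (1 1/2, -1/2)^- | (1/2, -1/2)^+ | (1/2, -1/2)^-
stack = [(F(3, 2), F(-1, 2)), (F(3, 2), F(-1, 2)),
         (F(1, 2), F(-1, 2)), (F(1, 2), F(-1, 2))]
print(stack_type(stack))  # sums to q_- = 4, q_+ = -2
```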
\paragraph{Energetically preferred grain boundaries.}
Given our results above, one can anticipate energetically preferred shapes of grain boundaries for a given angle. For acute-angle grain boundaries, combined angle $\phi_\mathrm{c}<\pi/2$, Figure \ref{f:gbe2} suggests that grain boundaries where stripes on one side are parallel to the interface are energetically preferred. Otherwise, the weakly bent symmetric $(1,-1)$ grain boundary appears to be preferred. One can rationalize this effect by observing the defects generated at the boundary. The $(1,0)$ grain boundaries can be composed of an $X^-|V^+$ sequence in each period, whereas a $(3,-1)$ grain boundary, say, is composed of a $(1\frac{1}{2},-\frac{1}{2})^+|(1\frac{1}{2},-\frac{1}{2})^-$ sequence, which in turn consists of 4 disclinations $V^+|X^-|V^-|X^+$, suggesting that the larger number of defects leads to a higher interfacial energy. We emphasize, however, that a simple count of ``defects/unit length'' will generally not give a sharp criterion. For instance, the energy of grain boundaries per unit length increases as the combined angle becomes more acute, although the vertical period increases and hence the number of defects per unit length decreases. One factor here certainly is the fact that disclinations are strongly deformed from their ideal shape as an isolated point defect when angles are acute.
Given two grain orientations and an interface orientation, $\phi_\mathrm{c}<\pi/2$, it is then conceivable that grain boundaries are built from piecewise straight grain boundaries that are parallel to either left or right stripes, interspacing both orientations of the grain boundary such that the resulting prescribed angle is achieved. We suspect that pinning effects in the interaction between defects that build the grain boundary will prevent coarsening of these piecewise straight segments of grain boundaries.
For small differences in the grain orientation, obtuse angles, our results confirm the suspicion that defect-free bending is the energetically preferred mode of accommodating the orientation mismatch.
Finally, Figure \ref{f:vert_2} suggests that out of all the grain boundaries with stripes parallel to the interface, grain boundaries with angle $\pi/2$ are preferred. It is however not clear how, starting with random patches of grain orientations, configurations with only such grain boundaries could emerge.
\subsection{Other grain boundaries}\label{s:other}
We think of the examples shown here as the ``simplest'' grain boundaries for given angles $\phi_\pm$. As we noticed when ``stacking'' defects, grain boundaries can often be thought of as composed of simpler defects such that interaction dynamics are in equilibrium. It appears that these interaction dynamics allow for a multitude of pinning effects, and our goal here is to illustrate some of the more obvious examples.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figure27}
\caption{Selected wavenumber and energy of symmetric grain boundaries with a core of vertical stripes (top). Sample plots are shown below, \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {1}}}--\protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {2}}}. The branch appears to terminate at $\phi_\mathrm{c}\sim 2.14$ at a bifurcation point (eigenfunction shown in \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {3}}}). Also shown in \protect\raisebox{.5pt}{\textcircled{\protect\raisebox{-.9pt} {4}}} is a sample plot of a $(1,1)$ ``grain boundary'' with vertical core.
}\label{f:vcore}
\end{figure}
\paragraph{Symmetric grain boundaries with vertical stripe core.}
In the simplest case of weak bending, when grain boundaries can be described through a phase modulation equation near the zigzag instability, namely the Cahn-Hilliard equation, stationary profiles include, in addition to the ``heteroclinic'' grain boundaries, also homoclinic orbits, which can be viewed as concatenations of two heteroclinic orbits. Those ``step''-like double-knees are unstable in the Cahn-Hilliard modulation approximation due to possible coarsening. More importantly, they do not connect the energy-minimizing marginally zigzag stable stripes.
In Figure \ref{f:vcore}, we show $(1,-1)$ grain boundaries that contain a core of vertical stripes. They can be thought of as concatenations of $(1,0)$ and $(0,-1)$ grain boundaries, respectively. It is interesting to notice that again the asymptotic wavenumber does not appear to depend on the angle and is, within numerical accuracy, the zigzag marginally stable one. The energy is however larger than the energy of the simpler $(1,-1)$ grain boundaries that we computed before. Curiously, these grain boundaries do not appear to continue to the weak bending regime. We also note that one might expect the interaction between grain boundaries to be exponentially decaying in the distance, such that any deviations from the zigzag wavenumber caused by the interaction could be beyond numerical resolution. We also computed $(1,1)$ grain boundaries resulting from a concatenation of $(1,0)$ and $(0,1)$ grain boundaries, resulting in a homoclinic type solution, with equal grain orientation on both sides.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figure28}
\caption{Snaking symmetric grain boundaries for a modified quadratic-cubic nonlinearity, $f(u) = u^2 - u^3$; $k_y=0.85$.}\label{f:hex}
\end{figure}
\paragraph{Snaking grain boundaries.}
One would expect stronger pinning effects when hexagonal spot patterns are involved. We confirm this in a continuation computation involving a quadratic-cubic nonlinearity, which allows for hexagonal patterns. The results of a sample computation are shown in Figure \ref{f:hex}. For small $\mu$, hexagons nucleate at the tip of the ``knee'', where stripes are most unstable, and cause a saddle-node bifurcation. We expect there to be a plethora of grain boundaries for small $\mu$. We note that grain boundaries involving hexagonal patterns have also been discussed in \cite{malomed1990}. Since the analysis there was performed within the amplitude equation framework, snaking and pinning aspects, in particular for moderate values of $\mu$, were not analyzed.
\section{Discussion}\label{s:6}
We presented a robust framework for the computation of grain boundaries from a path-following perspective and explored grain boundaries in the Swift-Hohenberg equation within this framework. The path-following perspective shows that complexity of grain boundaries increases for acute angles, with the creation of defects and pinning effects in their interaction. For obtuse angles, weak, defect-free, reflection-symmetric bending is the energetically preferred interface structure. For acute angles, disclinations and dislocations form at interfaces and create a wealth of structures. Asymmetric grain boundaries with stripes parallel to the grain boundary on one side appear to be energetically preferred in this case and we anticipate zigzag patterns in the shape of the actual grain boundary interface for minimum energy interfaces. While we outline a system for cataloging such grain boundaries, we did not attempt an exhaustive description. In all regimes, our analysis raises a number of interesting questions, both from an analytic and a computational perspective.
\paragraph{Stability.}
Our stability analysis here is somewhat rudimentary, relying on a simple eigenvalue computation in the bounded domain. Since the essential spectrum touches the imaginary axis, it would be more appropriate to track eigenvalues using some variant of the Evans function and its extension into the essential spectrum, allowing for a precise tracking of eigenvalues near the origin and how they turn into resonance poles upon crossing into the essential spectrum. We expect that such a computation could be accomplished using a decomposition quite analogous to our far-field-core decomposition here; see for instance the analysis in \cite{poganscheel,fayescheel,ssmorse}. Analytically, one may wish to start studying stability of small-amplitude grain boundaries, at least up to the neutral eigenvalue corresponding to phase matching and non-adiabatic effects beyond the normal form, possibly first on the spectral level, preparing for a nonlinear stability proof.
Since the assumption on $y$-periodicity is technical, one would also want to study stability with $y\in\mathbb{R}$. On the spectral level, this would add a Bloch-wave parameter $\sigma$ accounting for $y$-modulations. Even theoretically, it is not clear how zeros of the Evans function would behave for long-wavelength modulations, $\sigma\sim 0$; in particular, whether one can associate a bending stiffness $d$, with $\lambda\sim -d\sigma^2$, to grain boundaries.
\paragraph{Bifurcations.}
Our computations point to a number of interesting bifurcations. We mention here in particular bifurcations involving horizontal stripes, which, due to the marginal zigzag instability are neutrally stable with a length 4 Jordan block at the origin, bifurcations at the core such as the parity-shift breaking and the snaking bifurcations, as well as gluing bifurcations that produce grain boundaries with striped cores as in Figure \ref{f:vcore}. It would also be interesting to explore the effect of non-variational terms, such as the shift of the wavenumber away from marginal zigzag stability. Phenomenologically, it would be interesting to find parameter regions where grain boundaries select zigzag unstable stripes, thus inducing cascades of bending analogous to the cascades of spiral waves generated by far-field breakup instabilities \cite{ssbreakup}.
\paragraph{Point defects.}
In light of the complexity of bifurcation diagrams for $(p,q)$ grain boundaries when $p,q$ are not necessarily small, and for acute angles, one would wish to describe grain boundaries in terms of point defects and their interaction properties. One therefore would like to implement analogous far-field-core decompositions for point defects. Preliminary theoretical results in this direction have been obtained in \cite{jaramillo13,jsw16}, providing in particular an implicit function theorem near stripes in the presence of localized inhomogeneities and hinting at systematic multipole approximations for the phase in the far-field.
\paragraph{Beyond stripes and Swift-Hohenberg.}
While our computations address only one specific parameter value in one specific equation, one could hope for some wider-ranging implications. For this, it would be interesting to perform more extensive comparisons with modifications of Swift-Hohenberg, including parity-breaking and non-variational terms. Beyond the prototypical Swift-Hohenberg equation, one could investigate systems with striped phases other than Swift-Hohenberg, such as polymer, phase separation, or reaction-diffusion systems \cite{gbpolymer,phasefield,turinggb}.
On the other hand, grain boundaries have been extensively studied in nonlinear elasticity material models, although commonly not from the point of view taken here, using path following and idealizations to infinite domains, and resolving the fine crystalline structure. As pointed out above, one would expect pinning effects to be more complex in hexagonal lattices, leading to complicated snaking phase diagrams; see for example~\cite{sutton1995} for a general exposition on the interfaces in crystalline matter.
In this context, but also in the Swift-Hohenberg example, it would be interesting to study the response of grain boundaries to inhomogeneities and external forces. Localized inhomogeneities or impurities could be readily incorporated into our framework as $(x,y)$-dependent forcing terms, which would break translational symmetry. As a result, one would not expect Goldstone modes in the kernel and cokernel, changing the parameter counts from Section \ref{s:1}. Even in the absence of grain boundaries, inhomogeneities induce phase shifts and, in some cases, wavenumber shifts \cite{jsw16}. One might expect such effects also for grain boundaries, at least in the weak bending regime of $(1,-1)$ grain boundaries. Similarly, external forces could be modelled as boundary conditions at finite $x=\pm L$, that select wavenumbers different from the energy minimizing $k_\mathrm{zz}$ \cite{morrissey2015}. In the absence of grain boundaries, incompatible imposed strains induce a drift of stripes in the direction of the gradient of the strain, as can be seen in the scalar phase diffusion equation
\[
\Theta_t=\Theta_{xx},\ x\in(-L,L),\qquad \Theta_x\big|_{x=\pm L}=\pm 1,\qquad \Theta(t,x)=(2L)^{-1}\left(x^2 +2t\right).
\]
One may expect such an induced drift for grain boundaries, at least in the weak bending regime, since grain boundaries themselves can be thought of as strain (or wavenumber) selecting, imposing thus an effective, possibly incompatible Neumann boundary condition at $x=0$.
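The induced drift can be checked with a minimal conservative finite-difference discretization of the phase-diffusion problem above (a sketch; grid and time-step parameters are arbitrary): the incompatible Neumann data pump phase into the domain at rate $1/L$, so the mean phase, and hence the stripe position, drifts linearly in time.

```python
import numpy as np

L, N = 2.0, 200
dx = 2 * L / N
theta = np.zeros(N)                 # cell-centered phase, initially flat
dt = 0.4 * dx**2                    # explicit Euler, stable for dt < dx^2 / 2
steps = 2000

for _ in range(steps):
    flux = np.empty(N + 1)          # theta_x at the cell interfaces
    flux[1:-1] = np.diff(theta) / dx
    flux[0], flux[-1] = -1.0, 1.0   # incompatible strains theta_x(+-L) = +-1
    theta += dt * np.diff(flux) / dx

t = steps * dt
print(theta.mean(), t / L)          # mean phase grows at the exact rate 1/L
```

The flux form telescopes exactly: the mean of $\Theta$ gains $\Delta t\,(\Theta_x|_{L}-\Theta_x|_{-L})/(2L)=\Delta t/L$ per step, independently of the grid.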
\section{Appendix}
We describe in more technical detail the far-field-core decomposition that is key to our numerical continuation procedure. We start by assuming that there exists a grain boundary $u_*$ with asymptotic stripes with wavenumbers $k^\pm$.
\subsection{The linearization at a grain boundary --- Fredholm properties}
Linearizing (\ref{e:gbr}) at a grain boundary, we obtain the elliptic operator
\[
\mathcal{L}_* u=-(\partial_x^2+k_y^2\partial_y^2+1)^2 u + \mu u - 3u_*^2u, \qquad (x,y)\in \mathbb{R}\times (0,2\pi),
\]
equipped with periodic boundary conditions in $y$. We typically think of $\mathcal{L}_*$ as a closed, densely defined operator on $L^2$, with domain $H^4$. Using Weyl sequence arguments, one readily finds that $\mathcal{L}_*$ is not Fredholm due to the presence of non-localized elements of the kernel, $\partial_x u_*$ and $\partial_y u_*$.
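Indeed, these kernel elements stem from translation invariance: differentiating the stationary equation $-(\partial_x^2+k_y^2\partial_y^2+1)^2u_*+\mu u_*-u_*^3=0$ with respect to $x$ gives
\[
\mathcal{L}_*\,\partial_x u_* = -(\partial_x^2+k_y^2\partial_y^2+1)^2\partial_x u_* + \mu\,\partial_x u_* - 3u_*^2\,\partial_x u_* = 0,
\]
and similarly $\mathcal{L}_*\,\partial_y u_*=0$. Since $u_*$ converges to stripes in the far field, neither derivative decays as $|x|\to\infty$.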
The presence of continuous spectrum of $\mathcal{L}_*$ at $\lambda=0$ reflects the numerically observed slow, diffusive adjustment of stripes to perturbations. Focusing on the simple coherent structure rather than the plethora of dynamics nearby, we choose exponential weights that allow us to directly separate far-field behavior from the core of the grain boundary. Consider therefore exponentially weighted spaces $H^k_\eta(\mathbb{R}\times (0,2\pi))$, with norm
\[
\|u(x,y)\|_{H^k_\eta}=\|u(x,y)\mathrm{e}^{\eta \langle x\rangle}\|_{H^k},
\]
where $\langle x\rangle=\sqrt{x^2+1}$.
We will see next that, typically, $\mathcal{L}_*$ is Fredholm on $L^2_\eta$ for $\eta>0$, sufficiently small. To this end, define the asymptotic operators $\mathcal{L}_\pm$, where $u_*$ in the definition of $\mathcal{L}_*$ is replaced by $u_\pm(x,y):=u_\mathrm{s}^\pm(k_x^\pm x+k_y^\pm y;k^\pm)$. Also, define the exponentially weighted spaces $H^k_{\eta,>}$ via
\[
\|u(x,y)\|_{H^k_{\eta,>}}=\|u(x,y)\mathrm{e}^{\eta x}\|_{H^k}.
\]
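Note the difference between the two families of weights: for $\eta>0$, the two-sided weight $\mathrm{e}^{\eta\langle x\rangle}$ enforces exponential decay as $x\to\pm\infty$, while the one-sided weight $\mathrm{e}^{\eta x}$ enforces decay only as $x\to+\infty$ and permits exponential growth as $x\to-\infty$, as one sees from
\[
\|u\|_{L^2_{\eta,>}}^2=\int \mathrm{e}^{2\eta x}|u(x,y)|^2\,\mathrm{d} x\,\mathrm{d} y,
\qquad
\|u\|_{L^2_{\eta}}^2=\int \mathrm{e}^{2\eta \langle x\rangle}|u(x,y)|^2\,\mathrm{d} x\,\mathrm{d} y.
\]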
\begin{Proposition}
The operator $\mathcal{L}_*$ is Fredholm on $L^2_\eta$ if and only if $\mathcal{L}_\pm$ are invertible in $L^2_{\pm\eta,>}$.
\end{Proposition}
\begin{Proof}
The proof is a direct application of the closed range lemma; see for instance \cite{robbin1995}.
\end{Proof}
In order to better understand the operators $\mathcal{L}_\pm$, we introduce Bloch waves. Consider
\[
\hat{\mathcal{L}}_\pm(\nu)=-(\partial_y^2+(\partial_x+\nu)^2+1)^2+\mu-3(u_\mathrm{s}^\pm)^2,
\]
with periodic boundary conditions on $(0,2\pi/k_x)\times (0,2\pi/k_y)$. Classical Bloch wave theory \cite{reed1980} says that
\[
\mathrm{spec}_{L^2_{\eta,>}}\,(\mathcal{L}_\pm)=\bigcup_{\nu\in -\eta+\mathrm{i}[0,k_x)}\mathrm{spec}_{L^2_\mathrm{per}}\,(\hat{\mathcal{L}}_\pm(\nu)).
\]
In order to understand the spectrum of $\hat{\mathcal{L}}_\pm(\nu)$, we relate it to the linearization at a ``straight'', non-rotated stripe, $u_\mathrm{s}(kx;k)$.
Consider therefore the Floquet-Bloch linearization
\[
\hat{\mathcal{L}}(\nu_x,\nu_y)=-(\nu_y^2+(\partial_x+\nu_x)^2+1)^2+\mu-3u_\mathrm{s}(k x;k)^2.
\]
Within the stability region $k\in(k_\mathrm{zz},k_\mathrm{eck})$, and for $\mathop{\mathrm{Re}}\nu_x,\mathop{\mathrm{Re}}\nu_y$ small, the spectrum of $\hat{\mathcal{L}}(\nu_x,\nu_y)$ is strictly negative, bounded away from the origin, except for a simple eigenvalue $\lambda$ close to the origin when $\nu_x,\nu_y\sim 0$, with expansions
\[
\lambda(\nu_x,\nu_y)=d_\parallel \nu_x^2+d_\perp \nu_y^2+\mathrm{O}(|\nu_x|^4+|\nu_y|^4),
\]
with positive constants $d_\parallel$ and $d_\perp$. At $k=k_\mathrm{zz}$, we find
\[
\lambda(\nu_x,\nu_y)=d_\parallel \nu_x^2-d_\perp \nu_y^4+\mathrm{O}(|\nu_x|^4+|\nu_y|^6);
\]
see for instance \cite{cross1993,mielke1997}.
\begin{Lemma}
For $k\in [k_\mathrm{zz},k_\mathrm{eck}]$ and arbitrary $k_x<k$, the operators $\mathcal{L}_\pm$ are invertible in $L^2_{\pm\eta,>}$ for $\eta>0$, sufficiently small.
\end{Lemma}
\begin{Proof}
Using Fourier-Bloch decomposition,
\[
u(x,y)=\sum_{m\in\mathbb{Z}} u_m(k_x x + k_y y;k)\mathrm{e}^{\mathrm{i} m k_y y},
\]
the operator $\mathcal{L}_\pm$ diagonalizes over $m$, with diagonal entries
\[
\hat{\mathcal{L}}_\pm(\nu)=-\left((k_y\partial_\xi+\mathrm{i} m k_y)^2+(k_x\partial_\xi+\nu)^2+1\right)^2+\mu-3(u_\mathrm{s}^\pm)^2,
\]
with $2\pi$-periodic boundary conditions in $\xi$. These operators are equal to $\hat{\mathcal{L}}(\nu_x,\nu_y)$ when choosing
\begin{align*}
\nu_x&=(\nu k_x + \mathrm{i} m k_y^2)/k,\\
\nu_y^2&=\nu^2-m^2k_y^2-\nu_x^2,
\end{align*}
where, of course, $\nu_x,\nu_y$ may be complex. We see that for $\nu$ small and $m\neq 0$, $\nu_x$ is not small so that the critical eigenvalue $\lambda(\nu_x,\nu_y)$ does not vanish. For $m=0$,
\[
\nu_x^2=\nu^2\frac{k_x^2}{k_x^2+k_y^2},\qquad
\nu_y^2=\nu^2\frac{k_y^2}{k_x^2+k_y^2},
\]
so that
\begin{equation}\label{e:qu}
\lambda\sim \nu^2
\end{equation}
for small $\nu$ within the stability region.
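Explicitly, inserting these expressions for $\nu_x^2$ and $\nu_y^2$ into the quadratic expansion of $\lambda$ gives, for $k$ strictly inside the stability region,
\[
\lambda(\nu)=\frac{d_\parallel k_x^2+d_\perp k_y^2}{k_x^2+k_y^2}\,\nu^2+\mathrm{O}(|\nu|^4),
\]
with a strictly positive coefficient, which yields \eqref{e:qu}.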
\end{Proof}
We remark that the assumptions in the lemma are not sharp. Invertibility follows whenever the rotated stripes are marginally stable with respect to $y$-periodic boundary conditions. We next proceed to determine the Fredholm index of the linearization.
\begin{Lemma}
For $\eta>0$ sufficiently small, the Fredholm index of the linearization at a grain boundary is $-2$ in $L^2_\eta$ and it is $+2$ in $L^2_{-\eta}$, provided that the asymptotic stripes are marginally stable ($k_x\neq 0$ for $k=k_\mathrm{zz}$, and $k_y\neq 0$ for $k=k_\mathrm{eck}$).
\end{Lemma}
\begin{Proof}
Suppose first that $\eta>0$. The Fredholm index can be computed by counting the signed crossings of multipliers through the origin during a homotopy from $\hat{\mathcal{L}}_+(\nu)$ to $\hat{\mathcal{L}}_-(-\nu)$. From the preceding Lemma, we can homotope between $\hat{\mathcal{L}}_+(\nu)$ and $\hat{\mathcal{L}}_-(\nu)$ without crossings. Homotoping from $\nu$ to $-\nu$, we see precisely the double zero multiplier from \eqref{e:qu} cross the origin, which readily gives the desired result on the Fredholm index. Since the linearization $\mathcal{L}_*$ is self-adjoint in $L^2$, $\mathcal{L}_*$ is Fredholm of index 2 in $L^2_{-\eta}$ for $\eta>0$, small.
\end{Proof}
Similar results have been proven in \cite{sandstede2004}, where spatial dynamics rather than spectral flow arguments were employed. Also, the discussion there centers around the case of stripes with non-vanishing group velocities, when the dispersion relation \eqref{e:qu} contains a linear term in $\nu$ and crossings are simple. The present case is most similar to the case of contact defects discussed there.
\subsection{Transverse grain boundaries}
The translational modes $\partial_xu_*$ and $\partial_y u_*$ span a two-dimensional subspace of the kernel of $\mathcal{L}_*$ in $L^2_{-\eta}$, $\eta>0$ (note that both are linearly independent since otherwise the grain boundary would be a one-dimensional pattern, consisting of a simple stripe). On the other hand, since asymptotic wave vectors $\underline{k}^\pm$ are different, we cannot find a linear combination of $\partial_xu_*$ and $\partial_y u_*$ that is exponentially localized.
\begin{Hypothesis}[Transverse GB]\label{h:tgb}
Assume that the kernel of $\mathcal{L}_*$ in $L^2_{-\eta}$, $\eta>0$ sufficiently small, is two-dimensional.
\end{Hypothesis}
Note that, as a consequence, the kernel of $\mathcal{L}_*$ in $L^2_\eta$, $\eta>0$, is trivial, and the cokernel is spanned by $\partial_xu_*$ and $\partial_y u_*$.
We emphasize that the grain boundaries found in the truncated normal form near onset, $\mu\sim 0$, are not transverse: the additional normal form symmetry leads to an element in the kernel of $\mathcal{L}_*$ in $L^2_{\eta}$. In the numerical computations presented here, we found that this hypothesis is typically satisfied, as one might expect.
\subsection{Far-field matching and robustness}
Since the linearization in exponentially weighted spaces is Fredholm, we would like to employ the implicit function theorem in order to continue grain boundaries in parameters. For negative weights, the linearization is onto but the nonlinearity is not defined. For positive weights, the negative index indicates that additional free variables are necessary to solve. These are naturally given through wavenumbers and phase shifts in the far field. We exploit those with an ansatz
\[
u(x,y)=w(x,y)+\chi_+(x)u_\mathrm{s}^+(x,y)+\chi_-(x)u_\mathrm{s}^-(x,y),\quad u_\mathrm{s}^\pm(x,y)=
u_\mathrm{s}(k^\pm_xx+q_\pm y+\varphi^\pm;k^\pm),
\]
where, the smooth cut-off functions $\chi_\pm$ satisfy
\[
\chi_\pm(x)=1,\ \pm x>d+1,\qquad
\chi_\pm(x)=0,\ \pm x<d,
\]
for some (arbitrary) $d>0$. Here, $w\in H^4_\eta$, $k^\pm_x$, and $\varphi^\pm$ are free variables. The asymptotic wavenumbers $k^\pm$ satisfy
\[
(k^\pm)^2=(k_x^\pm)^2+(q_\pm k_y)^2,
\]
where $k_y$ is a free parameter and $q_\pm\in\mathbb{Z}$ are fixed integers.
Substituting this ansatz into the stationary Swift-Hohenberg equation gives
\begin{equation}\label{e:shm0}
\mathcal{L}\left(w+\sum_\pm \chi_\pm u_\mathrm{s}^\pm\right)-\left(w+\sum_\pm \chi_\pm u_\mathrm{s}^\pm\right)^3=0,
\end{equation}
or, equivalently, after subtracting the equations for $u_\mathrm{s}^\pm$,
\begin{equation}\label{e:shm}
\mathcal{L}w+\sum_\pm\left[\mathcal{L},\chi_\pm\right]u_\mathrm{s}^\pm
- \left[\left(w+\sum_\pm \chi_\pm u_\mathrm{s}^\pm\right)^3-\left(\sum_\pm \chi_\pm u_\mathrm{s}^\pm\right)^3\right]
+\left[\sum_\pm\chi_\pm \left(u_\mathrm{s}^\pm\right)^3-\left(\sum_\pm\chi_\pm u_\mathrm{s}^\pm\right)^3 \right]=0,
\end{equation}
where
\[
\mathcal{L}=-(\partial_x^2+k_y^2\partial_y^2+1)^2+\mu,\quad\mbox{and}\quad [A,B]u=A(Bu)-B(Au).
\]
We consider the left-hand side of \eqref{e:shm} as a (locally defined) nonlinear operator
\[
F_w:H^4_\eta \times \mathbb{R}^5\to L^2_\eta, \quad (w,k_x^-,k_x^+,k_y,\varphi^-,\varphi^+)\mapsto F_w(w,k_x^-,k_x^+,k_y,\varphi^-,\varphi^+).
\]
Note that $F_w$ is well defined since terms not involving $w$ are given by commutators between cut-off and differential operators and nonlinearities, hence compactly supported, as one can easily check by setting $\chi_+=1,\chi_-=0$ in \eqref{e:shm}.
Moreover, $F_w$ is readily seen to be a smooth function and the derivative with respect to $w$ at a grain boundary $u_*=w+\sum_\pm\chi_\pm u_\mathrm{s}^\pm$ is the linearization we discussed before,
\[
\partial_wF_w=\mathcal{L}_*,
\]
so that $DF_w=(\partial_wF_w,\partial_{k_x^\pm,k_y,\varphi^\pm}F_w)$ is Fredholm of index 3 by Fredholm bordering theory.
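To make the index count explicit: bordering a Fredholm operator with $n$ additional scalar variables raises the Fredholm index by $n$, so that here
\[
\mathrm{ind}\,DF_w=\mathrm{ind}\,\partial_wF_w+5=-2+5=3.
\]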
\begin{Lemma}\label{l:onto}
The linearization $D_{w,k_x^\pm}F_w$ at a transverse grain boundary is invertible. The derivatives $D_{\varphi^\pm}F_w$ belong to the range of $D_wF_w$.
\end{Lemma}
\begin{Proof}
First notice that solutions of $F_w=0$ come in families induced by translations in $x,y$,
\begin{align*}
u_*(x+\tau_x,y+\tau_y)&=\tilde{w}(x,y)+\sum_\pm\chi_\pm(x)u_\mathrm{s}^\pm(k^\pm_x x + k^\pm_y y + \tilde{\varphi}^\pm),\\
\tilde{w}(x,y)&=w(x+\tau_x,y+\tau_y)+\sum_\pm \left(\chi_\pm(x+\tau_x)-\chi_\pm(x)\right)u_\mathrm{s}^\pm(k^\pm_x x + k^\pm_y y+\tilde{\varphi}^\pm),\\
\tilde{\varphi}^\pm&=\varphi^\pm+k_x^\pm\tau_x+k_y^\pm\tau_y.
\end{align*}
Differentiating with respect to $\tau_{x/y}$ gives
\[
(w,k_x^-,k_x^+,\varphi^-,\varphi^+)=(\partial_x w+\sum_\pm\chi_\pm'u_\mathrm{s}^\pm,0,0,k_x^-,k_x^+),\qquad
(w,k_x^-,k_x^+,\varphi^-,\varphi^+)=(\partial_y w,0,0,k_y^-,k_y^+).
\]
As a consequence, using that $(k_x^+,k_y^+)$ and $(k_x^-,k_y^-)$ are linearly independent, we find that $\partial_{\varphi^\pm}F_w\in\mathrm{Rg}\partial_wF_w$.
We proceed to show that $\partial_{k_x^\pm}F_w$ span the cokernel of $\partial_wF_w$. Suppose this were not the case. Then there would exist $(\alpha_+,\alpha_-)\neq 0$ so that $\sum_\pm \alpha_\pm\partial_{k_x^\pm}F_w=\mathcal{L}_*w_0$, for some $w_0\in H^4_\eta$. Since $\partial_{k_x^\pm}F_w=\mathcal{L}_*w_k^\pm$, with $w_k^\pm=\chi_\pm\frac{\mathrm{d}}{\mathrm{d} k_x^\pm}u_\mathrm{s}^\pm$, we conclude that
$\mathcal{L}_*(w_0+\sum_\pm\alpha_\pm w_k^\pm)=0$. However, $\sum_\pm\alpha_\pm w_k^\pm\in H^4_{-\eta}$, with linear growth, and $\sum_\pm\alpha_\pm w_k^\pm\not\in H^4_{\eta}$ for $(\alpha_+,\alpha_-)\neq 0$, since the supports of $w_k^\pm$ are disjoint, so that $w_0+\sum_\pm\alpha_\pm w_k^\pm\neq 0$. We have thus found an element in the kernel of $\mathcal{L}_*$ in $H^4_{-\eta}$, in contradiction to the transversality assumption.
\end{Proof}
As an immediate consequence, we can use the implicit function theorem to conclude robustness of grain boundaries.
\begin{Corollary}\label{c:t}
Transverse grain boundaries come in families, parameterized by $k_y$. Such families persist under small perturbations of the equation, such as variations of the parameter $\mu$.
\end{Corollary}
The choice of $k_y$ as parameter and $k_x^\pm$ as variables is somewhat arbitrary. Modifying Lemma \ref{l:onto}, one could also choose combinations such as $k_y,k_x^+$ as variables.
\begin{Remark}[Symmetric grain boundaries]\label{r:s}
Analytic existence results are only available for symmetric grain boundaries, $u_*(x,y)=u_*(-x,y)$ or $u_*(x,y)=-u_*(-x,y)$, where of course $q_\pm=\pm 1$. In this case, one can restrict to even (or odd) functions, $H^4_\mathrm{even}$, and find that $\mathcal{L}_*$ is Fredholm of index $\pm 1$ in $H^4_{\mp \eta}$. Using $k_x^+=k_x^-=:k_x$ as an additional variable, we can then continue transverse grain boundaries in $k_y$. Alternatively, one can write $k_x=mk_y$ and consider the grain orientation $m$ as parameter and the wavenumber $k=\sqrt{1+m^2}k_y$ as variable. Again, following the proof of Lemma \ref{l:onto}, one finds that the linearization is onto, so that transverse grain boundaries can be characterized by a selected wavenumber as a function of the grain orientation, $k=k(m)$.
\end{Remark}
\subsection{Approximating grain boundaries in finite intervals --- theory}
\label{s:3}
The previous considerations promote a view of an isolated grain boundary as a coherent structure in an idealized infinite system. In particular, one would like to compute grain boundaries and transmission relations such as $(k_x^+,k_x^-)$ as functions of $k_y$ as in Corollary \ref{c:t}, or selected wavenumbers $k(m)$ as a function of grain orientation as in Remark \ref{r:s}. Our point of view is to assume the existence of a grain boundary and compute with error bounds in finite domains $\Omega_{L_x}=(-L_x,L_x)\times (0,2\pi)$. We will therefore construct a suitable approximation to $F_w$ in such finite domains.
Consider therefore
\begin{align}\label{e:shmL}
F_w^{L_x}&(w,k_x^\pm,k_y,\varphi^\pm)\nonumber\\
=&\mathcal{L}w+\sum_\pm\left[\mathcal{L},\chi_\pm\right]u_\mathrm{s}^\pm
- \left[\left(w+\sum_\pm \chi_\pm u_\mathrm{s}^\pm\right)^3-\left(\sum_\pm \chi_\pm u_\mathrm{s}^\pm\right)^3\right]
+\left[\sum_\pm\chi_\pm \left(u_\mathrm{s}^\pm\right)^3-\left(\sum_\pm\chi_\pm u_\mathrm{s}^\pm\right)^3 \right],
\end{align}
on $w\in H^4(\Omega_{L_x})$, with periodic boundary conditions in $y$ and boundary conditions at $x=\pm L_x$ to be specified later. With standard elliptic boundary conditions, say Dirichlet $w=w_{xx}=0$ at $x=\pm L_x$, the linearization $\partial_wF_w^{L_x}$ is a Fredholm operator of index 0 by standard elliptic regularity. This linear operator is, however, very ill-conditioned, with norms of the inverse growing at least as fast as $L_x^2$; see for instance \cite{ssabs} and our discussion in Section \ref{s:1}. The viewpoint introduced in the preceding section will prove more effective, also in the setting of finite but large domains.
Consider therefore boundary conditions $\mathcal{B}_\pm(w,w_x,w_{xx},w_{xxx})=0$ at $x=\pm L_x$, for all $y\in [0,2\pi)$, such that $-\Delta^2$ is Fredholm of index 0 when equipped with periodic boundary conditions in $y$ and $\mathcal{B}_\pm$. Examples are Dirichlet, Neumann, or mixed boundary conditions for the Laplacian, or clamped ($w=w_x=0$) boundary conditions. In addition, consider phase conditions
\begin{equation}\label{e:phc}
p_\pm w=\int_{x_\pm}^{x_\pm+2\pi/k_x^\pm} \psi_\pm(x)w(x)\mathrm{d} x,
\end{equation}
with suitable choice of $\psi_\pm$ and $x_\pm$. Then, by simple Fredholm bordering and perturbation theory, the linear operator $(\partial_{w,k_x^\pm}F_w^{L_x},p_\pm w):H^4_\mathrm{bc}\times \mathbb{R}^2\to L^2\times \mathbb{R}^2$ is Fredholm of index 0. The next hypothesis is necessary for stability and convergence of the decomposition.
\begin{Hypothesis}[Transverse boundary conditions]\label{h:bc}
We assume that the linearization at the asymptotic stripe pattern $\mathcal{L}_+$, posed on $(-\infty,x_*)\times (0,2\pi)$, equipped with boundary conditions $\mathcal{B}_+$ at $x=x_*$ and phase condition $p_+$, does not possess a kernel in $H^4_\eta$ for any $x_*\in[0,2\pi/k_x^+)$. We also require that $\mathcal{L}_-$, posed on $(x_*,\infty)\times (0,2\pi)$, equipped with boundary conditions $\mathcal{B}_-$ and phase condition $p_-$, does not possess a kernel in $H^4_\eta$ for any $x_*\in[0,2\pi/k_x^-)$.
\end{Hypothesis}
These assumptions, Hypotheses \ref{h:tgb} and \ref{h:bc}, put us in a situation analogous to \cite[Proposition 2.11]{morrissey2015}, where bifurcation diagrams in bounded domains were predicted up to exponentially small errors in the domain size $L_x$. Following the strategy of proof there, one should therefore obtain exponential convergence of the solution $(w,k^\pm)$ of the truncated boundary-value problem \eqref{e:shmL} to the actual grain boundary,
\[
\left|w_{L_x}-w_*\right|_{H^4}+\left|k_{x,L_x}^\pm-k_{x,*}^\pm\right|=\mathrm{O}\left(\mathrm{e}^{-\delta L_x}\right).
\]
Note that in the language of \cite{morrissey2015}, grain boundaries are purely wavenumber selecting, yielding trivial strain-displacement relations $k=k_*$, $\varphi\in S^1$, and the additional phase condition imposed here selects one representative from the family of admissible solutions.
Lack of transversality manifests itself in the appearance of boundary layers, that is, $w$ ceases to be uniformly exponentially localized as $L_x\to\infty$. A similar nonlinear statement gives exponential bounds on the truncation error.
In practice, we will choose Dirichlet boundary conditions for $w$ and phase conditions with $\psi_\pm=(u_\mathrm{s}^\pm)'$, effectively eliminating the incorporation of phase shifts and wavenumber corrections into the corrector $w$. Numerical observations described in the paper indicate that this choice does indeed yield transverse boundary conditions. We did notice failure of transversality when choosing Neumann boundary conditions, for acute angles between stripes.
\bibliographystyle{abbrv}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 1,803
|
Мега Мол може да се отнася за:
Мега Мол (София) – мол в София.
Мега Мол (Русе) – мол в Русе.
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 135
|
Blogs for April 11th, 2022
Taste & Travel
Rossiya Airlines to Launch International Flight Program from Sochi
Rossiya Airlines (Aeroflot Group) has launched a large-scale flight program from Sochi International Airport. From April 7, flights will be operated as part of the regular schedule to the cities of Armenia, Egypt, Israel, Turkey
Munich Airport Offers Audio Walk
Munich Airport is now offering a varied audio walk through Terminal 2 to help families with children shorten their waiting time at the airport. The audio walk through the Schengen area offers interesting insights into
First Cruise Ship Set to Return to Canadian Cruising
Holland America Line will be the first cruise line to return to Canadian cruising following a more than two year industrywide pause due to the global COVID-19 pandemic. Koningsdam will call at Victoria, British Columbia,
IHG Statement on Russia
We continue to be deeply saddened by the humanitarian crisis as a result of the war in Ukraine. We have previously announced the suspension of future investments, development activity and new hotel openings in Russia.
RIU Opens Its First Hotel in Western Africa
This year, the RIU chain has set out on a new and exciting adventure into the heart of Africa with the opening of a new 5-star hotel — the Riu Baobab — in Senegal. Travellers
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 1,623
|
Coalville school gets inadequate Ofsted rating as 'some pupils feel unsafe'
Friday, December 3rd, 2021 12:05pm
The school was given a good rating in its last inspection.
The Castle Rock School in Coalville was visited by Ofsted in October and was found to be inadequate in all five areas.
The report said that the "pupils are not getting a good deal at this school. Too many are affected by the poor behaviour of others. Lessons are frequently disrupted. Behaviour on corridors and in external areas can be unruly".
It continues to say that "some pupils experience derogatory name-calling".
Julia Patrick, Executive Headteacher at the Apollo Partnership Trust which the school is part of, said: "We are working very hard to address the issues raised in the report.
"Our school is at the heart of our community and we want to assure parents, pupils, staff and residents that while the inspection report is disappointing, we are committed to driving forward improvements, making the changes needed and providing the best learning environment for our students.
"We recognise there have been considerable changes at the school, which have taken place during the pandemic, but accept that work needs to be done
"We have already started to put measures in place to address punctuality and behaviour, including a new behaviour policy, and hopefully parents and pupils are already seeing a difference."
Ofsted recognised that the school had been through a period of significant turbulence and that the "safeguarding leaders work well together and take appropriate action in response to child protection concerns. The most vulnerable pupils are well supported. Staff raise concerns quickly and leaders respond appropriately."
Brand new veteran wellbeing hub to be set up after successful bid for £60,000
The project will provide a safe space for veterans to get advice, information as well as activities.
Body found in search for missing Loughborough man
He was last seen on Sunday morning and the police were extremely concerned for his welfare.
Military called in to support East Midlands Ambulance Service
There is continued demand for ambulances and a number of colleagues are unwell or self-isolating.
Could you help make Leicestershire greener by becoming a tree warden?
There's a need for volunteers in Hinckley and villages such as East Goscote in Charnwood.
Victims of romance fraud are losing thousands of pounds as they fall in love with their scammer
Leicestershire Police are warning people about the scam.
UK's largest 'Sea Dragon' discovered at Rutland Water Nature Reserve
This is a huge palaeontological discovery.
Meet The Apprentice candidate who studied at Loughborough University
She started her sustainable water bottle business in 2020.
Chilean Flamingo chick born at Twycross Zoo for the first time in almost 30 years
The zoo announced its birth on National Bird Day.
Available on the Amazon Appstore
While You're Still Young
Sophie Ellis Bextor
Kungs & Cookin' On 3 Burners
Sam Feldt & Rani
On Air Now and Next
Up late or up early, we've got Just Great Music interruption-free.
The Mark Rowley Breakfast Show
With The Early Bird Quiz before 7, birthdays after 8 and the Test of Time after 9.
Follow @fosse107
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,794
|
{"url":"https:\/\/pure.mpg.de\/pubman\/faces\/ViewItemOverviewPage.jsp?itemId=item_1833358","text":"English\n\n# Item\n\nITEM ACTIONSEXPORT\n\nReleased\n\nReport\n\n#### Exact ground states of Ising spin classes: new experimental results with a branch and cut algorithm\n\n##### MPS-Authors\n\/persons\/resource\/persons45092\n\nMutzel,\u00a0 Petra\nAlgorithms and Complexity, MPI for Informatics, Max Planck Society;\n\n##### External Resource\nNo external resources are shared\n##### Fulltext (public)\n\nMPI-I-95-1-004.pdf\n(Any fulltext), 130KB\n\n##### Supplementary Material (public)\nThere is no public supplementary material available\n##### Citation\n\nDiehl, M., De Simone, C., J\u00fcnger, M., Mutzel, P., Reinelt, G., & Rinaldi, G.(1995). Exact ground states of Ising spin classes: new experimental results with a branch and cut algorithm (MPI-I-1995-1-004). Saarbr\u00fccken: Max-Planck-Institut f\u00fcr Informatik.\n\nCite as: http:\/\/hdl.handle.net\/11858\/00-001M-0000-0014-A765-7\n##### Abstract\nIn this paper we study 2-dimensional Ising spin glasses on a grid with nearest neighbor and periodic boundary interactions, based on a Gaussian bond distribution, and an exterior magnetic field. We show how using a technique called branch and cut, the exact ground states of grids of sizes up to $100\\times 100$ can be determined in a moderate amount of computation time, and we report on extensive computational tests. With our method we produce results based on more than $20\\,000$ experiments on the properties of spin glasses whose errors depend only on the assumptions on the model and not on the computational process. This feature is a clear advantage of the method over other more popular ways to compute the ground state, like Monte Carlo simulation including simulated annealing, evolutionary, and genetic algorithms, that provide only approximate ground states with a degree of accuracy that cannot be determined a priori. 
Our ground state energy estimation at zero field is~$-1.317$.","date":"2021-08-05 21:17:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5943408012390137, \"perplexity\": 1695.8588756286294}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046157039.99\/warc\/CC-MAIN-20210805193327-20210805223327-00425.warc.gz\"}"}
| null | null |
Enjoying summer, remembering (and still recovering from) winter
July 1, 2015 · CHS Buzz · Administator CHS
By Kevin Hughes, Director of Administration at the CHS
Summer is upon us and warm weather beckons us outdoors. In many respects, February is a distant memory.
February 2015 in Hartford was one for the record books. With an average daily temperature of 16 degrees, it eclipsed the old record of 16.5 degrees set back in 1934. The historical average is a balmy 28 degrees, so it was a stunning 43% colder than the average February as measured since 1904!
Temperature extremes clearly present unusual problems we all have to deal with. Our roads took a beating and our homes and business infrastructure did as well. Here at the Connecticut Historical Society, our challenge was to stay on top of the hazards that these conditions create. Frost heaving and ice damming were constant concerns. This photo shows the top of an 18-foot ice formation that developed – and hung around for what seemed like forever – alongside our book stack wing:
On a warm day in mid-March, it finally tumbled down. Thankfully it didn't hit anyone!
Weathering an extreme winter similar to what just occurred requires constant maintenance of our building's exterior components. It actually starts the summer before with cyclical upkeep tasks such as slate roof repair, gutter and downspout cleaning, and masonry re-pointing. Last summer, we did major restoration work on our seven chimneys that were showing signs of decay after decades of wear and tear. We also recently installed de-icing cables on the front porte-cochere to prevent ice damming and water seepage into the house itself.
This summer season — with a wary eye towards next winter — we will again be doing projects to minimize winter's harshness. At the top of the list we'll replace the 45-year- old roof on our book stack wing. Shortly thereafter, we will resurface our parking lots and driveways. Winter has not been kind to them, causing numerous patches of deterioration and heaving (in addition to the curbing that gets beat up by the snow plows). In fact, the winter season of 2013-14 was actually colder than what we just endured!
Here's hoping for a few 'average' winters in the coming years!
Kevin Hughes is the Director of Administration for the Connecticut Historical Society, a position he has held since 1998. He holds a BA from Assumption College and an MPA from the University of Hartford. He enjoys running, golf, genealogy, and spending time with his family.
About the Connecticut Historical Society
A private, nonprofit, educational organization established in 1825, the Connecticut Historical Society is the state's official historical society and one of the oldest in the nation. Located at 1 Elizabeth Street in Hartford, the CHS houses a museum, library, and the Edgar F. Waterman Research Center that are open to the public and funded by private contributions. The CHS's collection includes more than 4 million manuscripts, graphics, books, artifacts, and other historical materials accessible at our campus and on loan at other organizations.
The CHS collection, programs and exhibits help Connecticut residents connect with each other, have conversations that shape our communities, and make informed decisions based on our past and present.
Like the CHS on Facebook (facebook.com/CTHistoricalSociety) Follow the CHS on Twitter (@CTHistorical)
Tags: CHS, maintenance, summer, winter
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 1,768
|
20 LISTSIt's Always SunnyLists about the FX/FXX cult comedy series about five incredibly selfish idiots.
The Greatest Running Jokes
Fan Theories That Might Be True
The Creepiest Dennis Moments
Charlie's Greatest Inventions
Dennis Is a Serial Killer
Accidentally Gayest Mac Moments
Photos for Fans
Ranking the Best Episode from Each Season
IASIP Tattoos You'll Want, too
Mac's Best T-Shirts
Famous Guest Stars You Forgot About
Frank's Most Ludicrous Schemes
Frank's Most Absurd Moments
Favorite Supporting Characters
The Best Other Shows That Have Cast Members
The Whole Cast: Then and Now
All of Dennis's Big Ideas, Ranked
All Seasons, Ranked
Times the Gang Acted Like Actual Sociopaths
The Most Low-Key Sociopathic Things The Gang Has Done On 'It's Always Sunny'
Crystal Brackett
Updated January 23, 2020 770 votes 93 voters
List RulesVote up the gang's plots and schemes that truly call into question their sense of right and wrong.
There's no arguing that the central characters of It's Always Sunny in Philadelphia are rude, crude, and downright vile. The shenanigans that Charlie, Dee, Dennis, Frank, and Mac get themselves into just prove this fact over and over again. The gang's self-obsessed and sociopathic actions often snowball out of control, wrapping them up in situations that no human would ever deem to be morally correct or socially appropriate.
Fans of the series can ponder all they want about the intentions or motives of the Paddy's Pub-dwelling misfits, devising a slew of It's Always Sunny fan theories, but the truth of the matter is that they're all way too wrapped up in their own heads to care about what anyone else thinks about them.
Mac And Dennis Eat Their Dog, Dennis Jr.
Photo: 20th Television
Episode: "Mac & Dennis Move to the Suburbs" (Season 11, episode 5)
After moving to the suburbs, Mac and Dennis find themselves desperately bored. In an effort to pull themselves back into an active and exciting lifestyle, Dennis buys a dog for Mac, who names it Dennis Jr.
Regardless of the new dog, Dennis and Mac both start to lose their minds. Eventually, Dennis Jr. perishes and Mac is left with the task of burying it. However, that night over dinner, he informs Dennis he has changed his mac and cheese recipe - hinting that, instead of using farmed meat, he used dog meat from Dennis Jr.
Stone-cold sociopathic?
The Gang Repeatedly Siphons Blood From Frank While He's Asleep
Episode: "Frank Retires" (Season 10, episode 9)
When Frank throws in the towel at the bar, he turns over all his shares to the gang. Frank's firstborn child is in line to take over, but Charlie and Dennis argue over which of them fits the bill. There's a dispute over whether or not Frank is really Charlie's father, but he refuses to take a paternity test to prove who is right.
Desperate to compare the DNA, Charlie and Dennis decide to siphon blood from Frank while he's sleeping. Draining him of nearly all of his vital fluids, they take their sample in for testing. The results are far from acceptable, as the results show the DNA of four humans and one animal.
Charlie And Dee Get A Taste For Human Flesh
Episode: "Mac and Dennis: Manhunters" (Season 4, episode 1)
Frank keeps getting his food stolen and has had enough of the gang taking his meals. When Charlie and Dee decide to chow down on some of Frank's steak, he tells them they have just consumed human meat. Not wanting to believe it was actually from a human, Charlie and Dee's insatiable hunger for the meat lures them out to every corner of Philadelphia to find a comparable taste.
Giving in to their cravings, they settle on the fact that they have eaten human meat and are now cannibals. They attempt to obtain human flesh at the morgue, then decide to eat Rickety Cricket. In the end, Frank admits the steak they ate came from a raccoon, and their hunger is probably from being infected with a tapeworm.
Mac And Dee Paint A Baby Brown To Try To Get It Modeling Jobs
Episode: "The Gang Finds a Dumpster Baby" (Season 3, episode 1)
While bickering outside, Mac and Dennis find a baby in a dumpster. Conflicted about what to do with the foundling, Mac and Dee decide to care for it. They become sick of the baby almost instantly and plan on ditching it on the curbside when someone suggests the child could be a model baby.
Seeing the prime opportunity for a get-rich-quick scheme, they take the child to a modeling agency. However, the agent tells them they are only looking for Hispanic babies. Mac and Dee don't let this stop them, and they attempt to bronze the baby up with everything from tanning beds to brown shoe polish. A social worker eventually intervenes.
<!doctype html>
<!--
Minimal Mistakes Jekyll Theme 4.5.1 by Michael Rose
Copyright 2017 Michael Rose - mademistakes.com | @mmistakes
Free for personal and commercial use under the MIT license
https://github.com/mmistakes/minimal-mistakes/blob/master/LICENSE.txt
-->
<html lang="en" class="no-js">
<head>
<meta charset="utf-8">
<!-- begin SEO -->
<title>IPOP</title>
<meta name="description" content="IP-Over-P2P, Open-source User-centric Software Virtual Network">
<meta name="author" content="">
<meta property="og:locale" content="en">
<meta property="og:site_name" content="IPOP">
<meta property="og:title" content="IPOP">
<script type="application/ld+json">
{
"@context" : "http://schema.org",
"@type" : "Person",
"name" : "IPOP",
"url" : null,
"sameAs" : null
}
</script>
<!-- end SEO -->
<link href="/feed.xml" type="application/atom+xml" rel="alternate" title="IPOP Feed">
<!-- http://t.co/dKP3o1e -->
<meta name="HandheldFriendly" content="True">
<meta name="MobileOptimized" content="320">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script>
document.documentElement.className = document.documentElement.className.replace(/\bno-js\b/g, '') + ' js ';
</script>
<!-- For all browsers -->
<link rel="stylesheet" href="/assets/css/main.css">
<!--[if lte IE 9]>
<style>
/* old IE unsupported flexbox fixes */
.greedy-nav .site-title {
padding-right: 3em;
}
.greedy-nav button {
position: absolute;
top: 0;
right: 0;
height: 100%;
}
</style>
<![endif]-->
<meta http-equiv="cleartype" content="on">
<!-- start custom head snippets -->
<!-- insert favicons. use http://realfavicongenerator.net/ -->
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="manifest" href="/manifest.json">
<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">
<meta name="theme-color" content="#ffffff">
<!-- end custom head snippets -->
</head>
<body class="layout--wiki">
<!--[if lt IE 9]>
<div class="notice--danger align-center" style="margin: 0;">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</div>
<![endif]-->
<div class="masthead">
<div class="masthead__inner-wrap">
<div class="masthead__menu">
<nav id="site-nav" class="greedy-nav">
<a class="site-title" href="/">IPOP</a>
<ul class="visible-links">
<li class="masthead__menu-item"><a href="/about/">About</a></li>
<li class="masthead__menu-item"><a href="/wiki/Quick-Start">Quick Start</a></li>
<li class="masthead__menu-item"><a href="/download">Download</a></li>
<li class="masthead__menu-item"><a href="/learn/">Learn</a></li>
<li class="masthead__menu-item"><a href="/wiki/">Wiki</a></li>
<li class="masthead__menu-item"><a href="/whitepaper/">White Paper</a></li>
<li class="masthead__menu-item"><a href="/contribute/">Contribute</a></li>
<li class="masthead__menu-item"><a href="/contact/">Contact</a></li>
</ul>
<button><div class="navicon"></div></button>
<ul class="hidden-links hidden"></ul>
</nav>
</div>
</div>
</div>
<div id="main" role="main">
<article class="page" itemscope itemtype="http://schema.org/CreativeWork">
<div class="page__inner-wrap">
<section class="page__content" itemprop="text">
<aside class="sidebar__right">
<nav class="toc">
<header><h4 class="nav__title"><i class="fa fa-file-text"></i> On This Page</h4></header>
<ul class="section-nav">
<li class="toc-entry toc-h1"><a href="#build-ipop-for-centos">Build IPOP for CentOS</a>
<ul>
<li class="toc-entry toc-h2"><a href="#download-and-install-dependencies">Download and Install Dependencies</a></li>
<li class="toc-entry toc-h2"><a href="#build-ipop">Build IPOP</a></li>
<li class="toc-entry toc-h2"><a href="#copy-configuration-file">Copy Configuration File</a></li>
</ul>
</li>
</ul>
</nav>
</aside>
<h1 id="build-ipop-for-centos">Build IPOP for CentOS</h1>
<p><strong>Warning: This document may be out of date.</strong></p>
<table>
<thead>
<tr>
<th> </th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Tested on</strong></td>
<td>CentOS 7 x64</td>
</tr>
<tr>
<td><strong>Time</strong></td>
<td>~ 10 Minutes</td>
</tr>
<tr>
<td><strong>Question(s)</strong></td>
<td>- How to build IPOP?</td>
</tr>
<tr>
<td><strong>Objective(s)</strong></td>
<td>- Build IPOP Source Code</td>
</tr>
</tbody>
</table>
<h2 id="download-and-install-dependencies">Download and Install Dependencies</h2>
<div class="highlighter-rouge"><pre class="highlight"><code>su
yum -y update
yum install -y centos-release-scl-rh
yum install -y devtoolset-3-gcc-c++ rh-python35
yum install -y git nss-devel openssl-devel
scl enable devtoolset-3 bash
scl enable rh-python35 bash
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
python get-pip.py
yum install -y python-devel.x86_64
pip install sleekxmpp psutil pystun
</code></pre>
</div>
<h2 id="build-ipop">Build IPOP</h2>
<p>Run the following commands as a regular (non-root) user:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>scl enable rh-python35 bash
scl enable devtoolset-3 bash
mkdir -p ~/workspace/ipop-project ~/workspace/ipop-vpn/config
cd ~/workspace/ipop-project/
git clone https://github.com/ipop-project/Tincan
git clone https://github.com/ipop-project/Controllers
cd Tincan
cd trunk/build/
make
cp -f ../out/release/x64/ipop-tincan ../../../../ipop-vpn/
cd ../../../Controllers
cp -rf ./controller/ ../../ipop-vpn/
</code></pre>
</div>
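After the copy steps above, you can sanity-check that the build artifacts landed where expected. This is only a sketch that assumes the exact paths used in this guide; adjust the paths if you built elsewhere:

```shell
# Quick sanity check that the build steps above produced the expected artifacts.
check_artifact() {
  # $1 = test flag (-x executable file, -d directory), $2 = path to check
  if test "$1" "$2"; then
    echo "OK: $2"
  else
    echo "MISSING: $2"
  fi
}

check_artifact -x "$HOME/workspace/ipop-vpn/ipop-tincan"
check_artifact -d "$HOME/workspace/ipop-vpn/controller"
```

If either line prints MISSING, re-run the corresponding build or copy step.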
<h2 id="copy-configuration-file">Copy Configuration File</h2>
<p>You will need a valid configuration file (ipop-config.json) to run IPOP. Go to the directory you have your config file and copy the file to the <code class="highlighter-rouge">config</code> directory:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>cp ipop-config.json ~/workspace/ipop-vpn/config
</code></pre>
</div>
</section>
</div>
</article>
<div class="sidebar">
<nav class="nav__list">
<div class="wiki-top-links">
<a href="../wiki" class="display-unset">Wiki Home</a> / <a href="../wikipages" class="display-unset">Wiki Pages</a>
</div>
<ul>
<li><strong>Deploying IPOP-VPN</strong>
<ul>
<li><a href="Quick-Start">Quick Start</a></li>
<li><a href="Use-IPOP,-Intro">Installation</a></li>
<li><a href="Configuration">Understanding the IPOP Configuration</a></li>
</ul>
</li>
<li><strong>Development Guide</strong>
<ul>
<li><a href="Development-Workflow">Development Workflow</a></li>
<li><a href="Coding-Guidelines">Coding Guidelines</a></li>
<li><a href="Build-IPOP,-Intro">Building the Code</a></li>
<li><a href="IPOP-Scale-test-Walkthrough">Testing Your Build</a></li>
<li><a href="Controller-Framework">Controller Framework</a></li>
<li><a href="Controller-API">Controller API</a></li>
<li><a href="Build-WebRTC-Libraries,-Intro">Building WebRTC Libraries</a></li>
</ul>
</li>
<li><strong>General Documentation</strong>
<ul>
<li><a href="FAQs">FAQs</a></li>
<li><a href="Troubleshooting">Troubleshooting</a></li>
<li><a href="Planning-Your-Network">Planning Your Network</a></li>
<li><a href="Coding-Challenges">Coding Challenges</a></li>
<li><a href="Known-Issues">Known Issues</a></li>
<li><a href="Getting-Help">Getting Help</a></li>
<li><a href="How-to-Contribute">How to Contribute</a></li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
<div class="page__footer">
<footer>
<!-- start custom footer snippets -->
<!-- end custom footer snippets -->
<!-- <div class="page__footer-follow">
<ul class="social-icons">
<li><strong>Follow:</strong></li>
-->
<!-- <li><a href="/feed.xml"><i class="fa fa-fw fa-rss-square" aria-hidden="true"></i> Feed</a></li> -->
<!-- </ul>
</div> -->
<div class="page__footer-copyright footer-address">
<div class="float-left">
<img src="/assets/images/uf_small.png" class="padding-bottom-right" /><img src="/assets/images/nsf_small.png" class="padding-bottom-right" />
</div>
<i class="fa fa-address-card-o" aria-hidden="true"></i>
<a href="http://www.acis.ufl.edu" rel="nofollow" target="_blank">ACIS Lab</a>, P.O. Box 116200, 339 Larsen Hall, Gainesville, FL 32611-6200; 352.392.4964<br />
<a href="http://www.ece.ufl.edu/" rel="nofollow" target="_blank">Department of Electrical &amp; Computer Engineering</a><br />
<a href="http://www.eng.ufl.edu/" rel="nofollow" target="_blank">College of Engineering</a>, <a href="http://www.ufl.edu/" rel="nofollow" target="_blank">University of Florida</a>
</div>
<div class="page__footer-copyright footer-links">
<div>
<a href="/contact">Contact</a> | <a href="/contact/#mailing-list-subscription">Mailing List</a> | <a href="https://ipopvpn.slack.com/">Slack Channel</a> | <a href="/sitemap">Sitemap</a><br />
<div>Powered by <a href="http://jekyllrb.com" rel="nofollow" target="_blank">Jekyll</a> & <a href="https://mademistakes.com/work/minimal-mistakes-jekyll-theme/" rel="nofollow" target="_blank">Minimal Mistakes</a><br />
© 2019 IPOP - <a href="http://www.acis.ufl.edu" rel="nofollow" target="_blank">ACIS Lab</a>
</div>
</div>
</div>
<div class="page__footer-copyright footer-sponsor clear-both">This material is based upon work supported in part by the National Science Foundation under Grants No. 1234983, 1339737 and 1527415.</div>
</footer>
</div>
<script src="/assets/js/main.min.js"></script>
</body>
</html>
We all know that trends that once ruled the shelves go away and then come back again, with every intention of staying in the fashion industry. What are we talking about? Yes, we are talking about the sequins trend that has taken over B-town, Vogue, Harper's Bazaar, and absolutely every fashion-savvy magazine. We have also seen plenty of B-town and high-street stars wearing sequins this season: fashion queens like Kim Kardashian, Kylie Jenner, and Gigi Hadid have been spotted sporting the trend many times. Sequins are all about being blingy and eye-catching, and we all love this trend! Sequins are about reflected light, so they are best suited to the night, be it a clubbing night or a dinner date: perfect for both occasions. It's now time to ditch the round-the-clock wardrobe conventions. Wearing sequins actually means we have made an effort to dress up! Don't you all agree with me?
# What is the purpose of the op-amp feedback circuit?

This is an existing buck regulator circuit I have been reviewing. (I have not mentioned the part number due to confidentiality.)

Following is the screenshot of the feedback resistor design.

The reference voltage of the buck regulator is 1 V and the output is designed for 3.3 V. I have reviewed and verified the feedback divider resistor values (51 kΩ, 22.1 kΩ).

The op-amp feedback path and its function are the confusing part for me. My understanding is this: the current feedback across the 100 Ω resistor is taken to the amplifier input, and the amplifier's output is connected, through a buffer, in parallel with the feedback path (22.1 kΩ).

Can anyone explain the purpose of the op-amp feedback circuit, and what the possible voltages at the buffer (op-amp) output could be?

- My first guess is that it's a current limiter. The right op-amp measures the voltage across a 100 Ω resistor, amplifies it, and then pulls up the FB pin if it is too large. A large FB will make the regulator think that the output voltage is too high, causing it to source less current. (Nov 7, 2018)
- @SvenB, can you please give a reference I can read to understand how the buck will source less current? (Nov 7, 2018)
- I'd refer you to any introduction to buck regulators if you want to know how they can source less current. YouTube has videos explaining the principles. (Nov 7, 2018)
- A 100 Ω current-sense resistor? Are you sure that's not 1 Ω or 0.1 Ω? It should be. It looks like an attempt to regulate both voltage and current. (Nov 17, 2018)
- @Misunderstood, good catch. There was initially a schematic error; it is actually 0.1 Ω. (Nov 19, 2018)
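As a quick numeric sanity check of the divider values discussed above (a sketch based on the standard feedback-divider relation, not taken from the original schematic), the stated resistor values do reproduce the 3.3 V target:

```python
# Feedback divider sanity check for the buck regulator described above.
# A regulator servos its FB pin to V_REF, so with a resistive divider:
#   V_OUT = V_REF * (1 + R_TOP / R_BOTTOM)

V_REF = 1.0          # regulator reference voltage, volts
R_TOP = 51_000.0     # upper feedback resistor, ohms (51 k)
R_BOTTOM = 22_100.0  # lower feedback resistor, ohms (22.1 k)

def divider_output(v_ref: float, r_top: float, r_bottom: float) -> float:
    """Nominal output voltage set by the feedback divider."""
    return v_ref * (1.0 + r_top / r_bottom)

v_out = divider_output(V_REF, R_TOP, R_BOTTOM)
print(f"Nominal V_OUT = {v_out:.2f} V")  # about 3.31 V, i.e. the 3.3 V design target
```

This also shows why pulling the FB node up (as the suspected current limiter does) reduces the output: the regulator sees FB above V_REF and throttles back.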
Mappia is a genus of plants belonging to the family Icacinaceae, native to Central America. The genus was described by Jacq. and published in Plantarum Rariorum Horti Caesarei Schoenbrunnensis 1: 22 in 1797. Its type species is Mappia racemosa Jacq.
It comprises 5 species distributed across Mexico, Central America, and the Greater Antilles. The Asian species formerly placed in this genus are now assigned to the genus Nothapodytes.
Distribution
It is rare, found in pine-oak forests from Mexico to Nicaragua.
Selected species
Mappia longipes Lundell
Mappia mexicana B.L.Rob. & Greenm.
Mappia multiflora Lundell
References
Bibliography
Gutiérrez Baez, C. 1994. Icacinaceae. Fl. Veracruz 80: 1–16.
Howard, R. A. 1976 [1977]. Flora of Panama, Part VI. Family 106. Icacinaceae. Ann. Missouri Bot. Gard. 63(3): 399–417.
Howard, R. A. 1942. Studies of the Icacinaceae, II. Humirianthera, Leretia, Mappia, and Nothapodytes, valid genera of the Icacineae. J. Arnold Arbor. 23(1): 55–78.
Howard, R. A. 2001. Icacinaceae. In Flora de Nicaragua. Monogr. Syst. Bot. Missouri Bot. Gard. 85(2): 1156–1157.
Standley, P. C. & J. A. Steyermark 1949. Icacinaceae. In Standley, P.C. & Steyermark, J.A. (Eds), Flora of Guatemala - Part VI. Fieldiana, Bot. 24(6): 225–229.
Stevens, W. D., C. Ulloa U., A. Pool & O. M. Montiel 2001. Flora de Nicaragua. Monogr. Syst. Bot. Missouri Bot. Gard. 85: i–xlii, 1–2666.
External links
Icacinaceae at APWeb
Icacinaceae
Finalists were announced today in The 13th Annual American Business Awards. Lists of Finalists by category have been published at http://www.StevieAwards.com/ABA.
This year's Gold, Silver, and Bronze Stevie® Award winners will be announced at two banquets. The first will take place on June 22 at the Fairmont Millennium Park Hotel in Chicago, Illinois; and the second, with awards in the new product and technology-related categories, will be held on September 11 at the Julia Morgan Ballroom at The Merchants Exchange Building in San Francisco, California. More than 800 executives are expected to attend the events. The ceremonies will be broadcast nationwide by the BizTalk Radio Network.
Among the organizations with multiple Finalists are Accenture, Actiontec Electronics, AT&T, BMO Capital Markets, C-4 Analytics, Callidus Software, CAN Capital, Capital One Investing, CD2 Learning, Cigna, Cisco Systems, The Control Group, CyraCom International, Dell Inc., DEVENEY, Engility, Farbman Group, Fareportal, George P. Johnson, Guitar Center, Hewlett Packard, Information Builders, Isagenix International, iTalent Corporation, Jasper, Jeunesse Global, Level 3 Communications, LIMU, Makovsky & Co., Marriott Vacations Worldwide, Merkle, MWW, The Navicor Group, ORBCOMM, Pacific Life, Quality Systems, Inc., Relayware, Roth Staffing Companies, SoftPro, TangoMe, Tata Consultancy Services, Thomson Reuters, Toshiba America Business Solutions, TriNet Group, Inc., U.S. Green Building Council, United Credit Consultants, USANA Health Sciences, Vectra Networks, Virtusa Corporation, Walker Sands Communications, and Workplace Answers.
Finalists were chosen by scores of business professionals nationwide during first-round judging in April and May. Members of specialized final judging committees will determine Gold, Silver, and Bronze Stevie Award placements from among Finalists in judging that will begin later this month.